THE HEPPNER PROTECTIVE SHIELD:
Agentic Fidelity, Zero Data Retention, and the Cognitive Protection of Prompts as Opinion Work Product in the Age of Artificial Intelligence
By Roland G. Ottley, Esq., PA-C
Principal Attorney | The Ottley Law Firm, P.C.
Author’s Note on Case Citations: All cases cited in this manuscript are real judicial decisions, not hypotheticals. United States v. Heppner, 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026) (Rakoff, J.), and Warner v. Gilbarco, Inc., No. 2:24-cv-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026) (Patti, M.J.), were both issued in February 2026. Because these decisions are so recent, full citations and secondary commentary are provided in the References section. Tremblay v. OpenAI and Concord Music Group v. Anthropic are likewise real matters, pending or decided. Concord Music Group, Inc. v. Anthropic PBC, originally filed in the M.D. Tenn., was transferred to the N.D. Cal., where the May 2025 work product ruling was issued under docket No. 24-cv-03811-EKL (SVK), 2025 WL 1482734. The manuscript was finalized in March 2026.
Doctrinal Status Legend: This manuscript draws on three categories of legal authority, which are distinguished throughout as follows:
[SETTLED LAW] — Propositions supported by binding Supreme Court or circuit court authority that is well-established and not subject to reasonable dispute. Examples: Hickman v. Taylor, Upjohn, Kovel, FRCP 26(b)(3).
[EMERGING AUTHORITY] — Propositions supported by recent district court decisions, magistrate rulings, or a developing but not yet uniform body of authority. These propositions are persuasively supported but have not yet achieved the status of settled law. Examples: Tremblay v. OpenAI, Warner v. Gilbarco, United States v. Heppner.
[PROPOSED FRAMEWORK] — Analytical constructs and normative arguments advanced by this manuscript as frameworks that courts and bar authorities should adopt, but that have not yet been formally adopted by any court or ethics body. Examples: The Heppner Protective Shield Test, the Agentic Fidelity (AgFi) scoring system, the TOLFPC protocols.
Where a proposition draws on more than one category, the most authoritative category is indicated.
ABSTRACT
The rapid evolution of artificial intelligence from passive response systems into autonomous, agentic platforms has exposed a structural gap in the law. While courts have long developed doctrines governing evidentiary reliability, privilege, and work-product protection, those doctrines were not designed for a world in which legal reasoning is externalized, structured, and executed through machine intermediaries. This paper addresses a consequential and underexamined question at the intersection of attorney professional responsibility and artificial intelligence: whether an attorney’s litigation strategy prompts submitted to an AI research platform—containing no client confidences—are protected as opinion work product under Federal Rule of Civil Procedure 26(b)(3) and the common law doctrine established in Hickman v. Taylor, 329 U.S. 495 (1947).
The answer is strongly affirmative and logically compelled by Hickman v. Taylor and its progeny. Although courts have not yet uniformly addressed this precise question, the existing doctrinal framework — the Hickman mental-impressions categories, the Upjohn independence principle, and the Kovel agency doctrine — collectively supports the conclusion that attorney-engineered AI prompts are opinion work product. Any contrary rule would strain the foundational protection that Hickman established for the attorney’s mental processes, undermining the doctrine’s most vital safeguard precisely as legal practice enters its most cognitively demanding technological era. The work product doctrine protects an attorney’s mental impressions, legal theories, and litigation strategy wholly independently of whether those materials contain attorney-client privileged communications. The absence of client confidential information is legally irrelevant to the existence of work product protection — and courts that have held otherwise have conflated two independent doctrines in a manner that Hickman itself forecloses.
The more operationally significant question — and the one this manuscript answers — is the differential practical discovery exposure created by variation in data retention architectures across AI platforms. A litigation strategy prompt submitted to a platform with full zero data retention (ZDR) leaves no retained record reachable by an adverse party’s subpoena. The same prompt submitted to a consumer-grade AI platform with default retention policies creates a potentially discoverable record that the attorney must then affirmatively defend. This distinction must be stated with precision: exposure is not loss of protection. The work product doctrine applies with equal force to prompts on consumer-tier platforms. What ZDR eliminates is not the doctrine’s applicability but the attorney’s defensive burden — the motion practice, briefing, and in camera review required when a subpoena reaches records that exist.
This manuscript resolves these tensions through a unified framework. It introduces Agentic Fidelity (AgFi) as a measure of system reliability and advances the Heppner Protective Shield (HPS) as a doctrinal structure that governs not only the reliability of AI-assisted legal work, but also the protection of the attorney’s cognition embedded within prompts. It establishes that attorney-generated prompts, when constructed and deployed under the Heppner Protective Shield, are inextricably intertwined with and constitute an extension of the attorney’s legal cognition, and therefore qualify as protected opinion work product.
The manuscript further extends this framework into the domain of healthcare law, demonstrating that the same structural analysis that governs work product protection also governs HIPAA compliance for AI-assisted legal work. Drawing on the HIPAA conduit exception — the same regulatory principle that explains why physicians never executed Business Associate Agreements with AT&T — the manuscript establishes that AI platforms are categorically business associates, not conduits, and that attorneys handling health information must apply the Heppner Protective Shield framework as a predicate HIPAA compliance obligation. The convergence of work product doctrine, professional responsibility, and HIPAA within a single unified framework is the manuscript’s central contribution to the literature on AI-integrated legal practice.
The full manuscript is available as a free PDF via the link below.