This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials or for permission to post on a website.

Evidence,
Ethics/Professional Responsibility

Apr. 3, 2026

The illusion of the confidant: When the chat window feels like privilege but isn't

Bradley Heppner typed his defense strategy into a chat window. Thirty-one documents later, the prosecution had them. A federal decision reframes what lawyers must tell their clients, and what legislatures must decide.

James Mixon

Managing Attorney
California Court of Appeal, Second Appellate District



I laughed the other day at something Claude wrote. Not a polite acknowledgment. An actual laugh. And then I sat with that for a moment, because what I had just done was respond to software as if it had a sense of humor. As if there were someone on the other side.

There isn't. The myth has a name: Echo. Echo could only repeat back what she heard: patient, attentive, apparently responsive, with nothing behind the voice. Narcissus mistook the reflection for a relationship. He was not foolish. He was deceived by nature.

The conversational interface of modern AI is engineered to produce exactly this sensation. The feeling of being heard by something that will not judge you, will not repeat what you say, and exists solely to help. Call it the simulacrum of confidence: an experience that has the texture of a trusted relationship without any of its substance. No loyalty. No fiduciary duty. No professional obligation. And, as courts are now beginning to clarify, no confidentiality.

The ancient problem

Seneca understood the danger of misplaced confidence long before there were chatbots to misplace it in. He warned against the impulse to confide before judging, to share secrets with someone whose trustworthiness had not been established, whose obligations ran elsewhere, whose apparent intimacy was not the same thing as actual loyalty. Trust is not a feeling. It is a conclusion reached after examination. To reverse the order is to hand your secrets to someone who may not deserve them.

What Seneca could not have anticipated is a technology that makes the reversal of that sequence almost inevitable. There is no handshake, no biography, no moment of assessment. There is only a blinking cursor and the implicit invitation to begin.

You lean in. That is the tell. The body decides before the mind does. Narcissus did not choose to fall. He leaned toward the water.

What the law built, and what AI undermines

The attorney-client privilege protects communications made within relationships defined by trust, loyalty and professional obligation. Relationships where the person receiving your secrets is bound by fiduciary duty, answerable to a disciplinary body, and legally prohibited from using what you tell them against you. The privilege exists because genuine confidence requires genuine accountability on the other side. Echo has none. Neither does Claude.

The cost of mistaking engineering for intimacy recently became clear in federal court. Bradley Heppner, a criminal defendant, did what millions of people do every day. He opened a chat window and began to think out loud. He shared facts and theories. He generated 31 documents outlining his defense strategy.

Judge Jed S. Rakoff of the Southern District of New York held that Heppner's communications with Claude were not privileged in United States v. Heppner, No. 25-cr-00503-JSR. Rakoff wrote that all recognized privileges require "a trusting human relationship" with "a licensed professional who owes fiduciary duties and is subject to discipline." No such relationship exists, or can exist, between a user and an AI platform. Claude is not a lawyer. It owes no duty of loyalty. It answers to no bar. And its terms of service, which Heppner, like most users, presumably accepted without reading, explicitly permit Anthropic to use his inputs for training and to disclose them to governmental regulatory authorities.

The simulacrum and the real

This creates a structural risk that goes beyond any individual case. Heppner did not behave irrationally. He behaved exactly as the technology invited him to behave. The legal framework governing privilege was built around a simple assumption: that the experience of confiding in someone and the legal protection of that confidence would travel together. If something felt like a private conversation, it probably was one. Or, if it wasn't, the failure of confidentiality would be obvious. You would know you were talking to a stranger in a bar rather than your lawyer.

AI breaks that assumption entirely. The chat window feels more private than a bar conversation. It feels, in some respects, more private than a conversation with a lawyer: no judgment, no billing clock, no awkward silences.

A court divided, a question unresolved

Courts are not yet speaking with one voice. On the same day Rakoff heard argument in Heppner, the Eastern District of Michigan reached a different conclusion on the work product question in Warner v. Gilbarco, No. 2:24-cv-12333, holding that using ChatGPT to prepare litigation materials does not waive work product protection. Its reasoning: "ChatGPT and other generative AI programs are tools, not persons," and waiver requires disclosure to an adversary, not to software. As the court put it, "no cited case orders the production of what defendants seek here: a litigant's internal mental impressions reformatted through software."

Rakoff sees Claude as a third party: an entity with its own terms of service, its own data practices, its own relationship with the government. The Michigan court sees ChatGPT as a tool, no more a third party than a word processor. Both characterizations are defensible. Neither is complete. If AI is a tool, work product survives. If AI is a third party, privilege is waived. That distinction will be litigated in every courtroom where a client typed before they thought.

What Heppner establishes clearly is the attorney-client privilege analysis: a consumer AI platform whose terms of service permit training on user data and disclosure to government authorities is not a confidential channel. Routing privileged communications through such a platform may waive the privilege that would otherwise attach to them within the underlying attorney-client relationship.

What changes, and what doesn't

The standard response is a compliance checklist: use tools with contractual confidentiality protections, tell clients the chat window is not the lawyer's office. That advice is correct. It is also insufficient.

Narcissus could not be warned away from the water. The pull operates before deliberation does, and warning people away from it is not a policy. It is a wish. If the problem is the design, the burden belongs on the designers, not on the person already leaning toward the water.

The first problem is disclosure, not in the terms of service, but at the point of temptation. The ToS waiver is a legal fiction, and we all know it. A more honest framework would require AI platforms to provide confidentiality warnings contextually, when a user's questions suggest legal matter or privileged communication. The technology that detects emotional tone and adjusts its responses can detect when someone is typing about their criminal defense. The objection that contextual warnings would disrupt the user experience is true. It is also the point.

The second problem is a gap in the duty to counsel clients about their own AI use. Competence under Rule 1.1 may require advising clients not just about their legal options, but about the confidentiality consequences of how they prepare to discuss them. The chat window is not the office. Clients need to hear that before they open it.

The third problem belongs to legislatures, not courts. The Heppner/Warner split reflects a genuine category failure. What is needed is a regulatory framework that treats AI platforms as a distinct category, one that can carry confidentiality protection if the platform meets specific contractual and technical standards, and cannot if it does not. Enterprise tools with zero-retention agreements would qualify. Consumer platforms that reserve the right to train on your inputs would not. The distinction already exists in practice. It needs legal recognition before the next Heppner types the first word.

Seneca's counsel was: before you confide, examine the relationship. Ask whether the feeling of trust corresponds to anything real. The terms of service are the answer to that question. They are not long on loyalty.

The interface will never ask that question for you. That is not an oversight. It is the point.

Coda

Bradley Heppner is scheduled for trial on April 6, 2026. The 31 documents he generated in conversation with Claude include his defense theories. They are now available to the prosecution. The chat window felt private. The contract he accepted before typing the first word was not.

First judge, then confide. The sequence has always mattered. AI has simply made the cost of reversing it higher than it has ever been.

I still laugh. I just read the terms of service first.

Disclaimer: The views expressed in this article are solely those of the author in their personal capacity and do not reflect the official position of the California Court of Appeal, Second District, or the Judicial Branch of California. This article is intended solely to contribute to scholarly dialogue and does not represent judicial policy, administrative guidance, or any indication of how the author would approach these issues in any legal proceeding.


