Litigation Sentinel
Litigation Tech · Case Watch

Two Courts, Same Day, Opposite Answers: Your AI Conversations May Already Be Discoverable

On February 10, 2026, a federal judge in Manhattan ruled that AI-generated documents are not privileged. Hours later, a federal judge in Michigan reached the opposite conclusion on work product. The split creates a governance gap most legal departments are not prepared for.

Wes Todd · February 24, 2026 · 9 min read · 1,247 readers this week

A Federal Judge Just Made Your AI Conversations Discoverable

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled from the bench that 31 documents a criminal defendant generated using Anthropic’s Claude were not protected by attorney-client privilege or the work product doctrine. The defendant, Bradley Heppner, had fed information from his defense counsel into the consumer version of Claude, generated reports outlining his legal strategy, and transmitted those documents to his lawyers.

The FBI seized the documents during a search of Heppner’s home. His attorneys claimed privilege. Judge Rakoff rejected the claim on every ground they raised.

This is the first federal ruling to address whether conversations with a publicly accessible AI tool are privileged. Judge Rakoff’s written opinion, issued February 17, characterized it as a nationwide matter of first impression. The answer, at least in the Southern District of New York, is unambiguous: they are not.

AI is not an attorney. It cannot maintain a confidential relationship. And in the eyes of this court, sharing privileged information with a consumer AI tool is the same as sharing it with any other third party.

The reasoning was straightforward. Claude is not an attorney. There is no fiduciary relationship between a user and an AI platform. Anthropic’s privacy policy explicitly states that user inputs may be used for model training and disclosed to third parties, including governmental authorities. There was, as Rakoff put it, simply no reasonable expectation of confidentiality.

On work product, the court was equally direct. Heppner created the documents on his own initiative. His defense team at Quinn Emanuel did not direct him to use AI. Without attorney direction, work product protection does not attach.

The Same Day, a Different Court Reached the Opposite Conclusion

On the same day Judge Rakoff ruled from the bench in Manhattan, Magistrate Judge Anthony P. Patti of the Eastern District of Michigan issued a written ruling in Warner v. Gilbarco, Inc. that went the other direction entirely.

In Warner, the defendants demanded that a pro se plaintiff turn over her ChatGPT conversations, arguing that by using AI she had waived any work product protection. Judge Patti denied the request on two independent grounds.

First, the materials were prepared in anticipation of litigation, which brings them under the protection of Rule 26(b)(3)(A). Second, and more significantly, Judge Patti drew a distinction that Rakoff did not: attorney-client privilege waiver and work product waiver are not the same thing.

Attorney-client privilege can be waived by voluntary disclosure to any third party. Work product waiver requires disclosure to an adversary or in a manner likely to reach an adversary. Judge Patti then held that “ChatGPT and other generative AI programs are tools, not persons.” Sharing materials with a tool does not constitute disclosure to an adversary.

Judge Patti also rejected what he characterized as a fishing expedition — the defendants were seeking the plaintiff’s internal analysis and mental impressions, which are not discoverable as a matter of law.

Two federal courts. The same day. Opposite conclusions. One says AI is a third party that destroys confidentiality. The other says AI is a tool that preserves work product protection. Both applied existing legal frameworks. Neither invented new rules. They just read the same doctrines differently.

The Governance Gap This Creates

I spend most of my time working with General Counsel and CLOs who manage large litigation portfolios. Here is what this split means in practice: right now, today, your legal team is almost certainly using AI in some form. They are using it to draft briefs, analyze contracts, research case law, prepare for depositions, and generate strategy memos. Some of this is happening on enterprise platforms with confidentiality safeguards. Some of it is happening on consumer tools that train on inputs.

Most legal departments have no policy that distinguishes between the two. And after Heppner, the distinction is the difference between a protected communication and a discoverable document.

The question is not whether your team uses AI. The question is whether your governance framework treats a ChatGPT prompt the same as an email to outside counsel. Because after Heppner, opposing counsel will.

Judge Rakoff did leave a door open. In dicta, he noted that the analysis might differ if AI use had been directed by counsel under a Kovel-type arrangement. If counsel directs the client to use an AI tool, the AI might function as a lawyer’s agent. But that requires documentation, direction, and a governance framework that most legal departments do not have.

The ABA’s Task Force on Law and Artificial Intelligence released its final report in December 2025, and its central conclusion is relevant here: AI has moved from experiment to infrastructure for the legal profession, and governance is now a central responsibility. Formal Opinion 512, issued in July 2024, requires lawyers to maintain a reasonable understanding of the AI tools they use. Forty-four percent of law firms still have no formal AI governance policy.

That was a problem before Heppner. Now it is a liability.

What This Means for Corporate Legal Departments

The immediate practical implications break down into three areas.

First, consumer AI tools are a privilege risk. Any AI platform that trains on user inputs or reserves the right to share data with third parties cannot maintain the confidentiality required for attorney-client privilege. After Heppner, feeding case strategy, legal analysis, or client communications into these tools is functionally the same as publishing them. Every legal department needs a clear policy that specifies which AI tools are approved for privileged work and which are not.

Second, enterprise AI tools with proper confidentiality controls are a different story. Commentators analyzing the Heppner decision have noted that Judge Rakoff’s reasoning hinges on Anthropic’s specific privacy policy. An enterprise deployment that contractually prohibits training on inputs, encrypts data in transit and at rest, and maintains SOC 2 compliance should not trigger the same analysis. But “should not” is not “will not.” Until a court rules on this distinction, the safe approach is to have the governance framework in place before the question arises.

Third, work product may be more durable than privilege in the AI context. Judge Patti’s reasoning in Warner suggests that materials prepared in anticipation of litigation maintain work product protection even when generated through AI — because an AI tool is not an adversary. But this only helps if the materials were genuinely prepared in anticipation of litigation. Internal analyses, strategy documents, and case evaluations created before litigation is reasonably anticipated do not qualify. Timing and documentation matter.

The practical takeaway: privilege is fragile when AI is involved. Work product may be more resilient. But neither protection is automatic. Both require governance — policies, approved tool lists, documentation of counsel direction, and clear boundaries between consumer and enterprise AI platforms.

The Pattern I Keep Seeing

There is a pattern I keep coming back to in conversations with legal leadership. The tools arrive before the governance. The technology gets adopted before the policies catch up. And the risk materializes in exactly the gap between the two.

AI in legal departments follows the same trajectory as every other technology shift — from email to cloud storage to collaboration platforms. The tool gets adopted bottom-up. Individual attorneys start using it because it makes them faster. By the time leadership notices, the tool is embedded in workflows across the department. Then something happens — a ruling like Heppner, a discovery dispute, a breach — and everyone realizes the governance framework was never built.

The GCs I work with who have the least exposure to this kind of risk are not the ones who banned AI. That is unrealistic and counterproductive. They are the ones who built the governance layer before it was tested. Approved tool lists. Enterprise-grade platforms with audit trails. Clear policies on what types of information can be processed through which channels. Documentation of counsel direction when AI is used for litigation preparation.

After Heppner, the cost of not having that layer is no longer theoretical. It is discoverable.

What to Do This Week

If you manage a legal department, here are five things you should do in the next seven days.

One: Audit which AI tools your team is using. Not just the ones you approved — the ones they are actually using. Consumer tools, enterprise tools, browser extensions, embedded AI features in existing software. You cannot govern what you cannot see.

Two: Review the privacy policies and terms of service for every AI platform in use. Specifically, check whether the platform trains on user inputs, whether it shares data with third parties, and what its data retention and deletion policies are. If the policy says inputs may be used for training, that tool cannot be used for privileged work after Heppner.

Three: Establish a clear distinction between approved enterprise AI tools and consumer tools. Enterprise tools with contractual confidentiality protections should be the only platforms used for any work involving privileged information, litigation strategy, or client communications.

Four: Document counsel direction when AI is used in litigation preparation. If work product protection is your fallback, you need to show that AI use was at the direction of counsel, not a unilateral decision by the client or a staff member. Build this into your workflow.

Five: Brief your outside counsel. They need to know your AI governance expectations, and you need to know what tools they are using on your matters. If they are running case research through consumer AI tools, that is your exposure too.

The Heppner ruling is not the end of AI in legal practice. It is the beginning of AI governance as a non-negotiable requirement. The departments that build the framework now will operate from a position of strength. The ones that wait will learn the framework’s value from opposing counsel’s discovery request.

We built a short diagnostic that maps where your litigation operations have visibility and where the blind spots are — including governance gaps like this one. It takes about two minutes: /briefing



Published by CaseGlide · Request a Demo
© 2026 CaseGlide, Inc. All rights reserved.