Think your conversations with ChatGPT or Claude are just between you and the machine? Think again.
A groundbreaking ruling out of New York just changed the game for anyone using AI tools to discuss legal matters. In United States v. Bradley Heppner, No. 25 Cr. 503 (S.D.N.Y.), Judge Rakoff of the Southern District of New York said out loud what we’ve all been thinking: your chats with public AI platforms aren’t protected by attorney-client privilege—and they can absolutely be used against you.
What Happened?
Bradley Heppner found himself in hot water, facing federal securities fraud charges. Like many of us, he turned to AI for help, specifically Anthropic’s Claude. He used it to think through his legal exposure, explore defense strategies, and work through arguments for his case.
Bad move.
When the FBI searched his home, they seized 31 documents capturing his conversations with the AI. Heppner tried to claim privilege, arguing he’d used information from his lawyers and intended to share the AI outputs with counsel.
The court wasn’t buying it.
Why Privilege Failed
Judge Rakoff broke it down simply. Attorney-client privilege requires three things: a communication between attorney and client, an expectation of confidentiality, and the purpose of getting legal advice.
Heppner’s AI chats didn’t make the cut.
Claude isn’t a lawyer. This might seem obvious, but it matters. Privilege protects that special “trusting human relationship” with a licensed professional who owes you fiduciary duties. An AI chatbot? It doesn’t qualify.
There’s no confidentiality with public AI. Here’s the kicker: Anthropic’s privacy policy explicitly states they collect your inputs, use them for training, and can share data with third parties—including government agencies, no subpoena required. When you click “agree,” you’re essentially consenting to disclosure. Any privilege you might have had? Waived the moment you hit enter.
It wasn’t really about getting legal advice. Heppner claimed he was preparing to talk to his lawyers, but his counsel never told him to use Claude. That distinction matters. If your attorney directs you to use an AI tool as part of their work, there’s an argument it functions like an agent. Acting on your own with no direction from a lawyer? You’re just having a chat with a bot.
The work product doctrine didn’t save him either. Since Heppner used Claude independently and the conversations didn’t reveal his lawyers’ actual strategy, the protection didn’t apply.
What This Means for Lawyers and Clients
AI tools are incredibly useful. They can help you brainstorm, organize your thoughts, and even draft documents. But this ruling makes clear that convenience comes with risk.
If you’re using public AI platforms to discuss anything sensitive—legal strategy, business disputes, regulatory concerns—you need to assume it’s discoverable. That “private” conversation could end up as Exhibit A.
Here are the practical takeaways:
Read the fine print. Before uploading sensitive information to any AI platform, understand its data policies. If the provider can train on your inputs or share them with third parties, confidentiality is already compromised.
Get lawyer buy-in. If your attorney specifically directs you to use an AI tool as part of their work, you have a stronger argument for protection. Going rogue means going unprotected.
Treat public AI like public space. Whatever you type could potentially be seen by others. Draft accordingly.
This case is a first, but it won’t be the last. As AI becomes ubiquitous in legal work, courts will continue wrestling with these questions. For now, the message is clear: public AI tools are powerful, but they’re not your attorney, and they’re definitely not your confidant.
Use them wisely.