Hello to all ~10,000 of you from around the globe. Thank you for reading and sharing!

Human on the Hook Pt. 2: The Shield - What $20 a month can’t buy you

In Part One of this series, we examined a pattern that no one in the legal profession is tracking: sophisticated clients like Daniel Priestley and Robert Castro quietly handling legal matters with AI, pulling profitable work out of the system without ever engaging counsel. Priestley saved tens of thousands of dollars. Castro sold a home in five days. Both used AI to do the work. Both kept a human involved only for the shield.

But what exactly is that shield? And how fragile is it in the age of AI?

Same Week, Opposite Outcomes

The answer came into sharp focus during a single week in February 2026, when two federal courts reached opposite conclusions on the same fundamental question: are your conversations with an AI chatbot protected from discovery?

United States v. Heppner (S.D.N.Y., February 10, 2026). Bradley Heppner, facing securities and wire fraud charges, used Claude to develop defense strategy documents after receiving a grand jury subpoena. He had retained counsel but used the AI on his own initiative, without his attorney’s direction. Judge Jed Rakoff ruled the materials were not protected, neither by attorney-client privilege nor the work product doctrine. The reasoning: the work product doctrine protects a lawyer’s mental impressions and litigation strategy, not a client’s independent research conducted without attorney supervision. Heppner used a public, consumer-grade AI platform. Everything he typed, every strategy, every contingency, every vulnerability he explored, was ordered disclosed.

Warner v. Gilbarco (E.D. Mich., February 10, 2026). Same week. Sohyon Warner, a pro se plaintiff in an employment discrimination case, used a paid ChatGPT account to draft filings and analyze case-related documents. The defendant moved to compel disclosure of her AI interactions. The court denied the motion. The reasoning: Warner was acting as her own advocate, and ChatGPT was a tool, not a third party. Sharing materials with a tool doesn’t constitute disclosure that would waive protection.

Same question. Same week. Opposite outcomes.

The critical distinction was who directed the AI use. In Heppner, the client acted independently of counsel, fatally undermining any claim that the materials reflected attorney work product. Judge Rakoff pointedly noted that had counsel directed Heppner to use Claude, the AI might have been treated as functioning “in a manner akin to a highly trained professional who may act as a lawyer’s agent.” In Warner, the plaintiff was the advocate, and the court viewed the AI as analogous to a research database, not a confidant. There was no attorney relationship to evaluate.

There is no settled law. Two reasonable judges applied existing doctrine to novel facts and landed in entirely different places.

What This Means for Priestley

Now apply this back to Priestley. He used Claude, the same platform as Heppner, to develop litigation strategy for a real dispute. Under Warner’s reasoning, that work might be protected. Under Heppner’s, every prompt, every decision tree, every negotiation tactic Priestley explored is potentially discoverable by the opposing party. And because Priestley is a sophisticated operator who generates detailed strategic analysis (not casual queries), that discoverable material wouldn’t just reveal documents. It would expose a complete map of his negotiation strategy, risk tolerance, and fallback positions. That’s the kind of intelligence opposing counsel would pay a fortune to obtain in discovery.

This is the Rubicon that hasn’t been crossed, and may never be. AI can produce the work. It cannot provide the shield around the work. A lawyer doesn’t just bring expertise and judgment. A lawyer brings privilege, liability, and professional accountability: the “human on the hook” that no subscription can replace.

The Third Rail: Trade Secrets

Privilege is not the only shield at risk. There is a parallel erosion happening in trade secret law that extends the “human on the hook” problem well beyond the courtroom.

The threat runs in two directions simultaneously. In one direction, employees are feeding proprietary information into AI systems. In the other, AI is making it easier to reconstruct proprietary information from the outside, potentially destroying the legal basis for trade secret protection altogether.

The leakage problem.

In 2023, Samsung lifted a ban on employees using ChatGPT. Within three weeks, engineers had pasted semiconductor source code, internal meeting notes, and proprietary chip testing sequences directly into the tool. Those trade secrets were absorbed into a system whose terms of service allowed OpenAI to retain and potentially train on the inputs. Samsung’s response was to develop an in-house AI with a 1,024-byte prompt limit: a band-aid on a systemic wound.

Samsung is not an outlier. Across industries, one recent survey found 56% of workers have tried generative AI in the workplace. Sensitive data comprises up to 11% of what employees paste into these tools. Source code accounts for 42% of AI-related data policy violations, with developers uploading proprietary code for debugging help. IBM's 2025 Cost of a Data Breach Report found that shadow AI breaches (people using AI tools outside company approvals) cost organizations $4.63 million on average, which is $670,000 more than standard incidents. Shadow AI incidents represent 20% of all breaches. And 13% of organizations reported breaches of AI models or applications, with 97% of those lacking proper AI access controls.

The threshold problem.

The second direction is more subtle and potentially more consequential. Trade secret protection requires, among other things, that the information not be “readily ascertainable” through proper means. As AI becomes capable of synthesizing vast quantities of public data into competitive intelligence, the category of information that clears that legal bar keeps shrinking. As one attorney told Bloomberg Law: “The notion of a customer list as a trade secret is on borrowed time.” If an AI can reconstruct your customer list from LinkedIn profiles, press releases, and industry databases, a court may decide that list was never protectable in the first place.

The pattern mirrors what happened with cybersecurity: as information became easier to obtain through hacking, the threshold for what protective measures courts required to confer legal protection increased. AI is applying the same pressure to trade secret law. The bar for what qualifies as a protectable secret is rising precisely because the tools to obtain that information are becoming cheaper and more accessible.

Here is the irony that firm leaders should not miss: the same technology that is pulling Priestley-type disputes out of the legal system is simultaneously generating an entirely new body of counseling and advisory work inside it. Not novel practice areas that require reinvention. The work that needs doing right now, for clients firms already serve, on matters already in motion.

Has your client updated their employment agreements to address employee use of generative AI? Have their vendor contracts been revised to contemplate AI-generated deliverables, or to allocate liability when AI-drafted language contains errors? Do their NDAs account for the risk that confidential information pasted into a consumer AI tool may no longer qualify as confidential? Has anyone advised their board on how AI usage by executives could compromise privilege in future litigation?

These are not futuristic questions. They are live exposures, present in the contracts, policies, and workflows of virtually every client on your roster. And the uncomfortable truth is that most firms haven't addressed them, in many cases because they haven't fully grappled with the implications themselves. The firms that get ahead of this, that bring this guidance to their clients proactively rather than reactively, will demonstrate the value of the "human on the hook" in the most concrete way possible: by protecting clients from risks they didn't know they had.

The law is unsettled, the exposure is real, and most firms haven't updated a single document to address it.

In Part Three, we turn to the question every managing partner should be asking: what do I actually do about this?

That's it for now. Talk again soon.

By the Numbers

Privilege & Discovery

2  |  Federal courts that ruled on AI privilege in the same week (Feb. 10, 2026), reaching opposite conclusions

31  |  AI-generated documents seized from Heppner’s devices and ordered disclosed

Trade Secrets & AI Exposure

56%  |  of workers have tried generative AI in the workplace

11%  |  of data pasted into AI tools by employees is sensitive or proprietary

42%  |  of AI-related data policy violations involve proprietary source code

223  |  AI-related data security incidents per month at the average organization

45%  |  of sensitive AI interactions originate from personal accounts on corporate devices

Sources & Further Reading

United States v. Heppner, S.D.N.Y. (Feb. 2026)

Warner v. Gilbarco, E.D. Mich. (Feb. 2026)

National Law Review, “Same Week, Different Frameworks” (2026)

Mayer Brown, “M&A Discovery in the AI Era” (March 2026)

Bloomberg Law, “AI Will Force Trade Secret Calculus Shift, Escalate Tactics” (Feb. 2026)

Samsung ChatGPT Data Leak Incident, as reported by TechRadar, Gizmodo, and The Register (2023)

Kiteworks, “2026 AI Data Crisis: Protect Your Sensitive Information Now” (2026)



Who is the author, Josh Kubicki?

Josh Kubicki teaches AI and the business of law at Indiana University Maurer School of Law and has trained over 3,000 lawyers on generative AI. He is the author of Brainyacts, read by nearly 10,000 legal professionals worldwide.

AI training, courses, and resources: kubicki.ai

Strategic advisory for firm leadership: joshkubicki.com

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.
