265 | The AI Mind Virus: How AI is Infecting Lawyers' Brains

Brainyacts #265

Hello to all 7,904+ of you from around the globe.

Like any virus, it enters quietly—through convenience, through novelty—and rewires behavior at the neurological level. But unlike a biological pathogen, the AI Mind Virus doesn’t provoke an immune response. It rewards us. It flatters us. And by the time we notice the damage, our cognitive routines have already shifted.

AI doesn't just change how we work. It infects how we think.

I was an early AI evangelist (still am); my first 200 newsletters were filled with prompts, use cases, and tactical guidance to help lawyers harness this technology. I still believe in AI's transformative power. But I've discovered that every interaction with an LLM doesn't just inform us; it infects our cognitive patterns, setting a behavioral trajectory that unfolds in one of two devastating ways.

After diving deep into neuroscience, behavioral psychology, and reward mechanisms, studying the breadcrumbs of cognitive change across hundreds of legal professionals, I've identified what I call the AI Mind Virus. It manifests in two distinct strains, both of which threaten the core intellectual foundations of legal practice.

The evidence is hiding in plain sight:

  • The ChatGPT outage of June 4, 2024, which left millions paralyzed without their digital sidekick. That was more than a year ago, and daily usage of AI has skyrocketed since then.

  • The lawyers who dismiss AI after one bad experience and now struggle to understand what the major models are, what deep research or reasoning is, or how to distinguish a good prompt from a weak one.

  • And the rising generation of lawyers producing fluent but hollow and flat work product, their critical thinking muscles quietly atrophying.

What follows is the most important piece I've written so far (there will be more). Not because it challenges AI adoption, but because it reveals how to use this powerful technology without losing our ability to think like lawyers.

The Precedent: Social Media's Neural Hijacking

Before we examine AI's cognitive impact, let's establish a behavioral guidepost. Over the past decade-plus, research has painted a clear picture of social media as a behavioral disruptor, often likened to addictive substances due to its manipulation of the brain's reward system. Platforms like Instagram, TikTok, and X (formerly Twitter) trigger dopamine releases—those "feel-good" chemicals—through likes, notifications, and endless scrolls, creating variable-ratio reinforcement schedules similar to slot machines.

This leads to compulsive use, with studies showing heavy engagement reduces gray matter in brain areas tied to impulse control and decision-making, shortening attention spans and impairing deep reasoning. Filter bubbles exacerbate this: algorithms curate content that reinforces existing biases, isolating users in echo chambers that polarize thinking and diminish critical evaluation of diverse viewpoints.

Emotionally, social media spikes stress hormones like cortisol during negative interactions while fostering addiction, especially in teens, where prolonged exposure alters neurophysiological pathways for reward, attention, and emotional regulation, contributing to anxiety, depression, and reduced cognitive flexibility. Recent 2024-2025 analyses warn that this "dopamine overdose" from quick hits of validation turns users into habitual seekers, eroding long-term judgment and even the synthesis of information.

In essence, social media doesn't just distract; it rewires how we process reality, favoring speed and affirmation over depth and doubt.

AI doesn’t work like social media. But its effects are likely more insidious.

The AI Mind Virus: Two Distinct Infection Patterns

Through extensive observations and conversations, I've identified at least two primary manifestations of AI's cognitive impact. Type A is the Immunity Delusion—premature dismissal after disappointing early experiences that creates dangerous blind spots. Type B is Cognitive Dependency—seductive over-reliance that gradually erodes independent thinking.

Both strains share a common origin: the first meaningful LLM interaction doesn't merely provide information—it establishes expectations, habits, and learning patterns that determine future engagement. Whether underwhelming or exhilarating, that initial encounter sets a behavioral trajectory that unfolds predictably.

Type A: The Immunity Delusion

I saw this often in the early days of ChatGPT. But I still see plenty of it today because many still do not understand how to interact effectively with these models. This is one of the reasons I developed my training: I want people to learn how to use this technology in an effective manner. I want them to see the power and value it can produce, but only if our inputs are thoughtfully designed to maximize the output. The old trope "garbage in, garbage out" is certainly true here.

This strain begins with an underwhelming first impression: a bad response, a hallucinated citation, or an ill-formed output. The result? Premature dismissal. "I tried it. It's not useful." The user disengages, unaware that the models evolve monthly, even daily.

This is particularly common among senior legal professionals. I've met partners who tested ChatGPT once, got a bad response, and walked away for good. Meanwhile, their junior associates quietly use the same tools to draft memos, generate emails, or brainstorm positions—gaining compounding advantages while their supervisors remain frozen in outdated assumptions.

We saw this pattern in the Mata v. Avianca case (2023), where hallucinated citations led to public sanctions. Instead of learning from the mistake, many lawyers chose to abandon AI entirely. But that move doesn't protect them—it blinds them.

As one study found, early tech perceptions create long-lasting resistance. The initial "meh" becomes permanent bias. In a fast-evolving space like AI, that’s a fatal mistake.

Even law students aren’t immune. Some skip hands-on experimentation after one flawed output, concluding it's all hype. Others lean in, learning to simulate arguments, explore fact patterns, and refine their instincts. The gap between these two groups is growing fast.

Immunity feels safe. But it calcifies irrelevance.

Type B: Cognitive Dependency

Full disclosure: I went into full Type B infection within the last year. I have since reversed its effects and, thankfully, am in remission. But it took a concerted, careful effort.

This strain hits when the initial "wow" of productivity hooks you, leading to habitual overuse that subtly undermines core skills. Symptoms start innocently: leaning on LLMs for drafting memos, synthesizing research, or even brainstorming arguments, drawn in by the speed and flattering tone that deliver dopamine hits with every fluent response.

But the underlying mechanisms mirror social media's reward loops: variable AI responses (sometimes brilliant, sometimes off) encourage compulsive prompting, reinforced by lowered cognitive friction that bypasses the effort needed for deep learning. We end up wordsmithing prompts to get the AI to respond better rather than doing the thinking ourselves. The sycophantic tone, instant responses, and dramatically lowered friction stimulate dopamine-driven reward loops; prompting feels rewarding even when the outputs are empty or flawed.

In my own case, I found myself outsourcing routine analysis in my research, noticing outputs that masked shallow insights with polished language. It felt efficient, but my tolerance for ambiguity waned, and my self-checking diminished. I was producing words, but not novel or deep analysis.

For legal professionals, this plays out dangerously: associates producing "flat" briefs that lack nuanced pattern recognition, or partners tolerating hallucinations because the AI "sounds confident," echoing Dunning-Kruger effects where over-trust builds blindness.

Long-term consequences include atrophied critical thinking, reduced retention from bypassing "desirable difficulty," and professional de-skilling: appearing hyper-productive while becoming less insightful. In firm redesigns I've led, I've seen this erode learning ecosystems, especially where entry-level tasks are automated, leaving juniors without the grind that forges judgment.

The starkest illustration of Type B infection came during the ChatGPT outage on June 4, 2024. What started as a routine Tuesday turned into a global productivity meltdown when OpenAI's flagship LLM went offline for several hours, beginning around 7:30 AM PT and not fully resolving until late afternoon. Millions of users, from developers to marketers—and yes, plenty of legal professionals—were left staring at error messages, unable to draft emails, brainstorm ideas, or even summarize case law. Forums like Reddit and OpenAI's community exploded with frustration: "I can't start my workday without it," one user lamented, echoing a chorus of complaints about halted workflows and mounting anxiety.

This wasn't just technical downtime; it exposed the raw edges of Type B infection. Employees hesitated to dive into tasks independently, feeling paralyzed without their AI sidekick. As captured in recent research: "Some employees hesitated to start work without LLMs assistance, feeling anxious and inefficient when forced to complete tasks independently... feeling 'incomplete' or 'handicapped' when unable to access these models." This withdrawal-like response—procrastination, diminished self-confidence, and a spike in stress hormones like cortisol—mirrors the cravings and crashes of social media addiction, where the absence of dopamine hits from notifications leaves users unmoored. In the legal world, imagine a junior associate stalling on a brief or a partner second-guessing their instincts without that instant "validation." It's a stark reminder: when AI becomes the default, reclaiming independent thinking isn't just harder—it's downright uncomfortable.

Legal professionals depend on deep reasoning, pattern recognition, and contextual nuance—all of which erode under Type B infection. Law firms risk failing to detect "flat" thinkers until it's too late, and the quietest damage is to learning ecosystems, particularly in firms automating entry-level or procedural work.

The legal profession already struggles with wide variance in competence, rigor, and ethics. We rarely talk about that. But it’s odd that we demand perfection from AI when we’ve never held ourselves to the same standard.

What’s more concerning is this: AI doesn’t just reveal human gaps—it masks them.

In the old days, you could spot weak analysis by how it read. Now, AI smooths over those cracks. And while it may raise the floor—helping low performers produce passable output—it also conceals the holes. The surface looks level, but it isn’t. AI doesn’t just elevate weak work; it laminates over gaps in competence, judgment, and rigor. What was once visible—a lack of understanding, sloppy reasoning, ethical fuzziness—is now sealed beneath polished prose. And when the floor looks finished, no one checks what’s underneath.

The Neuroscience of AI Infection

Again, I'm not a neuroscientist, but I've spent time digging into the research on cognition, behavior, and learning. What I've found is that the patterns I've seen in practice closely mirror what science already knows about how our brains respond to stimuli, shortcuts, and reward systems. Here are three key mechanisms I believe help explain the AI Mind Virus, and why both overuse and premature dismissal can lead to lasting cognitive change.

Dopamine Dysregulation: LLMs may activate variable-ratio reward systems similar to social media or slot machines (e.g., the next interaction might be the winner). Immediate feedback and emotionally reinforcing tone create small dopamine hits, encouraging compulsive prompting and shortcut-seeking behavior.

False Confidence Effects: LLMs often respond in confident tones regardless of correctness. Users develop a miscalibrated sense of authority over AI-generated insights, mirroring Dunning-Kruger effects where the more you use, the more you trust—often unjustifiably.

Reward Without Work: The "cognitive ease" of LLMs short-circuits the brain's natural effort-learning loop. Removing desirable difficulty weakens retention and long-term judgment formation, much like how constant snacking undermines appetite for a full meal.

Recovery and Resistance: Strategic Principles

My own recovery from Type B infection, as well as my work helping professionals avoid Type A dismissal, has led me to develop specific resistance strategies:

  1. Simulate, Don't Delegate: Use AI to pressure-test arguments, simulate opposing views, or rehearse logic—not just execute tasks. This preserves the cognitive effort essential for legal reasoning while leveraging AI's capabilities.

  2. Friction as Fitness: Design AI prompts that preserve cognitive effort. Avoid "one-shot" interactions, as if you were using Google to search for something. Treat your prompts as thoughtful instructions, not pithy commands. Prioritize iterative work over immediate resolution. The goal is to maintain the beneficial stress of thinking while reducing mundane friction.

  3. Humble-Prompting: Use AI to challenge you. Ask it, for example, "Where is my logic weakest?" or "What's missing from this structure?" This approach uses AI to enhance rather than replace critical thinking.

  4. Avoid Loops: Build a habit of stepping back from the model after one or two iterations. Don’t just keep re-prompting—review the output the way you would your own work pre-AI. Pause. Reread. Reflect. Often, the best insight emerges not from another prompt, but from you re-engaging with the material. Thinking with AI requires interrupting its rhythm with your own.

The Path Forward

The AI mind virus is not inevitable, but it is insidious. Both strains develop gradually, often imperceptibly, until behavioral patterns become entrenched. The solution is not to avoid AI—that path leads to professional obsolescence—but to use it consciously, strategically, and with full awareness of its cognitive risks.

For legal professionals, this means developing AI literacy that goes beyond technical competence to include metacognitive awareness. We must learn to recognize the symptoms of both immunity delusion and cognitive dependency in ourselves and our colleagues. We must design systems and practices that harness AI's power while preserving the deep thinking, contextual reasoning, and judgment that remain uniquely human.

The stakes could not be higher. In a profession where the quality of thinking directly correlates with client outcomes and professional success, allowing AI to degrade our cognitive capabilities is not just personally damaging—it's professionally and ethically irresponsible.

The mind virus is real. The question is whether we will succumb to it or develop the immunity that comes not from avoidance, but from conscious, strategic engagement with one of the most powerful cognitive tools ever created.

To read previous editions, click here.

Was this newsletter useful? Help me to improve!


Who is the author, Josh Kubicki?

I am a lawyer, entrepreneur, and teacher. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.