
260 | 🏴‍☠️The AI Shift Lawyers Can’t Afford to Ignore

Brainyacts #260

Welcome to the New (Experimental) Version of Brainyacts

For long-time readers, you’re about to see something completely different. For new readers, welcome! I hope you find this newsletter incredibly useful, pragmatic, and engaging. My goal is simple: to inform and empower legal professionals—whether you’re a beginner, intermediate, or advanced user of generative AI.

If you’ve read previous versions of Brainyacts, you’ll know it was a potpourri of curated information. The feedback I received was that it was highly useful—signal, not noise in a market saturated with hype and BS. But I also know many of you didn’t have the time to read it all. So, I’m taking a bit of a risk here with this new approach.

I’d love your feedback on this new direction for Brainyacts. Did it teach you something? Was it helpful? Was it worth your time—or not? Just hit reply to this newsletter. Your responses come straight to me, and they help shape what Brainyacts becomes.

Thank you for reading, and I hope you enjoy this new take. Onward 👇

In this Brainyacts:

The rise and reality of Local AI: Don’t ignore this!

Time-pressed? Get the essentials in my Quick Take

Local AI: It’s Already in Your Organization—Are You Ready?

As a legal professional, you need to know two things about local AI—right now:

1. It’s Already Here (and You Might Not Know It)

Someone in your organization—maybe on your team—is likely experimenting with local AI. Running AI models directly on personal devices is becoming common (and easier), whether for work projects or productivity hacks.

  • Why It Matters: Are sensitive client data or company policies at risk?

  • What You Need to Do: Review your AI policies to ensure they address this growing trend. Ignorance isn’t an option.

2. It’s a Game-Changer (If You Use It Right)

You should try it! Local AI offers unmatched privacy and control—a personalized tool that can supercharge your productivity.

  • Why You Should Care: Local AI stays offline, trains on your workflows, and delivers bespoke results. Imagine an AI assistant that truly “gets” you.

  • What to Watch Out For: Use it intentionally. Secure your devices, safeguard sensitive data, and educate your team on responsible practices. 

Bottom Line: Local AI is here. Whether you’re managing its risks or leveraging its transformative power, you need to take action—now.

The Deep Dive: Explore the Full Reality

Why You Need to Know About Local AI Right Now

As a legal professional, there are two reasons why local AI needs to be on your radar—right now!

Reason #1: It’s already happening.

Chances are, someone on your team or in your organization is experimenting with running an AI model locally on their laptop or desktop. Maybe they’re testing its capabilities for work projects, or maybe they’re using it for personal productivity. Either way, you need to understand what local AI is, why it matters, and how to navigate the risks and benefits.

For example, as a law professor, I was asked by my administration to test whether I could bypass exam security protocols and use a local AI model during a law school exam. The result? It’s surprisingly easy—unless the exam software is locked down to the strictest security settings, which most professors avoid because they limit student access to digital resources.

The broader takeaway is this: just like exam settings need to adapt to the reality of local AI, so do your organization’s policies and practices. If employees or colleagues are using local AI models, do you know how to address it? Should you even care? These are the kinds of questions you need to be prepared to answer.

Reason #2: It’s your key to unlocking generative AI’s true potential.

On a more optimistic note, local AI isn’t just something to monitor—it’s something to embrace. For the beginner or intermediate user, local AI opens the door to unprecedented personalization and productivity. Imagine having a model that doesn’t just respond generically but is trained on your specific knowledge, your work product, and your unique workflows.

This is the holy grail of generative AI: a personalized assistant that truly understands you and compounds your productivity over time. Local AI makes that possible, giving you control and independence in ways cloud-based tools simply can’t match.

That’s why I’m writing this essay. To help you navigate both the risks and the immense opportunities of local AI. By the end, you’ll know what it is, why it matters, and how to start using it to enhance your practice—or prepare your organization for its inevitable adoption.

Let’s Cover Some Basics:

What Is Local AI (and What Models Does It Use)?

Local AI refers to running artificial intelligence models directly on your device—whether it’s your laptop, desktop, or even an external drive—without relying on cloud-based servers. Instead of sending data to third-party providers like OpenAI or Microsoft, local AI processes everything on your hardware. This means greater privacy, control, and the ability to tailor the AI to your specific needs.
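To make this concrete, here is a minimal sketch of what running a model locally can look like using Ollama, one popular open-source runner. This assumes you have already installed Ollama on your machine; the model name is just one example of several available:

```shell
# Download the model weights once (a few GB); after that, no internet is needed.
ollama pull llama3.2

# Chat with the model entirely on your own machine. Nothing leaves your device.
ollama run llama3.2 "Summarize the key obligations in a mutual NDA."
```

Tools like LM Studio offer the same capability through a point-and-click interface if you prefer to avoid the command line.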

What Models Does Local AI Use?

Local AI typically uses open-source AI models—a concept you may have heard of but might not fully understand. Open-source models are AI systems made available to the public by their developers, often with fewer restrictions and more flexibility compared to closed models like OpenAI’s GPT-4 or Microsoft Copilot.

For example:

  • Meta’s Llama (Llama 3.2): A powerful open-source language model optimized for running locally.

  • China’s DeepSeek: Advanced models designed to run on laptops and even mobile devices, requiring minimal hardware.

  • Stable Diffusion: A popular open-source text-to-image model that lets users create visuals locally, avoiding the cost and restrictions of proprietary services.

These models often compete with closed-source counterparts in quality and performance, and their open nature makes them ideal for personalization.

What Is Open Source (in Plain Language)?

At its core, open source means that the code or framework used to create a model is freely available for anyone to view, modify, or build upon. Think of it like a recipe: anyone can use it, tweak it, or combine it with other recipes to create something new. This is in contrast to closed-source models, where the recipe is a closely guarded secret—accessible only through subscription fees or APIs.

How Does Open Source Work?

Open-source projects thrive on collaboration. Developers around the world contribute improvements, identify bugs, and share insights, leading to rapid innovation. For instance:

  • Open-source models like Llama 3 (from Meta) or Gemma (from Google) benefit from global contributions, resulting in faster updates and unique capabilities.

  • Open-source platforms like Hugging Face act as hubs, connecting developers, users, and tools to make these models accessible.

Examples Lawyers Might Recognize

Even if you’re not familiar with open-source AI, you’ve likely encountered open-source software:

  • Linux: The operating system powering most servers worldwide.

  • Mozilla Firefox: A popular open-source web browser.

  • WordPress: The backbone of many websites, including legal blogs and firm pages.

Just as these tools revolutionized their fields, open-source AI is transforming how we interact with and deploy artificial intelligence.

Returning to Reason #1: It’s Already Happening

The reality is that local AI’s ease of access and growing capabilities make it incredibly tempting for personal or unofficial use. And this raises a key question for legal professionals: Do you know how to address this if it’s happening? Should you even care?

The Work Environment Reality: What You Need to Consider

Let’s start with your own workplace. If colleagues or employees are using local AI models, even with the best intentions, it may have significant implications for privacy, security, and risk management.

1. Policies and Protocols

Does your workplace’s current AI policy cover local AI? Many organizations are focused on controlling the use of cloud-based tools like OpenAI or Gemini, but local AI presents unique challenges:

  • Data Control: If employees use local AI on their devices, are they processing sensitive company or client data? How is that data stored, and does it comply with data protection regulations (e.g., GDPR, HIPAA)?

  • Device Security: A laptop running a local AI model is only as secure as its owner makes it. Are there safeguards to ensure the device isn’t compromised? What happens if the laptop is lost or stolen?

2. Potential Compliance Issues

Using local AI might inadvertently lead to compliance breaches. For example:

  • If client data is processed without proper safeguards, this could violate confidentiality or industry-specific regulations.

  • Open-source models, while powerful, can vary in quality and transparency. Some may have been trained on datasets that raise ethical or legal questions.

3. Monitoring and Accountability

Unlike cloud-based tools, local AI models don’t leave an audit trail that an organization can easily monitor. While this offers privacy for the user, it also makes it harder for firms to track how these tools are being used. This raises questions like:

  • Should the organization monitor or restrict the use of local AI models?

  • How can you ensure they’re being used responsibly and ethically?

The Client Perspective: Advising on Local AI Use

For law firms as well as in-house teams, your clients may also be grappling with the rise of local AI in their workplaces. Here’s how you can help them think through the implications:

1. Identifying Use Cases

Ask your clients whether local AI is already in use or could become relevant in their operations. Common scenarios include:

  • Employees using local AI for productivity tools, such as drafting documents or summarizing reports.

  • Teams experimenting with local AI for specialized purposes, such as internal R&D or analyzing proprietary datasets.

2. Revisiting Privacy and Security Protocols

Clients may not realize that local AI brings unique risks compared to cloud-based solutions. You can guide them in:

  • Reviewing how sensitive data is handled and stored on employee devices.

  • Ensuring proper encryption and backup protocols are in place for devices running local AI.

3. Compliance Considerations

Local AI introduces compliance risks that may not be immediately apparent. Help your clients evaluate:

  • Whether local AI use complies with privacy laws like GDPR, particularly if sensitive personal data is involved.

  • Licensing requirements for the open-source models they use. Some open-source licenses impose obligations that organizations must understand.

4. Training and Awareness

Clients may need help educating their teams about local AI’s capabilities and risks. For example:

  • Training employees on what kinds of data can and cannot be processed using local AI.

  • Encouraging teams to choose reputable, well-documented models (e.g., Llama 3 from Meta) rather than obscure or poorly vetted options.

Questions to Ask Yourself

To prepare for the rise of local AI in your work environment—or your clients’—start by asking these critical questions:

  1. Do you have an AI policy that addresses local AI specifically?

  2. Are there clear guidelines on how sensitive data can be processed using local AI?

  3. What are the potential compliance risks, and how can they be mitigated?

  4. How will you educate employees or clients about the responsible use of local AI?

  5. Do you have tools or practices in place to monitor and evaluate the use of local AI models?

Bridging the Gap: Awareness vs. Opportunity

Now that we’ve explored the risks and realities of unknown local AI use, it might seem like I’m warning you to steer clear of it altogether. But that’s not the case—quite the opposite, in fact.

Using local AI responsibly and intentionally isn’t just safe—it’s transformative. Knowing how to use it effectively is the key to unlocking its full potential. When you understand the risks and best practices, you can take full advantage of local AI’s power without worrying about the pitfalls.

Returning to Reason #2: It’s your key to unlocking generative AI’s true potential.

1. Client Information Privacy and Security

As a lawyer, safeguarding sensitive client information is non-negotiable. Local AI offers a significant advantage: your data never leaves your device. Unlike cloud-based tools, which often require anonymizing sensitive information before prompting, local AI allows you to interact with client data securely—just as you would in Word or Excel on your laptop.

This is particularly important given that firm and in-house deployments of foundation models like OpenAI’s GPT-4 or Microsoft Copilot often log user prompts and outputs for performance monitoring. While this logging isn’t typically malicious, it means someone else has a record of how you’re using AI. With local AI, all interactions stay on your device, offering unmatched privacy for sensitive workflows.

2. Freedom from Censorship and Bias

Cloud-based AI models, such as OpenAI or Gemini, are designed with system-wide guardrails to protect users and ensure compliance with regulations. While well-intentioned, these guardrails can sometimes lead to over-correction or bias, limiting how the AI responds. Local AI models, particularly open-source ones, are free from these restrictions. This gives you more flexibility to explore use cases without unnecessary interference or “persona drift” that affects how the AI interacts with you.

3. Customization and Control (No Hidden System Prompts)

When you use cloud-based AI, a hidden system prompt sits behind every query you input. This invisible layer governs how the model behaves and interprets your requests, sometimes creating frustrating or unexpected responses. For instance, if you’ve ever struggled to get a model to use a particular tone, format, or process, the system prompt is often the culprit. With local AI, there are no hidden constraints. You’re in full control of how the model behaves and can design it to interact precisely the way you want.
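To illustrate, local runners such as Ollama let you write the system prompt yourself in a short configuration file called a Modelfile. The base model and the instruction text below are hypothetical examples:

```
FROM llama3.2
SYSTEM "You are a contracts-review assistant for a US law firm. Answer in plain English and flag any clause that shifts liability."
```

You would then build it with `ollama create contracts-assistant -f Modelfile` and run it like any other local model. What you wrote is the entire system prompt; there is no hidden layer on top of it.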

4. Personalization: Your Knowledge, Your AI

Imagine having an AI assistant trained specifically on your knowledge base. With local AI, that’s not just possible—it’s transformative. You can train the model on your personal repository of contracts, case law, deposition transcripts, or any other documents stored on your machine. This creates a hyper-personalized AI assistant tailored to your specific workflows, making it a powerful tool for drafting, research, and analysis. For many lawyers, this is the holy grail of AI: a tool that understands your knowledge and your style.
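One common way to get this today is retrieval: search your own files for the most relevant material and feed it into the local model’s prompt. The sketch below is a deliberately minimal, keyword-based version of that idea; the document names and query are hypothetical, and a real setup would use embedding-based search rather than word overlap:

```python
def score(query: str, text: str) -> int:
    """Count how many words from the query appear in the document text."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in text.lower())

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Return the names of the k documents most relevant to the query."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # Hypothetical personal repository of work product
    docs = {
        "nda_template.txt": "Mutual non-disclosure agreement with a two-year confidentiality term.",
        "lease_memo.txt": "Memo analyzing assignment clauses in commercial leases.",
    }
    print(retrieve("confidentiality term in our NDA", docs))
```

In practice, you would prepend the retrieved text to your prompt before sending it to the local model, so its answers are grounded in your own materials rather than generic training data.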

5. Independence and Portability

Unlike firm-wide or cloud-based solutions, local AI offers independence:

  • Portability Across Jobs: Your setup is yours alone. If you change firms or roles, you can carry your AI instance—and its personalized training—with you.

  • Offline Functionality: Whether you’re on a plane with no Wi-Fi or in a location with poor connectivity, your local AI will still work. This reliability is critical for lawyers who need consistent access to their tools.

  • Resilience Against Outages: If a cloud service provider experiences an outage or your firm’s network goes down, you won’t lose access to your AI capabilities.

6. Cost Savings

Once installed, local AI operates without ongoing subscription fees or API usage costs. While there may be an initial investment in hardware or setup, the long-term savings are significant—especially for smaller firms or solo practitioners managing tight budgets.

7. Future-Proofing Your Practice

The rapid pace of AI development can make cloud-based tools feel like a moving target, with frequent updates and changing functionality. Local AI provides stability. You choose the model, configure it to your needs, and aren’t at the mercy of a provider’s updates or policy changes.

Risks and Considerations for Running Local AI

While the benefits of local AI are compelling, it’s important to understand the trade-offs and risks before diving in. Here’s what you need to know to make an informed decision:

1. Hardware Requirements

Running AI models locally doesn’t require a supercomputer, but it does demand adequate hardware. For example:

  • Macs: A MacBook Pro with an M3 Max chip and 64GB of memory can handle most models, including Llama 3, via tools like LM Studio.

  • PCs: A machine with a robust GPU (e.g., NVIDIA RTX 3070 or higher) and at least 32GB of RAM is a good baseline for running larger models.

You can also store models on external drives to save space, but for serious, frequent use, you may need to invest in a higher-spec device. The good news? Hardware requirements are becoming more efficient with advancements like llama.cpp and lightweight models.
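A rough way to sanity-check whether your machine can hold a given model: weight memory is roughly the parameter count times the bits per weight, plus headroom. The 20% overhead factor below is my own rough assumption for activations and working memory, not a published figure:

```python
def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough RAM/VRAM needed for a model's weights, with ~20% headroom
    for activations and caches. A rule of thumb, not a guarantee."""
    weight_gb = params_billions * bits_per_weight / 8  # 1B params at 8 bits is ~1 GB
    return round(weight_gb * 1.2, 1)

print(model_memory_gb(8, 4))   # an 8B model quantized to 4 bits: roughly 4.8 GB
print(model_memory_gb(70, 4))  # a 70B model at 4 bits: roughly 42 GB, beyond most laptops
```

This is why quantized models in the 7B-8B range are the sweet spot for the 32GB-64GB machines described above.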

2. User Interface Trade-Offs

When using cloud-based tools like OpenAI or Gemini, you benefit from polished, intuitive interfaces designed by large UX teams. Local AI models, on the other hand, tend to feel more raw. While tools like LM Studio are user-friendly and improving, they won’t match the seamless experience of browser-based systems—yet.

However, this trade-off comes with a silver lining: unmatched customization. You can tailor your local AI’s interface and behavior to suit your unique workflows.

3. Backup and Data Loss

With local AI, your model and data live on your device. If you lose your laptop, you lose your AI. Backups are essential. Whether it’s saving logs of your prompts or creating periodic backups of your model, you’ll need to take ownership of your AI’s continuity.

As the ecosystem matures, we’ll see more plug-and-play tools for local AI backups and recovery, but for now, it’s a manual process.
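Until then, even a small script can cover the basics. This sketch zips a workspace folder (prompt logs, configuration, model files) into a dated archive in a second location; the directory names are hypothetical examples:

```python
import shutil
from datetime import date
from pathlib import Path

def backup_workspace(workspace: str, dest_dir: str) -> str:
    """Zip the workspace folder into dest_dir, stamped with today's date,
    and return the path of the archive created."""
    Path(dest_dir).mkdir(parents=True, exist_ok=True)
    archive_base = str(Path(dest_dir) / f"local-ai-backup-{date.today()}")
    return shutil.make_archive(archive_base, "zip", workspace)
```

Pointing `dest_dir` at an external drive or a folder you already back up is enough to make losing the laptop recoverable.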

4. Model Versatility

In a cloud environment, you can toggle between tools like OpenAI, Anthropic, or Gemini in your browser. With local AI, your setup is more constrained. Most devices can only handle one model at a time, maybe two, depending on your hardware and the model size.

5. Lack of Support

Local AI is a DIY experience. Unlike proprietary tools with dedicated support teams, you’ll need to rely on open-source communities for help. Fortunately, these communities are vibrant. The flip side? Once your local AI is up and running, it’s generally stable and doesn’t require frequent maintenance or updates.

6. Choosing the Right Model

The sheer number of available models can be overwhelming. Thousands of options exist, and not all are reliable or well-documented. To mitigate risks:

  • Stick with trusted names like Meta’s Llama, DeepSeek, or Qwen.

  • Use directories like Hugging Face to evaluate models based on community feedback and performance benchmarks.

Starting with widely recognized models from reputable organizations ensures you’re not inadvertently using a model with poor data training or questionable outputs.

7. Emerging Ecosystem

The local AI landscape is evolving quickly. While today’s tools may feel a bit raw, the rapid development of interfaces, optimization techniques, and community resources means the experience will be dramatically better in just a year. By getting in early, you’ll build foundational knowledge as the ecosystem matures, putting you ahead of the curve.

Selected Resources:
➭ What Does Using Local AI Look Like? Watch me hack a law exam using Local AI HERE

To read previous editions, click here.

Was this newsletter useful? Help me improve!

With your feedback, I can make the letter better.

Who is the author, Josh Kubicki?

I am a lawyer, entrepreneur, and teacher. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.