260 | 🏴‍☠️ The AI Shift Lawyers Can't Afford to Ignore
Brainyacts #260

Welcome to the New (Experimental) Version of Brainyacts
For long-time readers, you're about to see something completely different. For new readers, welcome! I hope you find this newsletter incredibly useful, pragmatic, and engaging. My goal is simple: to inform and empower legal professionals, whether you're a beginner, intermediate, or advanced user of generative AI.
If you've read previous versions of Brainyacts, you'll know it was a potpourri of curated information. The feedback I received was that it was highly useful: signal, not noise, in a market saturated with hype and BS. But I also know many of you didn't have the time to read it all. So, I'm taking a bit of a risk here with this new approach.
I'd love your feedback on this new direction for Brainyacts. Did it teach you something? Was it helpful? Was it worth your time, or not? Just hit reply to this newsletter. Your responses come straight to me, and they help shape what Brainyacts becomes.
Thank you for reading, and I hope you enjoy this new take. Onward!
In this Brainyacts:
The rise and reality of Local AI: Don't ignore this!
Time-pressed? Get the essentials in my Quick Take.
Local AI: It's Already in Your Organization. Are You Ready?
As a legal professional, you need to know two things about local AI, right now:
1. It's Already Here (and You Might Not Know It)
Someone in your organization, maybe on your team, is likely experimenting with local AI. Running AI models directly on personal devices is becoming common (and easier), whether for work projects or productivity hacks.
Why It Matters: Are sensitive client data or company policies at risk?
What You Need to Do: Review your AI policies to ensure they address this growing trend. Ignorance isn't an option.
2. It's a Game-Changer (If You Use It Right)
You should try it! Local AI offers unmatched privacy and control: a personalized tool that can supercharge your productivity.
Why You Should Care: Local AI stays offline, trains on your workflows, and delivers bespoke results. Imagine an AI assistant that truly "gets" you.
What to Watch Out For: Use it intentionally. Secure your devices, safeguard sensitive data, and educate your team on responsible practices.
Bottom Line: Local AI is here. Whether you're managing its risks or leveraging its transformative power, you need to take action now.

The Deep Dive: Explore the Full Reality
Why You Need to Know About Local AI Right Now
As a legal professional, there are two reasons why local AI needs to be on your radar, right now.
Reason #1: It's already happening.
Chances are, someone on your team or in your organization is experimenting with running an AI model locally on their laptop or desktop. Maybe they're testing its capabilities for work projects, or maybe they're using it for personal productivity. Either way, you need to understand what local AI is, why it matters, and how to navigate the risks and benefits.
For example, as a law professor, I was asked by my administration to test whether I could bypass exam security protocols and use a local AI model during a law school exam. The result? It's surprisingly easy, unless the exam software is locked down to the strictest security settings, which most professors avoid because they limit student access to digital resources.
The broader takeaway is this: just like exam settings need to adapt to the reality of local AI, so do your organization's policies and practices. If employees or colleagues are using local AI models, do you know how to address it? Should you even care? These are the kinds of questions you need to be prepared to answer.
Reason #2: It's your key to unlocking generative AI's true potential.
On a more optimistic note, local AI isn't just something to monitor; it's something to embrace. For the beginner or intermediate user, local AI opens the door to unprecedented personalization and productivity. Imagine having a model that doesn't just respond generically but is trained on your specific knowledge, your work product, and your unique workflows.
This is the holy grail of generative AI: a personalized assistant that truly understands you and compounds your productivity over time. Local AI makes that possible, giving you control and independence in ways cloud-based tools simply can't match.
That's why I'm writing this essay: to help you navigate both the risks and the immense opportunities of local AI. By the end, you'll know what it is, why it matters, and how to start using it to enhance your practice, or to prepare your organization for its inevitable adoption.
Letâs Cover Some Basics:
What Is Local AI (and What Models Does It Use)?
Local AI refers to running artificial intelligence models directly on your device, whether it's your laptop, desktop, or even an external drive, without relying on cloud-based servers. Instead of sending data to third-party providers like OpenAI or Microsoft, local AI processes everything on your hardware. This means greater privacy, control, and the ability to tailor the AI to your specific needs.
What Models Does Local AI Use?
Local AI typically uses open-source AI models, a concept you may have heard of but might not fully understand. Open-source models are AI systems made available to the public by their developers, often with fewer restrictions and more flexibility compared to closed models like OpenAI's GPT-4 or Microsoft Copilot.
For example:
Meta's Llama (Llama 3.2): A powerful open-source language model optimized for running locally.
China's DeepSeek: Advanced models designed to run on laptops and even mobile devices, requiring minimal hardware.
Stable Diffusion: A popular open-source text-to-image model that lets users create visuals locally, avoiding the cost and restrictions of proprietary services.
These models often compete with closed-source counterparts in quality and performance, and their open nature makes them ideal for personalization.
What Is Open Source (in Plain Language)?
At its core, open source means that the code or framework used to create a model is freely available for anyone to view, modify, or build upon. Think of it like a recipe: anyone can use it, tweak it, or combine it with other recipes to create something new. This is in contrast to closed-source models, where the recipe is a closely guarded secret, accessible only through subscription fees or APIs.
How Does Open Source Work?
Open-source projects thrive on collaboration. Developers around the world contribute improvements, identify bugs, and share insights, leading to rapid innovation. For instance:
Open-source models like Llama 3 (from Meta) or Gemma (from Google) benefit from global contributions, resulting in faster updates and unique capabilities.
Open-source platforms like Hugging Face act as hubs, connecting developers, users, and tools to make these models accessible.
Examples Lawyers Might Recognize
Even if you're not familiar with open-source AI, you've likely encountered open-source software:
Linux: The operating system powering most servers worldwide.
Mozilla Firefox: A popular open-source web browser.
WordPress: The backbone of many websites, including legal blogs and firm pages.
Just as these tools revolutionized their fields, open-source AI is transforming how we interact with and deploy artificial intelligence.
Returning to Reason #1: Itâs Already Happening
The reality is that local AI's ease of access and growing capabilities make it incredibly tempting for personal or unofficial use. And this raises a key question for legal professionals: Do you know how to address this if it's happening? Should you even care?
The Work Environment Reality: What You Need to Consider
Let's start with your own workplace. If colleagues or employees are using local AI models, even with the best intentions, it may have significant implications for privacy, security, and risk management.
1. Policies and Protocols
Does your workplace's current AI policy cover local AI? Many organizations are focused on controlling the use of cloud-based tools like OpenAI or Gemini, but local AI presents unique challenges:
Data Control: If employees use local AI on their devices, are they processing sensitive company or client data? How is that data stored, and does it comply with data protection regulations (e.g., GDPR, HIPAA)?
Device Security: A laptop running a local AI model is only as secure as its owner makes it. Are there safeguards to ensure the device isn't compromised? What happens if the laptop is lost or stolen?
2. Potential Compliance Issues
Using local AI might inadvertently lead to compliance breaches. For example:
If client data is processed without proper safeguards, this could violate confidentiality or industry-specific regulations.
Open-source models, while powerful, can vary in quality and transparency. Some may have been trained on datasets that raise ethical or legal questions.
3. Monitoring and Accountability
Unlike cloud-based tools, local AI models don't leave an audit trail that an organization can easily monitor. While this offers privacy for the user, it also makes it harder for firms to track how these tools are being used. This raises questions like:
Should the organization monitor or restrict the use of local AI models?
How can you ensure they're being used responsibly and ethically?
The Client Perspective: Advising on Local AI Use
For law firms as well as in-house teams, your clients may also be grappling with the rise of local AI in their workplaces. Here's how you can help them think through the implications:
1. Identifying Use Cases
Ask your clients whether local AI is already in use or could become relevant in their operations. Common scenarios include:
Employees using local AI for productivity tools, such as drafting documents or summarizing reports.
Teams experimenting with local AI for specialized purposes, such as internal R&D or analyzing proprietary datasets.
2. Revisiting Privacy and Security Protocols
Clients may not realize that local AI brings unique risks compared to cloud-based solutions. You can guide them in:
Reviewing how sensitive data is handled and stored on employee devices.
Ensuring proper encryption and backup protocols are in place for devices running local AI.
3. Compliance Considerations
Local AI introduces compliance risks that may not be immediately apparent. Help your clients evaluate:
Whether local AI use complies with privacy laws like GDPR, particularly if sensitive personal data is involved.
Licensing requirements for the open-source models they use. Some open-source licenses impose obligations that organizations must understand.
4. Training and Awareness
Clients may need help educating their teams about local AIâs capabilities and risks. For example:
Training employees on what kinds of data can and cannot be processed using local AI.
Encouraging teams to choose reputable, well-documented models (e.g., Llama 2 from Meta) rather than obscure or poorly vetted options.
Questions to Ask Yourself
To prepare for the rise of local AI in your work environment, or your clients', start by asking these critical questions:
Do you have an AI policy that addresses local AI specifically?
Are there clear guidelines on how sensitive data can be processed using local AI?
What are the potential compliance risks, and how can they be mitigated?
How will you educate employees or clients about the responsible use of local AI?
Do you have tools or practices in place to monitor and evaluate the use of local AI models?
Bridging the Gap: Awareness vs. Opportunity
Now that we've explored the risks and realities of unknown local AI use, it might seem like I'm warning you to steer clear of it altogether. But that's not the case; quite the opposite, in fact.
Using local AI responsibly and intentionally isn't just safe; it's transformative. Knowing how to use it effectively is the key to unlocking its full potential. When you understand the risks and best practices, you can take full advantage of local AI's power without worrying about the pitfalls.
Returning to Reason #2: It's your key to unlocking generative AI's true potential.
1. Client Information Privacy and Security
As a lawyer, safeguarding sensitive client information is non-negotiable. Local AI offers a significant advantage: your data never leaves your device. Unlike cloud-based tools, which often require anonymizing sensitive information before prompting, local AI allows you to interact with client data securely, just as you would in Word or Excel on your laptop.
This is particularly important when considering that many firms and in-house teams using foundational models like OpenAI or Microsoft Copilot often log user prompts and outputs for performance monitoring. While this isn't typically malicious, it means someone else has a record of how you're using AI. With local AI, all interactions stay on your device, offering unmatched privacy for sensitive workflows.
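To make that contrast concrete, here is a minimal sketch of the kind of scrubbing step that cloud workflows often require before prompting, and that on-device processing lets you skip. The patterns, placeholder labels, and sample text are my own illustrations, not a real anonymization tool, and are nowhere near exhaustive:

```python
import re

# Illustrative patterns only: a real redaction pass would need far
# broader coverage (names, addresses, matter numbers, and so on).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical prompt a lawyer might otherwise have to sanitize by hand:
prompt = "Client (SSN 123-45-6789, jdoe@example.com) disputes the lien."
print(redact(prompt))  # Client (SSN [SSN], [EMAIL]) disputes the lien.
```

The point is not the regexes themselves but the workflow: every identifier you must catch before sending data off-device is a chance to miss one, which is exactly the burden local AI removes.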
2. Freedom from Censorship and Bias
Cloud-based AI models, such as OpenAI or Gemini, are designed with system-wide guardrails to protect users and ensure compliance with regulations. While well-intentioned, these guardrails can sometimes lead to over-correction or bias, limiting how the AI responds. Local AI models, particularly open-source ones, are free from these restrictions. This gives you more flexibility to explore use cases without unnecessary interference or the "persona drift" that affects how the AI interacts with you.
3. Customization and Control (No Hidden System Prompts)
When you use cloud-based AI, a hidden system prompt sits behind every query you input. This invisible layer governs how the model behaves and interprets your requests, sometimes creating frustrating or unexpected responses. For instance, if you've ever struggled to get a model to use a particular tone, format, or process, the system prompt is often the culprit. With local AI, there are no hidden constraints. You're in full control of how the model behaves and can design it to interact precisely the way you want.
4. Personalization: Your Knowledge, Your AI
Imagine having an AI assistant trained specifically on your knowledge base. With local AI, that's not just possible; it's transformative. You can train the model on your personal repository of contracts, case law, deposition transcripts, or any other documents stored on your machine. This creates a hyper-personalized AI assistant tailored to your specific workflows, making it a powerful tool for drafting, research, and analysis. For many lawyers, this is the holy grail of AI: a tool that understands your knowledge and your style.
5. Independence and Portability
Unlike firm-wide or cloud-based solutions, local AI offers independence:
Portability Across Jobs: Your setup is yours alone. If you change firms or roles, you can carry your AI instance, and its personalized training, with you.
Offline Functionality: Whether you're on a plane with no Wi-Fi or in a location with poor connectivity, your local AI will still work. This reliability is critical for lawyers who need consistent access to their tools.
Resilience Against Outages: If a cloud service provider experiences an outage or your firm's network goes down, you won't lose access to your AI capabilities.
6. Cost Savings
Once installed, local AI operates without ongoing subscription fees or API usage costs. While there may be an initial investment in hardware or setup, the long-term savings are significant, especially for smaller firms or solo practitioners managing tight budgets.
7. Future-Proofing Your Practice
The rapid pace of AI development can make cloud-based tools feel like a moving target, with frequent updates and changing functionality. Local AI provides stability. You choose the model, configure it to your needs, and aren't at the mercy of a provider's updates or policy changes.
Risks and Considerations for Running Local AI
While the benefits of local AI are compelling, it's important to understand the trade-offs and risks before diving in. Here's what you need to know to make an informed decision:
1. Hardware Requirements
Running AI models locally doesn't require a supercomputer, but it does demand adequate hardware. For example:
Macs: A MacBook Pro with an M3 Max chip and 64GB of memory can handle most models, including Llama 3, via tools like LM Studio.
PCs: A machine with a robust GPU (e.g., NVIDIA RTX 3070 or higher) and at least 32GB of RAM is a good baseline for running larger models.
You can also store models on external drives to save space, but for serious, frequent use, you may need to invest in a higher-spec device. The good news? Hardware demands are shrinking thanks to advancements like llama.cpp and lightweight models.
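If you want a rough sense of whether a given model fits your machine, a common back-of-the-envelope calculation (my own arithmetic, not a vendor spec) is: weights take about parameters times bits-per-weight divided by 8 bytes of memory:

```python
def estimated_weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough memory footprint of a model's weights alone, in gigabytes.

    Rule of thumb: parameters x (bits per weight / 8) bytes. Real usage
    runs higher because of the KV cache, activations, and runtime overhead.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# An 8-billion-parameter model quantized to 4 bits per weight:
print(f"{estimated_weight_gb(8, 4):.1f} GB")   # 4.0 GB for the weights alone
# The same model at full 16-bit precision:
print(f"{estimated_weight_gb(8, 16):.1f} GB")  # 16.0 GB
```

This is why quantized 8B-class models are comfortable on the machines described above, while 70B-class models at full precision are not.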
2. User Interface Trade-Offs
When using cloud-based tools like OpenAI or Gemini, you benefit from polished, intuitive interfaces designed by large UX teams. Local AI models, on the other hand, tend to feel more raw. While tools like LM Studio are user-friendly and improving, they won't match the seamless experience of browser-based systems just yet.
However, this trade-off comes with a silver lining: unmatched customization. You can tailor your local AIâs interface and behavior to suit your unique workflows.
3. Backup and Data Loss
With local AI, your model and data live on your device. If you lose your laptop, you lose your AI. Backups are essential. Whether it's saving logs of your prompts or creating periodic backups of your model, you'll need to take ownership of your AI's continuity.
As the ecosystem matures, we'll see more plug-and-play tools for local AI backups and recovery, but for now, it's a manual process.
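Manual does not have to mean laborious, though. A short scheduled script is enough; here is a minimal sketch (folder and file names are hypothetical examples) that zips a prompt-log or model folder into a dated archive:

```python
import shutil
import tempfile
from datetime import date
from pathlib import Path

def backup_folder(source: Path, dest_dir: Path) -> Path:
    """Zip `source` into `dest_dir` under a dated name; return the archive path."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    archive_base = dest_dir / f"{source.name}-{date.today():%Y%m%d}"
    return Path(shutil.make_archive(str(archive_base), "zip", root_dir=source))

# Demo with a throwaway folder standing in for a prompt-log directory.
with tempfile.TemporaryDirectory() as tmp:
    logs = Path(tmp) / "prompt-logs"
    logs.mkdir()
    (logs / "2025-01-10.txt").write_text("Q: summarize the lease...\n")
    archive = backup_folder(logs, Path(tmp) / "backups")
    print(archive.name)  # e.g. prompt-logs-20250110.zip
```

Point the destination at an encrypted external drive or firm-approved storage, schedule it with your OS's task scheduler, and the continuity problem largely disappears.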
4. Model Versatility
In a cloud environment, you can toggle between tools like OpenAI, Anthropic, or Gemini in your browser. With local AI, your setup is more constrained. Most devices can only handle one model at a time, maybe two, depending on your hardware and the model size.
5. Lack of Support
Local AI is a DIY experience. Unlike proprietary tools with dedicated support teams, you'll need to rely on open-source communities for help. Fortunately, these communities are vibrant. The flip side? Once your local AI is up and running, it's generally stable and doesn't require frequent maintenance or updates.
6. Choosing the Right Model
The sheer number of available models can be overwhelming. Thousands of options exist, and not all are reliable or well-documented. To mitigate risks:
Stick with trusted names like Meta's Llama, DeepSeek, or Qwen.
Use directories like Hugging Face to evaluate models based on community feedback and performance benchmarks.
Starting with widely recognized models from reputable organizations ensures you're not inadvertently using a model trained on poor-quality data or producing questionable outputs.
7. Emerging Ecosystem
The local AI landscape is evolving quickly. While today's tools may feel a bit raw, the rapid development of interfaces, optimization techniques, and community resources means the experience will be dramatically better in just a year. By getting in early, you'll build foundational knowledge as the ecosystem matures, putting you ahead of the curve.
Selected Resources:
What Does Using Local AI Look Like? Watch me hack a law exam using local AI HERE

Who is the author, Josh Kubicki?
I am a lawyer, entrepreneur, and teacher. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.