
226 | šŸŖ² šŸšØ More security flaws in Copilot

Brainyacts #226

Itā€™s Friday. Yesterday, I had two guests join my law school class - the managing partner and a chief business officer of an AmLaw100 firm. During our chat on the impact of AI, they shared that they had conducted a review of time and billing data for the last several years. They were looking for the type of work that GenAI is already doing and will do more of in the near future. Using a conservative approach, they found upwards of 45% of their billed time is at risk.

Let that sink in!

Onward šŸ‘‡

In todayā€™s Brainyacts:

  1. Why legal teams should consider open-source AI

  2. What you need to know about Nvidia

  3. More security flaws in Copilot and other AI model news

  4. Oprah on AI and more news you can use

    šŸ‘‹ to all subscribers!

To read previous editions, click here.

Lead Memo

šŸ¦™ šŸ—ƒļø Forget OpenAI and Microsoft, Use Metaā€™s Llama

Meta reported significant growth for its Llama AI models, with downloads approaching 350 million and usage increasing 10x since January.

I want to take the opportunity to share why your team might want to consider using an open-source model like Llama.

First, what is open source:

Open-source refers to software that is freely available for anyone to use, modify, and share. Unlike traditional software, where the code is kept secret, open-source softwareā€™s code is open for the public to see and improve. This means a community of developers from around the world can work together to make the software better, faster, and more secure. Open-source is like a recipe that anyone can access, tweak, and share with others, which leads to rapid innovation and a wide range of tools that people and businesses can use without paying for expensive licenses.

And yes, it can be secure. Too often, people assume open source means no security; that is flat wrong. Because the code is public, it is continuously reviewed by a global community, and vulnerabilities are often found and patched faster than in closed systems.

Benefits for Legal Teams Using Open-Source AI Models

1. Cost Efficiency - No Licensing Fees: Open-source models are free to use, eliminating the need for expensive licensing agreements. This allows firms to allocate resources more effectively, particularly when scaling AI usage.

2. Customizability - Tailored Solutions: Open-source models can be fine-tuned and modified to meet specific legal needs, providing a level of customization that proprietary models often do not offer. This ensures that AI tools align closely with the teamā€™s specific requirements, such as handling specialized legal language or processes.

3. Data Privacy and Security - Full Data Control: Law firms can deploy open-source models in secure, on-premises environments or within their private cloud infrastructure, ensuring complete control over sensitive client data and compliance with privacy regulations.

4. Transparency and Trust - Auditable Code: Open-source AI models offer transparency, allowing law firms and in-house teams to inspect and understand the AIā€™s decision-making processes. This is crucial for maintaining ethical standards and meeting legal compliance requirements.

5. Vendor Independence - Avoid Vendor Lock-In: Open-source models provide flexibility and autonomy, enabling firms to switch tools or integrate with other technologies without being dependent on a single vendorā€™s ecosystem. 

6. Innovation and Community Support - Rapid Advancements: The open-source community continuously contributes to the development and improvement of models like Llama. Legal teams can benefit from these innovations, often at a faster pace than proprietary models can offer.

 

Practical Considerations for Implementing Open-Source AI 

1. Access and Licensing - Direct Access: Open-source models like Llama are readily available for download and use under open-source licenses. Legal teams should ensure compliance with these licenses, which typically involve straightforward requirements such as attribution.

2. Support Options:

  • Community and Documentation: Leverage the extensive documentation and active community support available for open-source models. These resources are valuable for troubleshooting and gaining insights into best practices.

  • Commercial Support: For more complex needs, consider contracting with companies that offer enterprise-level support for open-source models, including integration, customization, and maintenance services. 

3. Customization and Deployment

  • In-House Expertise: Teams can customize open-source models to suit specific needs, such as by fine-tuning the AI with legal-specific data. Deploying the model in a secure, on-premises environment or through a trusted cloud provider ensures that it operates efficiently and securely.

  • Scalability: Partnering with cloud providers like AWS, Google Cloud, or Azure can facilitate the scalable deployment of open-source AI models, offering infrastructure support that matches the teamā€™s growing demands. 

4. Security and Compliance

  • Data Protection: Implement robust security measures when deploying open-source models to protect client data. On-premises deployment or using a secure cloud environment helps maintain compliance with legal regulations.

  • Ethical AI Practices: Ensure that the AIā€™s deployment adheres to ethical guidelines and legal standards, particularly concerning bias, transparency, and fairness.

5. Long-Term Maintenance

  • Ongoing Updates: Engage with the open-source community or subscribe to a service that tracks updates and security patches to keep the AI model current and secure.

  • Sustained Support: Consider a hybrid approach where in-house teams handle daily operations while external vendors provide support for more complex technical challenges.
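To make the deployment steps above concrete, here is a minimal sketch of how an in-house team might query a self-hosted Llama model. It assumes a model is already running behind an OpenAI-compatible HTTP endpoint on the firm's own infrastructure (tools such as Ollama and vLLM expose one); the URL and model name below are placeholders, not a specific vendor's API.

```python
import json
from urllib import request

# Hypothetical local endpoint; adjust to match your own
# on-premises or private-cloud deployment.
LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"


def build_payload(question: str, model: str = "llama3") -> dict:
    """Assemble a chat request for a self-hosted model.

    Because the model runs inside the firm's own infrastructure,
    the question (and any client data it contains) never leaves it.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "You are a careful legal research assistant.",
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # low temperature for more consistent answers
    }


def ask(question: str) -> str:
    """Send the request to the locally hosted model and return its reply."""
    req = request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The key design point is that the client talks only to an address the firm controls, which is what makes the data-privacy and compliance benefits above possible.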

File this under ā€œYou Should Know Thisā€

Thanks to my student, Claire Thiel, and my whole class for inspiring this essay.

Nvidia Stock, the AI Chip Hype-Cycle: What You Need to Know

Nvidia has become a household name for anyone even remotely connected to technology, especially as the excitement around artificial intelligence (AI) continues to surge. But what makes Nvidia special, and why is it fundamental to the current AI boom? Hereā€™s what you need to know.

The Rise of Nvidia: From Graphics to AI

Nvidia started as a company focused on creating graphics processing units (GPUs) primarily for gaming. These GPUs were designed to handle complex calculations required to render high-quality graphics quickly. While that might sound niche, the same ability to perform many calculations simultaneously made GPUs ideal for other demanding tasksā€”most notably, AI.

As AI began to evolve, researchers discovered that training AI modelsā€”essentially teaching a machine to understand and generate human-like text, recognize images, or predict outcomesā€”required immense computational power. Traditional central processing units (CPUs), which power most computers, were not up to the task. GPUs, on the other hand, could handle these tasks much more efficiently due to their parallel processing capabilities. This is where Nvidiaā€™s innovation in GPUs came into play.

CUDA: The Key to Unlocking AI Power

One of Nvidiaā€™s most significant contributions to the AI revolution is its development of CUDA (Compute Unified Device Architecture) in 2006. CUDA is a platform and programming model that allows developers to use Nvidia GPUs for general-purpose computing, not just graphics. This innovation made it easier for AI researchers and developers to use GPUs to accelerate the training and operation of AI models.

Before CUDA, using GPUs for anything other than graphics was complex and impractical for most developers. CUDA simplified the process, enabling widespread adoption of GPUs in AI. Suddenly, what would take weeks or even months to compute on CPUs could be done in days or even hours on GPUs. This capability was crucial as AI models grew more sophisticated and required more computational power.
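CUDA itself is programmed at the C/C++ level, but the core idea, applying one operation to many data elements at once instead of looping over them one by one, can be sketched in plain Python with NumPy. This is an illustration of the speedup principle, not Nvidia's actual API:

```python
import time

import numpy as np

# One million numbers to transform, mimicking the kind of bulk
# arithmetic an AI model performs constantly during training.
data = np.random.rand(1_000_000)

# Serial approach: one element at a time, like a single CPU core.
start = time.perf_counter()
serial = [x * 2.0 + 1.0 for x in data]
serial_time = time.perf_counter() - start

# Bulk approach: the same arithmetic expressed as one vectorized
# operation. This is the style of computation GPUs (via CUDA)
# accelerate by applying it to thousands of elements in parallel.
start = time.perf_counter()
bulk = data * 2.0 + 1.0
bulk_time = time.perf_counter() - start

print(f"serial: {serial_time:.3f}s, bulk: {bulk_time:.3f}s")
```

Even on a CPU, the bulk version is typically orders of magnitude faster; on a GPU, with thousands of cores working simultaneously, the gap widens further. That is why training runs that took weeks on CPUs shrank to days or hours on GPUs.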

Why Nvidia Is Fundamental to AI Today

Today, Nvidia is at the heart of the generative AI boom. Generative AI, the technology behind tools like ChatGPT, DALL-E, and other AI-powered applications, relies heavily on massive amounts of data processing. Training these AI models involves running countless calculations to find patterns in data, which is where Nvidiaā€™s GPUs excel.

Moreover, Nvidia continues to innovate with new GPU architectures and specialized chips designed specifically for AI workloads. These advances not only make AI more powerful but also more accessible to companies of all sizes, further fueling the AI hype cycle.

The Hype and the Reality

Nvidiaā€™s stock has seen remarkable growth, reflecting the excitement and optimism around AI. However, like any technology trend, there is always a hype cycleā€”a period where expectations may outpace reality. Investors are betting on Nvidiaā€™s continued dominance in AI, but itā€™s important to understand that the market can be volatile. The success of Nvidia is tied not just to its current products but also to its ability to stay ahead of the curve in a rapidly evolving field.

šŸšØ This is not financial advice.

AI Model Notables

ā–ŗ Microsoft Copilot: from prompt injection to exfiltration of personal information - the cat-n-mouse game of AI development and theft.

ā–ŗ These first-of-their-kind agreements between the U.S. government and the AI industry, starting with OpenAI and Anthropic, are said to help advance safe and trustworthy AI innovation for all.

ā–ŗ Google Gemini will let you create images of people again after previously shutting is down due to some unfortunate problems.

ā–ŗ Google also announced Gems - Gems are custom versions of Gemini with specific instructions to act in a certain way. It's kinda like "Custom GPTs," but right now, it's mostly just a fancy prompt-saver. Gems are only available to Gemini Advanced users for now.

ā–ŗ Nvidia and Apple reportedly discussed joining OpenAIā€™s funding round with Microsoft, potentially valuing the AI startup at over $100 billion.

ā–ŗ Publishers and platforms are opting out of allowing Apple's AI to train on their content.

News You Can Use:

āž­ California lawmakers approved a bill proposing sweeping AI regulations, including safety testing requirements and potential legal consequences for harmful AI systems. The bill, which seeks to make companies liable if their artificial intelligence harms people, is at the center of a debate over how to regulate the technology.

āž­ Why are you not using GenAI? Likely because you arenā€™t confident and need some help (you need my training!). You are not alone. Only 23% of employees feel they have all the needed skills and knowledge, and nearly one-third of employees believe their company would be AI-ready if it offered more training around data and AI.

āž­ Sweden's Klarna says AI chatbots help shrink headcount.

āž­ Arizona State University personalizes learning and advances research with ChatGPT

āž­ Oprah Winfrey sets interview with OpenAI CEO Sam Altman for ABC Special on AIā€™s Impact. Coming Sept. 12.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter.


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.