177 | 🄷 šŸ” They can hack into your AI model for $20

Brainyacts #177

It's Tuesday. WTF! A fully autonomous AI robot inappropriately touches a female reporter. Attempted hug or cop a feel? Either way, funny, not funny. 😬😕

Letā€™s dig in!

In today's Brainyacts:

  1. Hacking AI models with $20

  2. Kate Middleton ushers us into the No-Trust Era

  3. Read OpenAI's filing in the Elon lawsuit and other AI model news

  4. AI dolls to cure loneliness and other AI-related content

šŸ‘‹ to new subscribers!

To read previous editions, click here.

Lead Memo

🄷 šŸ” They can hack into your AI model for $20

Understanding the Security Risks in AI Language Models: Implications for Users and Organizations

In the rapidly evolving landscape of generative AI tools like OpenAI's ChatGPT and Google's Gemini, easy-to-miss security risks are becoming an issue.

A recent research paper titled "Stealing Part of a Production Language Model" sheds light on a critical security vulnerability inherent in these AI systems, a vulnerability that both casual users and organizations should be acutely aware of.

The Essence of the Threat

The paper introduces a novel cyberattack specifically designed to target and extract sensitive information from AI language models without needing direct access to their underlying architecture. This type of attack, known as a "model-stealing attack," focuses on pilfering the embedding projection layer of a transformer model. This layer is crucial as it represents how the AI model processes and understands input data, translating it into a format that the model can work with effectively.

The researchers demonstrated that for a relatively low cost (under $20 for some models) it's possible to extract this essential component of the AI, revealing detailed information about the model's structure, including its "hidden dimension." The hidden dimension is a key indicator of the model's complexity and its ability to process information. By uncovering this detail for models like OpenAI's Ada and Babbage, and even the advanced gpt-3.5-turbo model, the researchers not only proved the feasibility of their attack but also exposed a significant oversight in the security of these AI systems.
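
To make the mechanics concrete, here is a minimal sketch in Python of the core linear-algebra intuition only, not the paper's full attack: if an attacker can observe full logit vectors for many different prompts, stacking them into a matrix and checking its numerical rank reveals the hidden dimension, because every logit vector is the same (vocab_size Ɨ hidden_dim) projection matrix applied to some hidden state. The get_logits helper below is hypothetical; it simply stands in for whatever API access an attacker has.

```python
import numpy as np

# Hypothetical helper: returns the full logit vector (one float per
# vocabulary token) for a single prompt. In the paper's setting this
# information has to be reconstructed from a provider's API; here it is
# just a stand-in so the linear-algebra step stays clear.
def get_logits(prompt: str) -> np.ndarray:
    raise NotImplementedError("replace with your own model access")

def estimate_hidden_dimension(prompts: list[str], tol: float = 1e-4) -> int:
    """Estimate a model's hidden dimension from observed logits.

    Each logit vector is W @ h, where W is the (vocab_size x hidden_dim)
    embedding projection matrix and h is the final hidden state. A matrix
    of logit vectors from many prompts therefore has numerical rank at
    most hidden_dim, and with enough distinct prompts the rank equals it.
    """
    Q = np.stack([get_logits(p) for p in prompts])  # (num_prompts, vocab_size)
    singular_values = np.linalg.svd(Q, compute_uv=False)
    # Singular values collapse toward zero past the hidden dimension;
    # count the ones that are non-negligible relative to the largest.
    return int(np.sum(singular_values > tol * singular_values[0]))
```

The real attack goes much further (recovering the projection matrix itself and working around APIs that expose only top-k log probabilities), but this rank trick is the intuition for why ordinary API access can leak architectural details.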

Why This Matters

For casual users and developers utilizing these AI models through APIs for various applications, this revelation might initially seem distant from immediate concerns. However, the implications are far-reaching. The ability to extract such critical information cheaply and easily means that proprietary technologies can be replicated or reverse-engineered. This not only undermines the significant investment and research that goes into developing these models but also poses a threat to the integrity and confidentiality of applications built on top of them.

For organizations, particularly large law firms that are increasingly integrating AI into their operations, the stakes are even higher. Law firms deal with sensitive client information, relying on the confidentiality and security of the tools they use. An underlying vulnerability in an AI model could not only jeopardize client data but also expose firms to potential legal and reputational damage. Furthermore, firms investing in developing their proprietary AI models need to be acutely aware of these vulnerabilities to protect their intellectual property and ensure the security of their innovations.

Moving Forward: Defenses and Mitigations

The research concludes with discussions on potential defenses and mitigations against such attacks. For AI developers and organizations, this underscores the importance of adopting robust security measures that go beyond traditional cybersecurity practices. Protecting AI models requires a specialized approach, taking into consideration the unique ways in which these models can be exploited.

Implementing layers of security that obscure the inner workings of the AI, regularly updating and patching models to address newly discovered vulnerabilities, and monitoring usage patterns for anomalies that could indicate an attack are all crucial steps. Furthermore, the AI community must foster collaboration and share knowledge on emerging threats and best practices for defense.
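
As one illustration of the "monitoring usage patterns" point, here is a toy sketch of flagging API keys whose traffic looks like logit probing rather than normal chat use. The request fields (logprobs, used_logit_bias) and the threshold are assumptions for illustration, not a production detection rule.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ApiRequest:
    api_key: str           # caller identity
    logprobs: int          # how many per-token log probabilities were requested
    used_logit_bias: bool  # whether the request manipulated logit_bias

def flag_suspicious_keys(requests: list[ApiRequest],
                         max_probe_like_calls: int = 1000) -> list[str]:
    """Flag API keys whose usage resembles logit probing.

    Model-stealing attacks need a very large number of calls that extract
    fine-grained logit information (log-probability requests combined with
    logit_bias tricks). Counting such calls per key is a crude but cheap
    first-line signal to pair with other defenses.
    """
    probe_like = Counter(
        r.api_key for r in requests if r.logprobs > 0 or r.used_logit_bias
    )
    return [key for key, count in probe_like.items() if count > max_probe_like_calls]
```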

For organizations building or heavily relying on AI, this research underscores the need for a proactive and comprehensive approach to securing these technologies. As AI continues to evolve, so too must our strategies for protecting these innovations and the privacy and integrity of the data they process.

Spotlight

āŒ›ļø šŸ¤„ We are officially in the AI-driven No Trust Era

Deepfakes go mainstream and into the pop culture psyche.

The Kate Middleton Photo Controversy:

On Mother's Day in the United Kingdom, the Prince and Princess of Wales released a photo of Kate Middleton with her children, which was intended to show her recovery from a recent surgery. However, the Associated Press (AP) issued a "kill notification" stating that the photo should no longer be used due to manipulation by the source. This revelation led to widespread speculation about the photo's authenticity and the involvement of AI in its creation.

The Implications of AI-Generated Content:

The Kate Middleton photo controversy has brought the issue of AI-generated content to the forefront of public discourse. With the rapid advancements in AI technology, it is becoming increasingly difficult to distinguish between real and AI-generated images. This has led to a growing concern about the spread of misinformation and the erosion of trust in the authenticity of visual content.

The Emergence of the AI-Driven No-Trust Era:

The Kate Middleton photo controversy may be a sign that we have officially entered the AI-driven no-trust era.

Regular readers of this newsletter know that I have been on the deepfake beat for some time. I still am. The primary reason is that deepfakes are already being entered into evidence in court proceedings around the world. The old rules of authenticity and chain of custody are getting harder to apply in the AI world.

While Kate Middleton's personal situation is none of my concern (though I hope she is doing fine), this event hopefully serves as a warning sign to more people. We cannot trust audio, video, and photos anymore. It is too easy to fabricate them. And this was just a sloppy job! How will we know when an expert has doctored or completely manufactured a piece of evidence?

AI Model Notables

ā–ŗ Elon shares that Grok AI will be open-sourced this week

  • Grok was released in November 2023

  • It has a "fun mode" and a "regular mode"

  • Access Grok via X/Twitter Premium+ subscriptions

  • It should soon have a live search function into X/Twitter, plus a 25k context window

  • Slated to run natively in Teslas soon

ā–ŗ OpenAI hires Musk foe Wachtell to fight lawsuit over company's mission

ā–ŗ Read OpenAI's "response" to Elon's lawsuit via an "application for complex designation"

ā–ŗ Google restricts election-related queries for its Gemini chatbot

ā–ŗ ERNIE Bot (China) surpasses GPT-4 in Chinese, says Baidu founder

ā–ŗ Nvidia is sued by authors over AI use of copyrighted works

ā–ŗ Harvey AI seeks 'all sorts' of lawyers in AI legal hiring spree

News You Can Use:

āž­ Welcome to the valley of the creepy AI dolls

āž­ Executives are spending on AI, but just 38% are actually training their workers on it

āž­ Five of this year's Pulitzer finalists are AI-powered

āž­ What are Australian law firms doing with AI?

āž­ Criminal Liability: Florida teens arrested for creating 'deepfake' AI nude images of classmates

āž­ Vatican names Google AI boss to scientific academy

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the newsletter.

Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.