223 | šŸ§« šŸ§  Using lab-grown human brain cells to power AI!

Brainyacts #223

It's Tuesday. Want an interesting job or know someone who does? Tesla is hiring workers for $48 an hour to wear motion-capture suits to train its humanoid robots. The role, "Data Collection Operator," requires workers to be between 5'7" and 5'11" tall and able to walk for over seven hours a day.

Here we go.

In today's Brainyacts:

  1. Nerfing AI models

  2. AI IMPACT RISK mnemonic

  3. Lab-grown human brains to power AI and other AI model news

  4. No Fakes Act is a boon for lawyers and more news you can use

    šŸ‘‹ to all subscribers!

To read previous editions, click here.

Lead Memo

šŸ‰ šŸ”« How and why AI models get ā€œNerfedā€

At one time or another, some of you might have run across the term "nerf" or heard about AI models "getting nerfed." If you're a Gen Xer like me, you probably remember the original Nerf: those soft, spongy sports balls that then evolved into foam-firing toy guns. The term "nerfing" actually originates from video game culture. In multiplayer games, when a particular weapon or system was made less effective by the game developers, players would say it got "nerfed." In essence, "nerfing" means reducing the power or effectiveness of something.

In the AI world, we're seeing something similar. Many leading AI models have been accused of "nerfing" themselves. You might have noticed it too. Perhaps a model you once found incredibly smart and precise seems to have lost its edge. Responses that were once detailed and spot-on may now feel a bit watered down or overly cautious.

There are several reasons why AI models get nerfed, and it's important to understand these as they directly affect how we use these tools in our legal work.

1. Ethical Concerns: When AI models become too proficient at certain tasks, especially ones with potentially harmful applications, developers might deliberately limit their capabilities. For instance, when ChatGPT first came out, it could generate all sorts of content, including instructions for making homemade bombs. To prevent misuse, developers had to "nerf" the model in those areas, making it less effective at generating such content.

2. Fairness and Bias: If an AI model is found to perform better for certain groups or in specific scenarios, developers might adjust it to promote fairness. This often involves retraining or fine-tuning the model to reduce any inherent biases, which can sometimes make the model seem less capable or less "sharp" in its responses.

3. Regulatory Compliance: As regulatory bodies around the world begin to impose restrictions on AI, developers might be forced to nerf their models to comply with these new rules. This could mean reducing the model's ability to generate certain types of content or making it more conservative in its responses.

4. Cost-Saving Measures: One of the more controversial reasons AI models might get nerfed is to save on compute resources. High-quality, precise responses require significant computational power, which is expensive. Some companies have been accused of releasing highly capable models to attract users, only to nerf them later once they've built up a solid subscriber base. By reducing the model's effectiveness, they can cut costs on the backend while still maintaining their user base.

Companies like Anthropic and OpenAI have faced outcry over such practices, where models are perceived to have been deliberately nerfed to reduce operational costs after a surge in users.
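To make the cost-saving scenario concrete, here is a minimal, purely hypothetical sketch in Python of how a provider could quietly dial a service down. The model names, subscriber tiers, and token caps are invented for illustration and do not describe any actual vendor's system; the point is simply that traffic can be routed to a cheaper model, or answers capped in length, without any visible change to the product.

```python
# Hypothetical illustration only: how a provider could quietly "nerf" a service
# by routing traffic to a cheaper model and capping output length.
# Model names, tiers, and thresholds are invented for this sketch.
from dataclasses import dataclass


@dataclass
class ChatRequest:
    prompt: str
    subscriber_tier: str  # e.g. "free" or "pro"


def pick_model(req: ChatRequest) -> tuple[str, int]:
    """Return (model_name, max_tokens) under cost-saving rules."""
    if req.subscriber_tier == "pro":
        return "big-model-v2", 2048   # full-strength model, longer answers
    return "small-model-v1", 512      # cheaper model, shorter answers


def call_llm(model: str, prompt: str, max_tokens: int) -> str:
    # Stand-in for a real inference backend; just reports what would be used.
    return f"[{model}, max {max_tokens} tokens] response to: {prompt[:40]}"


def handle(req: ChatRequest) -> str:
    model, max_tokens = pick_model(req)
    return call_llm(model, req.prompt, max_tokens)


if __name__ == "__main__":
    print(handle(ChatRequest("Summarize this 40-page lease.", "free")))
    print(handle(ChatRequest("Summarize this 40-page lease.", "pro")))
```

From the user's side, nothing about the interface changes; only the depth and length of the answers do, which is why this kind of nerfing tends to be noticed rather than announced.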

Why This Matters for You: Understanding nerfing is crucial because it reminds us that these AI models are not static. They can and do change, sometimes without explicit notice to users. As legal professionals, we need to remember that the tools we rely on today may not function the same way tomorrow. Whether it's for ethical reasons, compliance, or cost-saving measures, AI models might get nerfed in ways that impact our work.

So, the next time you notice a change in your AI tool's performance, you might be witnessing a nerfing in action. It's a good reminder to stay vigilant and continuously evaluate the tools you're using to ensure they meet your needs.
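One practical way to put that vigilance into action is to keep a small, fixed set of prompts that matter to your work and re-run them on a schedule, logging the answers so you can compare them over time. Below is a minimal sketch that assumes the official OpenAI Python SDK and an API key in your environment; the model name, the sample prompts, and the crude length count are placeholders, and the real evaluation is you rereading the saved answers side by side.

```python
# Minimal drift check: re-run the same prompts over time and log the results.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY;
# the model name and prompts are placeholders for whatever you actually use.
import csv
import datetime

from openai import OpenAI

client = OpenAI()

PROMPTS = [
    "List the elements of promissory estoppel with one-line explanations.",
    "Draft a two-sentence confidentiality clause for a consulting agreement.",
]


def run_checks(model: str = "gpt-4o-mini", log_path: str = "model_log.csv") -> None:
    today = datetime.date.today().isoformat()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for prompt in PROMPTS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp.choices[0].message.content or ""
            # Crude signal only: answer length. Review the saved text for quality.
            writer.writerow([today, model, prompt, len(answer), answer])


if __name__ == "__main__":
    run_checks()
```

Run it weekly. If the same prompts start producing noticeably shorter or vaguer answers, that is a signal to re-test the tool before relying on it for client work.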

Spotlight

ā˜¢ļø ā›”ļø AI IMPACT RISK

In the "I wish I thought of this" category, here is a 7-minute video introducing the IMPACT RISK mnemonic for remembering ten AI downsides.

AI Model Notables

ā–ŗ AI is power-hungry. Lab-grown human brain cells can do the job using 100,000x less energy. The approach uses synthetic biology: miniature clusters of lab-grown human brain cells, called organoids, power the AI. And yes, you can watch them via a livestream (though I have no idea what I am watching).

ā–ŗ OpenAI blocks Iranian influence operation using ChatGPT for U.S. election propaganda.

ā–ŗ OpenAI partners with Condé Nast.

ā–ŗ Build your own AI robot for $150. Hugging Face just released a comprehensive tutorial on how to build your own AI robot. The current build is expensive and requires 3D printing, but they will soon share a build tutorial that lowers the costs and uses off-the-shelf parts.

ā–ŗ Gemini Live, Google's answer to OpenAI's advanced voice bot, needs some more work.

ā–ŗ Chinese company Unitree shares video of its new G1 humanoid robot. This thing has uncanny speed and agility. It can be yours for $16,000.

News You Can Use:

āž­ Minnesota State Bar takes big step toward launching Gen AI regulatory sandbox.

āž­ California's controversial AI Bill has been amended to soften the requirements on AI companies:

  • The bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event occurs.

  • AI labs are now only required to submit public "statements" about their safety practices vs certifications "under penalty of perjury."

  • Developers must now provide "reasonable care" vs "reasonable assurance" that AI models do not pose significant risks.

āž­ NO FAKES ACT – A dream for lawyers, a nightmare for everyone else.

āž­ Berkeley Law launched a new Master of Laws program focused on AI law and governance, and it is expected to start next summer.

āž­ The US invested $335 billion in AI over the past decade, triple China's investment, with 71,000 AI job postings in 2024 alone.

āž­ The U.S. Agency for International Development will use OpenAI Enterprise to reduce admin burdens and ease partnerships.

Was this newsletter useful? Help me improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million-dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.