- 223 | š§« š§ Using lab-grown human brain cells to power AI!
223 | š§« š§ Using lab-grown human brain cells to power AI!
Brainyacts #223
It's Tuesday. Want an interesting job or know someone who does? Tesla is hiring workers for $48 an hour to wear motion-capture suits to train its humanoid robots. The role, "Data Collection Operator," requires workers to be between 5'7" and 5'11" tall and able to walk for over seven hours a day.
Here we go.
In today's Brainyacts:
Nerfing AI models
AI IMPACT RISK mnemonic
Lab-grown human brains to power AI and other AI model news
No Fakes Act is a boon for lawyers and more news you can use
👋 to all subscribers!
To read previous editions, click here.
Lead Memo
How and why AI models get "Nerfed"
At one time or another, some of you might have run across the term "nerf" or heard about AI models "getting nerfed." If you're a Gen Xer like me, you probably remember the original Nerf: those soft, spongy sports balls that later evolved into foam-firing toy guns. The term "nerfing" actually originates from video game culture. In multiplayer games, when the developers made a particular weapon or system less effective, players would say it got "nerfed." In essence, "nerfing" means reducing the power or effectiveness of something.
In the AI world, we're seeing something similar. Many leading AI providers have been accused of "nerfing" their models. You might have noticed it too. Perhaps a model you once found incredibly smart and precise seems to have lost its edge. Responses that were once detailed and spot-on may now feel a bit watered down or overly cautious.
There are several reasons why AI models get nerfed, and it's important to understand these as they directly affect how we use these tools in our legal work.
1. Ethical Concerns: When AI models become too proficient at certain tasks, especially ones with potentially harmful applications, developers might deliberately limit their capabilities. For instance, when ChatGPT first came out, it could generate all sorts of content, including instructions for making homemade bombs. To prevent misuse, developers had to "nerf" the model in those areas, making it less effective at generating such content.
2. Fairness and Bias: If an AI model is found to perform better for certain groups or in specific scenarios, developers might adjust it to promote fairness. This often involves retraining or fine-tuning the model to reduce any inherent biases, which can sometimes make the model seem less capable or less "sharp" in its responses.
3. Regulatory Compliance: As regulatory bodies around the world begin to impose restrictions on AI, developers might be forced to nerf their models to comply with these new rules. This could mean reducing the model's ability to generate certain types of content or making it more conservative in its responses.
4. Cost-Saving Measures: One of the more controversial reasons AI models might get nerfed is to save on compute resources. High-quality, precise responses require significant computational power, which is expensive. Some companies have been accused of releasing highly capable models to attract users, only to nerf them later once they've built up a solid subscriber base. By reducing the model's effectiveness, they can cut costs on the backend while still maintaining their user base.
Companies like Anthropic and OpenAI have faced outcry over such practices, where models are perceived to have been deliberately nerfed to reduce operational costs after a surge in users.
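To make the cost lever concrete: one commonly floated mechanism (speculation in public discussion, not something any vendor has confirmed) is quietly serving a lower-precision, "quantized" copy of a model. The toy Python sketch below shows only the memory math behind that trade-off; the layer size and the int8 scheme are illustrative assumptions, not anyone's actual deployment.

```python
# Toy illustration of one rumored cost-saving lever: serving a quantized
# (lower-precision) copy of a model. Generic numpy memory math only --
# not any vendor's confirmed deployment practice.
import numpy as np

# One hypothetical 4096x4096 weight matrix, roughly one transformer layer's worth.
weights_fp32 = np.random.randn(4096, 4096).astype(np.float32)

# Symmetric int8 quantization: rescale floats into the [-127, 127] integer range.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

print(f"fp32: {weights_fp32.nbytes / 1e6:.1f} MB")  # ~67.1 MB
print(f"int8: {weights_int8.nbytes / 1e6:.1f} MB")  # ~16.8 MB, a 4x saving

# Dequantized weights are close, but not identical, to the originals --
# that small precision loss is one way subtly "nerfed" outputs could creep in.
max_err = np.abs(weights_fp32 - weights_int8.astype(np.float32) * scale).max()
print(f"max per-weight error: {max_err:.4f}")
```

Int8 storage is four times smaller than float32, and int8 math is often faster on modern hardware, which is why quantization is a plausible, though unproven, explanation when a model suddenly feels duller.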
Why This Matters for You: Understanding nerfing is crucial because it reminds us that these AI models are not static. They can and do change, sometimes without explicit notice to the users. As legal professionals, the tools we rely on today may not function the same way tomorrow. Whether it's for ethical reasons, compliance, or cost-saving measures, AI models might get nerfed in ways that impact our work.
So, the next time you notice a change in your AI tool's performance, you might be witnessing nerfing in action. It's a good reminder to stay vigilant and continuously evaluate the tools you're using to ensure they meet your needs.
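"Continuously evaluate" can be as lightweight as a scripted monthly spot check. Here is a minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY environment variable; the benchmark prompts, model name, and the rough 0.8 similarity threshold are hypothetical placeholders to swap for your own.

```python
# Minimal sketch: record a model's answers to a fixed prompt set over time,
# so a quiet capability change ("nerfing") shows up as measurable drift.
# Assumptions: OpenAI Python SDK (openai>=1.0), OPENAI_API_KEY set in the
# environment; prompts and model name are illustrative placeholders.
import json
import datetime
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BENCHMARK_PROMPTS = [  # hypothetical legal-flavored test prompts
    "Summarize the holding of Marbury v. Madison in two sentences.",
    "List three standard clauses in a commercial NDA.",
]

def snapshot(model: str = "gpt-4o") -> dict:
    """Ask every benchmark prompt once; return the answers with a timestamp."""
    answers = {}
    for prompt in BENCHMARK_PROMPTS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # keep runs as repeatable as the API allows
        )
        answers[prompt] = resp.choices[0].message.content
    return {"date": datetime.date.today().isoformat(), "answers": answers}

def drift(baseline: dict, current: dict) -> dict:
    """Crude 0-1 text similarity per prompt; low scores flag a change."""
    return {
        p: SequenceMatcher(None, baseline["answers"][p], current["answers"][p]).ratio()
        for p in baseline["answers"]
    }

if __name__ == "__main__":
    today = snapshot()
    print(json.dumps(today, indent=2))  # save this output somewhere dated
    # Next month: load the saved snapshot and compare, e.g.
    #   scores = drift(baseline=last_month, current=today)
    # Scores well below ~0.8 on previously stable prompts deserve a closer look.
```

Keep each snapshot in a dated folder; a sudden drop in similarity on prompts that used to be stable is your cue to re-verify the tool before relying on it for client work.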
Spotlight
☢️ ⚠️ AI IMPACT RISK
In the "I wish I thought of this" category, here is a 7-minute video introducing the IMPACT RISK mnemonic for remembering ten AI downsides.
AI Model Notables
► AI is power-hungry. Synthetic human brain cells can do the job using 100,000x less energy. This approach uses synthetic biology, such as miniature clusters of lab-grown human brain cells called organoids, to power AI. And yes, you can watch them via a livestream (though I have no idea what I am watching).
► OpenAI blocks Iranian influence operation using ChatGPT for U.S. election propaganda.
► OpenAI partners with Condé Nast.
► Build your own AI robot for $150. Hugging Face just released a comprehensive tutorial on how to build your own AI robot. It is expensive, though, and requires 3D printing; soon they will share a build tutorial that lowers the costs and uses off-the-shelf parts.
► Gemini Live, Google's answer to OpenAI's advanced voice bot, needs some more work.
► Chinese company Unitree shares video of its new G1 humanoid robot. This thing has uncanny speed and agility. It can be yours for $16,000.
News You Can Use:
→ Minnesota State Bar takes big step toward launching Gen AI regulatory sandbox.
→ California's controversial AI bill has been amended to soften the requirements on AI companies:
The bill no longer allows California's attorney general to sue AI companies for negligent safety practices before a catastrophic event occurs.
AI labs are now only required to submit public "statements" about their safety practices rather than certifications "under penalty of perjury."
Developers must now exercise "reasonable care," rather than provide "reasonable assurance," that AI models do not pose significant risks.
→ NO FAKES Act: a dream for lawyers, a nightmare for everyone else.
→ Berkeley Law launched a new Master of Laws program focused on AI law and governance, and it is expected to start next summer.
→ The US invested $335 billion in AI over the past decade, triple China's investment, with 71,000 AI job postings in 2024 alone.
→ The U.S. Agency for International Development will use OpenAI Enterprise to reduce admin burdens and ease partnerships.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multimillion-dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.