152 | EU AI Act Cheat Sheet
Brainyacts #152
It's Friday. Here is a short video on how a guy uses Google Bard as a collaborator to create a YouTube video on paper airplanes (big airplanes!).
Let's dive in!
In today's Brainyacts:
AI supervising AI?
An EU AI Act cheat sheet
H&R Block launches AI Tax bot and more AI model news
Who owns your voice in China? and other related content
Welcome to new subscribers!
To read previous editions, click here.
Lead Memo
OpenAI proposes that the best way to control superhuman AI is to use an AI supervisor, not a human one.
From OpenAI:
We believe superintelligence could arrive within the next 10 years. These AI systems would have vast capabilities: they could be hugely beneficial, but also potentially pose large risks.
Today, we align AI systems to ensure they are safe using reinforcement learning from human feedback (RLHF). However, aligning future superhuman AI systems will pose fundamentally new and qualitatively different technical challenges.
Superhuman AI systems will be capable of complex and creative behaviors that humans cannot fully understand. For example, if a superhuman model generates a million lines of extremely complicated code, humans will not be able to reliably evaluate whether the code is safe or dangerous to execute. Existing alignment techniques like RLHF that rely on human supervision may no longer be sufficient. This leads to the fundamental challenge: how can humans steer and trust AI systems much smarter than them?
Naive weak supervision isn't enough: current techniques, like RLHF, won't be sufficient for future superhuman models.
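For readers curious what RLHF actually optimizes, the step where human feedback enters is usually a reward model trained on pairs of responses that humans ranked. Here is a minimal sketch of that pairwise (Bradley-Terry) preference loss in plain NumPy, with made-up reward values for illustration; this is a generic textbook formulation, not OpenAI's code:

```python
import numpy as np

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected),
    averaged over pairs. Low when the reward model ranks the human-preferred
    response higher; high when it ranks the pair backwards."""
    margin = np.asarray(reward_chosen) - np.asarray(reward_rejected)
    # log1p(exp(-m)) is a numerically stable form of -log(sigmoid(m))
    return float(np.mean(np.log1p(np.exp(-margin))))

# Illustrative reward scores (invented numbers, two preference pairs):
good = preference_loss([3.0, 2.5], [0.0, -1.0])   # model agrees with humans
bad = preference_loss([0.0, -1.0], [3.0, 2.5])    # model ranks pairs backwards
print(f"aligned ranking loss:  {good:.3f}")
print(f"inverted ranking loss: {bad:.3f}")
```

The worry in the OpenAI post is precisely that this loop breaks down once humans can no longer tell which of two superhuman outputs is actually better.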
So, OpenAI is looking for research partners and is offering up serious $$. This is what they are looking for help with:
Superhuman AI Alignment: This is about making sure that extremely intelligent AI systems do what we want them to do, and don't act in ways that are harmful or unexpected. It's like teaching a super-smart robot to understand and follow our instructions correctly.
Learning from Limited Guidance: The grant is interested in how powerful AI systems can learn from simpler, less advanced systems or from limited human input. It's like a student learning from a less knowledgeable teacher and still excelling.
Understanding AI's Thought Process: This area focuses on figuring out how AI systems think and make decisions. It's like trying to read the mind of a computer program to understand why it does what it does.
AI Helping Humans Evaluate Other AI: Here, the focus is on using AI to help us check and understand the work of other AI systems, especially when the tasks are too complex for humans to assess on their own.
Honesty in AI: This involves ensuring that AI systems are truthful and transparent in their interactions, not misleading or hiding information.
Faithfulness in AI's Reasoning: This is about making sure that when AI systems explain their thought processes, these explanations are accurate and reflective of how they actually arrived at their decisions.
Robustness Against Manipulation: The grant is interested in making AI systems that are hard to trick or manipulate, ensuring they remain reliable even in unexpected situations.
Testing and Measuring AI Safety: This involves developing ways to test AI systems to make sure they are safe and aligned with our goals and values.
New Ideas and Approaches: Finally, the grant is open to innovative and novel ideas in the field of AI, especially those that help in aligning superintelligent AI systems with human intentions and values.
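The "learning from limited guidance" item is what OpenAI's researchers call weak-to-strong generalization: can a strong model trained only on a weak supervisor's noisy labels end up more accurate than the supervisor itself? Here is a toy sketch of the idea, with an invented two-feature task and a deliberately handicapped supervisor; every name and number below is an illustrative assumption, not OpenAI's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth on two features: label is 1 when x1 + x2 > 0.
X = rng.normal(size=(2000, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(float)

# Weak supervisor: only looks at x1, and 10% of its labels are flipped.
weak_fn = (X[:, 0] > 0).astype(float)
flipped = rng.random(2000) < 0.10
weak_labels = np.where(flipped, 1 - weak_fn, weak_fn)

# Strong student: logistic regression on BOTH features, trained by
# gradient descent against the weak labels only (it never sees y_true).
w, b = np.zeros(2), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - weak_labels)) / len(X)
    b -= 0.5 * (p - weak_labels).mean()

student_pred = ((X @ w + b) > 0).astype(float)
weak_acc = (weak_labels == y_true).mean()
student_acc = (student_pred == y_true).mean()
print(f"weak supervisor accuracy: {weak_acc:.2f}")
print(f"strong student accuracy:  {student_acc:.2f}")
```

In this toy, the student beats its own teacher simply by averaging away the teacher's label noise; the open research question is whether anything like that holds when the "student" is a superhuman model and the "teacher" is us.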
Why should legal professionals care or look into this?
While many of these challenges are tech-focused, they inevitably invoke and demand policy and ethics guidance. We might be on the cusp of a new era of oversight, which today rests on flawed human execution. AI promises to deliver oversight at scale, with fewer human tendencies (self-interest, self-enrichment, self-protection, etc.).
It seems to me that lawyers should want a role in ushering in the next era of transparency, honesty, accuracy, non-bias, and fairness in our systems.
Spotlight
EU AI Act Cheat Sheet
Oliver Patel, Enterprise AI Governance Lead at AstraZeneca, was kind enough to share this cheat sheet. In case you missed it, the EU has finalized the language of the Act (though we haven't seen it yet) and is currently getting member states and the EU Parliament to vote on it.
Read more here.
AI Model Notables
► Was OpenAI's GPT-4.5 just leaked? "Most advanced model brings multi-modal capabilities across language, audio, vision, video, and 3D, alongside complex reasoning and cross-modal understanding."
► Meta publicly launches AI image generator trained on your Facebook and Instagram photos
► OpenAI has now fixed a "major outage" and reopened its Plus subscriptions
► Tesla introduced Optimus Gen 2, a humanoid robot, and it dances better than most humans
► Meta's Ray-Ban glasses added AI that can see what you're seeing
► Dropbox spooks users with new AI features that send data to OpenAI when used
► Google goes after business reliance on OpenAI and Microsoft with its own AI Studio offering
► H&R Block launches AI tax filing assistant
News You Can Use:
→ OpenAI CEO opens up about the recent firing and rehiring drama, and says superintelligence is coming soon
→ Lightspeed Ventures discusses the challenges and opportunities of selling AI into the legal market
→ All-AI news channel shows off its AI news anchors
→ ChatGPT could help a country get into the EU
→ China's court hears nation's first AI voice rights case
→ This congressional candidate is using AI to have conversations with thousands of voters
→ The Philippines, Singapore, and Canada are the countries most interested in generative AI, according to this research
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.