196 | 💣 🧨 OpenAI is shedding AI safety leaders
Brainyacts #196

It’s Friday. “The goal for the product (a sales contract mark-up AI tool),” Wang said, “is to take work from other law firms.”
“If it is cannibalizing our own work . . . that is because it is better than the work we provide now,” he said. “It can then be predatory about other lawyers’ work. So, I’m planning to cannibalize our competitors’ work, because there is plenty there to eat. This scarcity mindset people have, I think, is totally wrong.”
This is, in my opinion, the right mindset for law firms to take when developing and using AI.
From this article.
Let’s get to it.
In today’s Brainyacts:
Explaining P(doom) and OpenAI resignations
Take over Brainyacts for an edition!
Reaction to Copilot is “meh” and other AI model news
This company is using ChatGPT to draft legal contracts and other AI-related content
👋 to new subscribers!
To read previous editions, click here.

Lead Memo
💣 🧨 What is P(doom)?
And Why So Many Resignations at OpenAI Recently?
The concept of “P(doom)” has gained traction among AI researchers and safety advocates. This metric reflects the growing anxiety surrounding the unchecked development of AI and its potential existential threats. Recent events at OpenAI underscore the urgency of these concerns and highlight significant internal discord.
The Concept of P(doom)
"P(doom)", short for "probability of doom," is a term used to estimate the likelihood that artificial intelligence will cause severe harm to humanity. This metric, popularized by the Rationalist community on platforms like LessWrong, has become a serious consideration among AI researchers and technologists. Esteemed figures in the AI field, such as Dario Amodei (CEO of Anthropic), Geoffrey Hinton (noted AI expert and ex-Google Brain), and Elon Musk, have expressed varying levels of concern, with their P-doom estimates ranging from 5% to 90%. These estimates reflect deep uncertainties about the future trajectory of AI and its potential impacts on society.
Recent Resignations at OpenAI
In recent weeks, OpenAI has experienced a wave of resignations, particularly among those responsible for AI safety and governance.
OpenAI co-founder and chief scientist Ilya Sutskever announced that he is leaving the company, following months of speculation about his role in the November 2023 ousting of Sam Altman. Sutskever said he is confident that OpenAI will ‘build AGI that is both safe and beneficial’ under the current leadership. Also leaving is Jan Leike, co-lead with Sutskever of the Superalignment team (if you want to read Jan’s reasons for leaving, check them out here).
Daniel Kokotajlo, a key figure on the governance team, resigned over concerns about whether OpenAI will conduct itself responsibly as it pursues artificial general intelligence (AGI). Daniel has previously shared that his P(doom) is 70%. Similarly, William Saunders, a member of the Alignment team since 2021, stepped down after three years at the company. Their departures, along with those of other members of the Superalignment team, signal significant internal strife.
The Superalignment team at OpenAI focuses on creating safeguards to prevent AGI from becoming uncontrollable. The resignation of these essential personnel raises critical questions about the company's commitment to AI safety and its ability to effectively manage the ethical implications of its technology. The lack of a response from OpenAI regarding these resignations further exacerbates concerns about transparency and accountability within the organization.
Analysis and Implications
The effective functioning of OpenAI's Superalignment team is crucial for establishing protective measures against AGI posing a threat to humanity. The loss of key members from this team could undermine these efforts, leaving significant gaps in the company's ability to manage and mitigate AI risks. As OpenAI grapples with these challenges, its ability to maintain leadership in AI development and adhere to its mission of ensuring that AGI benefits all of humanity is in jeopardy.
Conclusion
The concept of P(doom) encapsulates the profound uncertainties and risks associated with the rapid advancement of AI. Addressing these challenges transparently and effectively is essential to maintaining public trust and ensuring the responsible development of AI technologies. As stakeholders continue to debate the risks and benefits of AI, the concept of P(doom) serves as a stark reminder of the high stakes involved.

Spotlight
The Brainyacts Newsletter Takeover.
Get direct access to over 4000 legal people interested in the pragmatic use of generative AI. I am turning my newsletter over to you for an edition. It is The Brainyacts Newsletter Takeover.
— Josh Kubicki (@jkubicki)
9:08 PM • May 15, 2024
Learn How AI Impacts Strategy with MIT
As AI technology continues to advance, businesses are facing new challenges and opportunities across the board. Stay ahead of the curve by understanding how AI can impact your business strategy.
In the MIT Artificial Intelligence: Implications for Business Strategy online short course you’ll gain:
Practical knowledge and a foundational understanding of AI's current state
The ability to identify and leverage AI opportunities for organizational growth
A focus on the managerial rather than technical aspects of AI to prepare you for strategic decision making

AI Model Notables
► OpenAI now allows you to interact with tables and charts and add files directly from Google Drive and Microsoft OneDrive.

► OpenAI bringing Reddit’s content to ChatGPT
► Elon Musk’s AI, Grok, is now accessible in Europe

► The 100 things Google just announced this week
► Microsoft AI event coming on Monday - here is some coverage of Copilot (takeaway - so far, commercial buyers are “meh”)

News You Can Use:
➭ Swedish payments unicorn Klarna is using ChatGPT to draft legal contracts—but AI isn’t replacing lawyers yet
➭ Pricing AI-driven legal services: It’s not a question of cost recovery
➭ Four ways boards can support the effective use of AI
➭ Sony Music warns tech companies over ‘unauthorized’ use of its content to train AI
➭ Hollywood at a Crossroads: “Everyone is using AI, but they are scared to admit it”
➭ Predict errors in LLM outputs and win $10k. Dislike hallucinations? Create a probability-estimation model that determines whether a prompt provided to a language model will elicit an output that demonstrates factuality, bias, or misdirection. Human Intelligence has launched a new bounty program that even beginners can participate in.
➭ US senators unveil AI policy roadmap, seek $32B of government funding boost
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.