
It's Friday. "The goal for the product (a sales contract mark-up AI tool)," Wang said, "is to take work from other law firms."
"If it is cannibalizing our own work . . . that is because it is better than the work we provide now," he said. "It can then be predatory about other lawyers' work. So, I'm planning to cannibalize our competitors' work, because there is plenty there to eat. This scarcity mindset people have, I think, is totally wrong."
This, IMO, is the right mindset for law firms as they develop and use AI.
From this article.
Letโs get to it.
In today's Brainyacts:
Explaining P(doom) and OpenAI resignations
Take over Brainyacts for an edition!
Reaction to Copilot is โmehโ and other AI model news
This company is using ChatGPT to draft legal contracts and other AI-related content
👋 to new subscribers!
To read previous editions, click here.

Lead Memo
💣 🧨 What is P(doom)?
And Why So Many Resignations at OpenAI Recently?
The concept of "P(doom)" has gained traction among AI researchers and safety advocates. This metric reflects the growing anxiety surrounding the unchecked development of AI and its potential existential threats. Recent events at OpenAI underscore the urgency of these concerns and highlight significant internal discord.
The Concept of P(doom)
"P(doom)", short for "probability of doom," is a term used to estimate the likelihood that artificial intelligence will cause severe harm to humanity. This metric, popularized by the Rationalist community on platforms like LessWrong, has become a serious consideration among AI researchers and technologists. Esteemed figures in the AI field, such as Dario Amodei (CEO of Anthropic), Geoffrey Hinton (noted AI expert and ex-Google Brain), and Elon Musk, have expressed varying levels of concern, with their P-doom estimates ranging from 5% to 90%โ. These estimates reflect deep uncertainties about the future trajectory of AI and its potential impacts on society.ย
Recent Resignations at OpenAI
In recent weeks, OpenAI has experienced a wave of resignations, particularly among those responsible for AI safety and governance.
OpenAI co-founder and chief scientist Ilya Sutskever announced that he is leaving the company, following months of speculation about his role in the November 2023 ousting of Sam Altman. Sutskever said he is confident that OpenAI will "build AGI that is both safe and beneficial" under the current leadership. But also leaving is Jan Leike, Sutskever's co-lead on the superalignment team (if you want to read Jan's reasons for leaving, check them out here).
Daniel Kokotajlo, a key figure on the governance team, resigned due to apprehensions about OpenAI's responsible conduct regarding artificial general intelligence (AGI). Daniel previously shared that his P(doom) is 70%. Similarly, William Saunders, a member of the Alignment team since 2021, stepped down after three years at the company. Their departures, along with those of other members of the Superalignment team, signal significant internal strife.
The Superalignment team at OpenAI focuses on creating safeguards to prevent AGI from becoming uncontrollable. The resignation of these essential personnel raises critical questions about the company's commitment to AI safety and its ability to effectively manage the ethical implications of its technology. The lack of a response from OpenAI regarding these resignations further exacerbates concerns about transparency and accountability within the organization.
Analysis and Implications
The effective functioning of OpenAI's Superalignment team is crucial for establishing protective measures against AGI posing a threat to humanity. The loss of key members from this team could undermine these efforts, leaving significant gaps in the company's ability to manage and mitigate AI risks. As OpenAI grapples with these challenges, its ability to maintain leadership in AI development and adhere to its mission of ensuring that AGI benefits all of humanity is in jeopardy.
Conclusion
The concept of P(doom) encapsulates the profound uncertainties and risks associated with the rapid advancement of AI. Addressing these challenges transparently and effectively is essential to maintaining public trust and ensuring the responsible development of AI technologies. As stakeholders continue to debate the risks and benefits of AI, the concept of P(doom) serves as a stark reminder of the high stakes involved.

Learn How AI Impacts Strategy with MIT
As AI technology continues to advance, businesses are facing new challenges and opportunities across the board. Stay ahead of the curve by understanding how AI can impact your business strategy.
In the MIT Artificial Intelligence: Implications for Business Strategy online short course you'll gain:
Practical knowledge and a foundational understanding of AI's current state
The ability to identify and leverage AI opportunities for organizational growth
A focus on the managerial rather than technical aspects of AI to prepare you for strategic decision making

AI Model Notables
› OpenAI now allows you to interact with tables and charts and add files directly from Google Drive and Microsoft OneDrive.

› OpenAI is bringing Reddit's content to ChatGPT
› Elon Musk's AI chatbot, Grok, is now accessible in Europe

› The 100 things Google just announced this week
› Microsoft AI event coming on Monday - here is some coverage on Copilot (takeaway - so far commercial buyers are "meh")

News You Can Use:
⭐ Swedish payments unicorn Klarna is using ChatGPT to draft legal contracts, but AI isn't replacing lawyers yet
⭐ Pricing AI-driven legal services: It's not a question of cost recovery
⭐ Four ways boards can support the effective use of AI
⭐ Sony Music warns tech companies over "unauthorized" use of its content to train AI
⭐ Hollywood at a Crossroads: "Everyone is using AI, but they are scared to admit it"
⭐ Predict errors in LLM outputs and win $10k. Dislike hallucinations? Create a probability estimation model that determines whether the prompt provided to a language model will elicit an outcome that demonstrates factuality, bias, or misdirection. A new bounty program has been launched by Human Intelligence that even beginners can participate in.
⭐ US senators unveil AI policy roadmap, seek $32B of government funding boost
Was this newsletter useful? Help me to improve!
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.


