097 | 👵☎️ Ma Bell

Brainyacts #97

In today’s Brainyacts we:

  1. give a shout out to one of you

  2. walk you through how ChatGPT was “taught”

  3. hear from Jack Dorsey & Mark Zuckerberg on AI fears

  4. see a glimpse of OpenAI’s Business subscription (maybe)

  5. ask if Google might have its Ma Bell Moment

  6. highlight a UK law firm AI spinout

  7. ask how to properly cite generative AI sources

👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.

Top Referrer Shoutout!

For bringing at least 10 subscribers to Brainyacts, I give a special shoutout to each of you who reaches this milestone. Thank you all!

Today we have Mark Dodd.

Mark Dodd works as the Head of Market Insights globally at LOD (www.lodlaw.com), a pioneer of alternative legal services. Mark works at the intersection of law, technology and innovation and is working closely with LOD’s tech gurus and commercial experts on the application of generative AI in the legal space.

In a previous job, Mark worked for LexisNexis in London and helped steer the company’s AI submission to the House of Lords Select Committee on AI back in 2017.

🤳 📳 Mark is active on LinkedIn and welcomes connecting with people in the AI and legal space - or anyone who wants to hit a game of tennis.

Mark and I connected earlier today when we chatted for the LOD Podcast. That episode should be coming out next week. In the meantime, you can listen to previous episodes by going here: https://www.lodlaw.com/our-thinking/lodcast/

Thank you Mark 😄🙏🎉

🧑‍🏫🚌 How ChatGPT Was Taught

You likely hear about how these models are “taught” or “tuned,” but you may struggle to understand exactly what that looks like. Today I want to share a simple guide to how ChatGPT was trained.

Below is a pic I found that illustrates this well. Following that, I wrote up the guide using a teacher/student/practice exam scenario that hopefully helps you understand this process. It really is remarkable but also shows you how vital human feedback is to the overall process.

Stage 1: Collect data from human responses

This stage is essentially a teacher-student dynamic. Imagine you're tutoring a student (the AI), and they ask you, "why is the sky blue?" You would then give them an example of a correct answer. Over time, the student observes these correct responses and starts to understand how to form similar responses on their own.

Stage 2: Train a model to evaluate responses

This is like a practice exam session. You give your student (the AI) a question like "why is the sky blue?" again. They will then provide multiple answers based on what they've learned from your previous lessons. Your job as the tutor is to review and rank these responses by how correct and relevant they are. Through this process, the AI learns not just how to respond, but also how to recognize and rank the quality of responses, much like a student learning to assess their own work.

Stage 3: Reinforcement Learning

This is the stage where your student (the AI) begins to take full practice exams independently. You might ask a new question like, "how does a telephone work?" The AI will propose an answer based on what it has learned. Then the AI uses the evaluation skills developed in Stage 2 (in practice, a separate reward model) to score the quality of its own answer. This feedback loop helps the AI fine-tune its responses. Just as a student learns from grading their own practice exams, the AI progressively improves at answering new questions in a relevant and accurate manner.

This entire process is iterative and continuously refined, allowing the AI model to learn from past interactions and gradually become better at generating human-like text.
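The three stages above can be sketched in miniature. The toy Python below is purely illustrative: the demonstration data, the `score_response` heuristic standing in for a learned reward model, and the canned candidate answers are all my own assumptions, not OpenAI's actual datasets or algorithms. It only shows the shape of the loop: demonstrations, a preference-based scorer, then picking responses the scorer prefers.

```python
# Toy sketch of the three training stages described above.
# Everything here is a stand-in for illustration only.

# Stage 1: supervised demonstrations — human-written (prompt, answer) pairs
# the model learns to imitate.
demonstrations = {
    "why is the sky blue?":
        "Sunlight scatters off air molecules, and blue light scatters the most.",
}

def score_response(prompt: str, response: str) -> float:
    """Stage 2 stand-in: a 'reward model' learned from human rankings.
    Here we fake it with a heuristic: on-topic word overlap plus a small
    bonus for longer answers."""
    overlap = len(set(prompt.lower().split()) & set(response.lower().split()))
    return overlap + 0.01 * len(response)

def generate_candidates(prompt: str) -> list[str]:
    """A real model would sample many continuations; we use canned ones."""
    return [
        "I don't know.",
        "The sky looks blue because sunlight scatters in the atmosphere.",
        "Bananas are yellow.",
    ]

def reinforce(prompt: str) -> str:
    """Stage 3 stand-in: generate candidates and keep the one the reward
    model prefers, nudging the model toward higher-scoring answers."""
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda r: score_response(prompt, r))

best = reinforce("why is the sky blue?")
print(best)  # the on-topic answer wins
```

In the real pipeline, Stage 3 uses a reinforcement-learning algorithm (OpenAI has described using PPO) to update the model's weights rather than simply selecting among candidates, but the feedback loop has the same shape: generate, score with the reward model, adjust.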

🛍️🖼️ Dorsey & Zuckerberg on AI Regulation & Governance

Both interviews took place within the last week.

First, we have a short clip I made of Jack Dorsey.

Jack Dorsey is an American Internet entrepreneur, philanthropist, and programmer who is a co-founder and former CEO of Twitter, Inc., as well as co-founder, principal executive officer, and chairperson of Block, Inc., the developer of the Square financial services platform. He is also a co-founder of Bluesky, PBLLC, a decentralized social network initiative.

Here Jack talks about how AI ‘doomers’ are overreacting and how we are nowhere near the danger point of AI hurting humanity. He talks about building the controls into the technology itself. I share this one-minute video because it sets up the next one with Zuckerberg.

Ok, so if you didn’t know who Jack is, that is understandable. But, I am guessing Mark Zuckerberg needs no intro.

Here Mark is talking to Lex Fridman. Lex asked Mark about the fear that AI will have too much power and control humanity. Mark’s response is super insightful and helpful. I won’t ruin the moment so please watch this short 3 ½ minute video. Hang in there as Mark unpacks this - trust me it is worth it.

Mark dissects the nuances between intelligence and autonomy in AI. He compares the human mind, split into the primitive brain and the neocortex, to a dual-mode AI system. Our primitive brain, regardless of the intelligence or advanced cognition the neocortex brings, still holds the reins of our instincts. Mark suggests that we can construct an AI system with a similar division: one 'primitive' mode where safety mechanisms reside, and another 'super-intelligent' mode that handles all the high-level functionality.

The perceived threat and loss of control largely come from granting AI autonomy—allowing it a form of 'free will.' With this autonomy, the outcomes become unpredictable, and therein lies the fear. The irony shines brightly when you consider the three cardinal warnings AI researchers and scientists have cautioned us about for decades:

  1. Do not allow AI to connect to the internet.

  2. Do not permit AI to code or program software.

  3. Do not grant AI agency or autonomy.

Yup! We've disregarded these warnings and done all three. Yet, as Mark and other experts like Jack point out, there's a glimmer of hope. Despite these transgressions, our current level of AI sophistication falls significantly short of exploiting these three capabilities to their full potential. Nonetheless, as AI continues to evolve, it's imperative we guard these boundaries carefully in the pursuit of advancement.

For the complete interviews:

News you can Use: 

Leaked OpenAI Memo Shows Upcoming Updates?

Is this the long awaited business version that OpenAI wanted to get out in April?

We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default. We plan to make ChatGPT Business available in the coming months.

Is Google Going to Have Its Ma Bell Moment?

The "Ma Bell moment" is a phrase that refers to a landmark decision in the history of American telecommunications, when Judge Harold H. Greene oversaw the breakup of the American Telephone and Telegraph Company (AT&T) — often known as "Ma Bell" — into a collection of smaller, independent companies, popularly called the "Baby Bells."

The breakup was the result of an antitrust lawsuit filed by the U.S. Department of Justice in 1974, which argued that AT&T held an illegal monopoly over telephone service in the United States. After nearly a decade of legal proceedings, AT&T agreed in 1982 to divest itself of its local exchange services, effectively splitting the company into eight parts: the new AT&T, which continued to offer long distance service, and seven regional Bell operating companies (the Baby Bells) that provided local telephone service.

The "Ma Bell moment" is therefore a symbol of major industry disruption, when a powerful monopoly was dismantled by regulatory action in order to foster competition and innovation. It is often used as a metaphor in discussions of antitrust policy, particularly in the tech industry, to refer to the potential for large and powerful companies to be broken up by government intervention.

This has been tried before, most recently with the Justice Department's 2020 antitrust suit against Google. It will be interesting to see whether the advent of AI generates a different outcome this time.

Largest UK Law Practice Focused On Employment Claimants Spins Out AI Startup

  • Grapple, an AI-driven employment law advice service, was developed by Alex Monaco, founder of Monaco Solicitors, the largest employment law practice in the UK solely working for claimants. The platform allows users to track their workplace grievances, assess the validity of their case, and draft legal letters to employers.

  • The service was created with a £99,000 grant from Innovate UK and has won the Law Society award for excellence in technology and innovation. Mr. Monaco is in advanced talks with unions, charities, and advice centers to license the technology, thereby levelling the playing field against employers equipped with HR and legal teams.

  • Grapple's future plans include the introduction of a premium version, potential white-labelling for other law firms, and the addition of features to draft key documents needed for tribunal hearings. Alex Monaco aims to build human advice into the system and eventually expand Grapple's reach globally, reinforcing his belief that "the law is not owned by lawyers."

If You Want to Cite Generative AI Models as a Source, How Would You?

I raised this question a few hours ago on Twitter. Interesting replies. What say you? It doesn’t have to be legal Bluebook citations - I’ll take any examples.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.