198 | 🍕🤢 Google, glue on pizza?
Brainyacts #198

It’s Friday. You can now buy your very own humanoid AI avatar robot.
Let’s get to it.
In today’s Brainyacts:
Learning how AI models think
ChatGPT desktop app
Put glue on your pizza(?) and other AI model news
Solo & small firms on AI and other AI-related content
👋 to new subscribers!
To read previous editions, click here.

Lead Memo
🤖💬 Learning how AI Models “Think”
Anthropic shared research that helps us better understand how AI models think and behave. The researchers used a technique called “dictionary learning.” Below is a synopsis of their research paper.
What is Dictionary Learning?
Imagine you have a big box of letters, like in a Scrabble game. You can use these letters to form words. Each letter by itself doesn’t mean much, but when you combine them into words, they start to make sense.
In the same way, dictionary learning helps us understand how AI models work by breaking down complex data into simpler, more understandable parts, much like how letters form words.
Why Does Dictionary Learning Matter?
AI Models as Black Boxes: When we use AI models, we often see them as "black boxes." We put something in, get an answer out, but we don’t know how the AI came up with that answer. This makes it hard to trust that the AI is safe and reliable because we don’t know if it might give harmful or biased responses.
Understanding Neurons: Inside the AI model, there are many neurons (like the letters in our analogy). These neurons activate (light up) in different ways when the AI processes information. However, looking at these activations directly is confusing because they don’t make much sense on their own.
Finding Features: Previously, researchers tried to match patterns of these neuron activations (features) to things we can understand, like concepts or ideas. Dictionary learning helps by finding recurring patterns in how neurons activate, so we can describe what the AI is doing in simpler terms. Instead of looking at many neurons lighting up, we look at a few key features (like looking at words instead of letters).
Scaling Up: Anthropic started with simple AI models and found that dictionary learning could surface concepts like uppercase text or DNA sequences. The researchers then applied it to larger, more complex AI models. This was challenging because bigger models are like huge dictionaries with many more words and meanings.
Success with Big Models: By applying dictionary learning to a big AI model like Claude 3 Sonnet, the researchers found millions of features. These features correspond to various concepts, like cities (San Francisco), people (Rosalind Franklin), or programming code (function calls). These features are like words that represent complex ideas and can work across different languages or with images.
Importance of Dictionary Learning
Building Trust: When we understand how an AI model works inside, it’s easier to trust it. We can see why it gives certain answers and ensure it’s not biased or dangerous.
Improving AI: Knowing what features the AI uses helps us make the AI better. We can train it more effectively and fix problems, making it more reliable and useful.
Transparency: Just like reading words is easier than figuring out letters, understanding features in AI makes the whole process clearer. It helps everyone see what’s happening inside the AI model.
In short, dictionary learning takes some of the confusing internal workings of AI models and breaks them down into simpler parts we can understand. This is helping make AI more transparent, trustworthy, and effective.
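For the technically curious, here is a minimal sketch of the idea using scikit-learn’s DictionaryLearning on made-up “neuron activations.” This is not Anthropic’s actual pipeline (they trained sparse autoencoders on real Claude activations at enormous scale); the data, sizes, and parameter values below are purely illustrative.

```python
# Toy illustration of dictionary learning, not Anthropic's method.
# We fabricate synthetic "neuron activations" and ask scikit-learn to find
# a small dictionary of sparse features that reconstructs them.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Pretend each row is the activation vector of 64 "neurons" for one input.
# In the real research, these would come from a language model's hidden layers.
n_samples, n_neurons = 500, 64
activations = rng.normal(size=(n_samples, n_neurons))

# Learn 16 dictionary atoms ("features") so that each activation vector is
# approximated by a sparse combination of them.
dict_learner = DictionaryLearning(
    n_components=16,                  # number of candidate "features" to find
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,              # encourages sparsity: few features per input
    max_iter=100,
    random_state=0,
)
codes = dict_learner.fit_transform(activations)  # which features fire for each input
atoms = dict_learner.components_                 # each feature as a direction in neuron space

print(codes.shape)  # (500, 16)
print(atoms.shape)  # (16, 64)
print(f"average nonzero features per input: {np.count_nonzero(codes, axis=1).mean():.1f}")
```

The payoff in the real research is that instead of 64 tangled neuron values per input, you get a handful of sparse feature activations, and each learned feature tends to correspond to a human-recognizable concept (a city, a person, a piece of code), which is what makes the model’s behavior easier to audit.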

Spotlight
👩💻 🤯 OpenAI’s Desktop App
At VivaTech, a European technology conference held in Paris earlier this week, OpenAI’s head of developer experience Romain Huet demonstrated the ChatGPT desktop app. The conversation was remarkably natural as he asked it to address the audience of 400+, and he even had it do so more enthusiastically and in French.
One of the most impressive moments came when Huet opened the camera module (coming in the next few months) in the ChatGPT Voice section of the desktop app. He took 10 seconds to draw a rough sketch of the Eiffel Tower and the Arc de Triomphe. ChatGPT looked at it via the camera and identified both from the sketch (about 3 minutes into the video if you want to jump ahead). It’s pretty incredible.
Next, he showed ChatGPT a paper map, pointed roughly to where he was, and asked how to get to the places in his sketch. It was able to give a detailed train route with stops and changes.

AI Model Notables
► OpenAI policy researcher Gretchen Krueger resigned, with a post citing safety concerns about the company.
I gave my notice to OpenAI on May 14th. I admire and adore my teammates, feel the stakes of the work I am stepping away from, and my manager @Miles_Brundage has given me mentorship and opportunities of a lifetime here. This was not an easy decision to make.
— Gretchen Krueger (@GretchenMarina)
10:08 PM • May 22, 2024
► Google promised a better search experience via AI, and now it’s telling us to put glue on our pizza.
► Amazon plans to give Alexa an AI overhaul — and a monthly subscription price.
► A UK watchdog is looking into Microsoft’s AI-powered Recall feature, which takes screenshots of everything you look at and do on your PC.
► OpenAI strikes a news media mega-deal - OpenAI will receive access to current and archived content from News Corp’s major news and information publications, including The Wall Street Journal, Barron’s, MarketWatch, Investor’s Business Daily, FN, and New York Post; The Times, The Sunday Times and The Sun; The Australian, The Daily Telegraph, The Courier Mail, The Advertiser, and Herald Sun; and others.
► McKinsey’s myIMO is trained on a vast repository of M&A playbooks and best practices to help companies make well-informed decisions about their integrations or separations.
► Khan Academy makes Khanmigo free for US teachers and partners with Microsoft.
► Microsoft Edge (their browser for all you Apple folks 🙂 ) unveils real-time AI translation for YouTube and more.
► Elon Musk’s xAI is working on making Grok multimodal.
► Meta could charge for a premium version of its AI assistant.

News You Can Use:
➭ FCC proposes all AI-generated content in political ads must be disclosed
➭ Nvidia just made $14 billion worth of profit in a single quarter thanks to AI chips, and it’s hitting the gas from here on out
➭ Arizona state lawmaker used ChatGPT to write part of law on deepfakes
➭ Clio’s 2024 Legal Trends for Solo and Small Law Firms report has some interesting findings on GenAI
➭ Democratic operative Steve Kramer indicted over AI Biden robocalls
➭ Let your own AI answer those calls you don’t want to - Truecaller partners with Microsoft to let its AI respond to calls in your own voice.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter.
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.