
231 | ✂️ 🧘 OpenAI's o1 changes prompting - what you need to know

Brainyacts #231

It’s Tuesday. One of my law students sent me this. Being the father of a 14YO daughter, it hit home.

Onward 👇

In today’s Brainyacts:

  1. What is OpenAI’s new o1 model? And what you need to know.

  2. Copilot might finally be getting useful

  3. OpenAI’s o1 is smarter than most humans and other AI model news

  4. Ubers becoming robotaxis and more news you can use

    👋 to all subscribers!

To read previous editions, click here.

Lead Memo

🍓🍓 OpenAI changes prompting (for some)!

How to Adapt Your Prompting Habits for OpenAI’s New o1 Model

When generative AI models like ChatGPT first became popular, many users approached them in much the same way they had approached search engines like Google. They would type in vague or brief queries, expecting the model to “get” what they were looking for immediately. The results were often disappointing—too broad, unclear, or just plain wrong.

Eventually, users learned (and are still learning) that providing more information makes a difference. To get better results, we need to specify context, include examples, and break tasks down into steps. In short, the more deliberate and structured we are with our prompts, the better the responses we receive. In practice, this means (an example follows the list):

  • Providing context: Telling the model what situation or problem you’re facing.

  • Offering examples: Giving the AI concrete references to work from.

  • Being explicit: Asking for precise outputs or step-by-step answers.
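
For instance, a fully spelled-out prompt in that older style might look something like this (the scenario is invented purely for illustration):

Context: I am a solo practitioner reviewing a commercial lease for a small bakery client.
Task: List the five clauses I should scrutinize most closely and explain why each one matters.
Example: For the assignment clause, flag whether the tenant can transfer the lease if the business is sold.
Format: A numbered list with one short paragraph per clause.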

Mastering these techniques takes time, but it leads to noticeable improvements in the quality of the responses generated. As a result, users can get high-value, nuanced, and insightful results from the models. This is the basis for how I teach lawyers and law students to use generative AI pragmatically.

o1 is different

While there is a lot of excitement around o1 (there should be), you need to know that o1 is geared for math, science, and coding. Brainyacts readers are typically not using these models for this type of work.

For the average user looking to use AI to help think, draft, organize, refine, learn, and so on, o1 and GPT-4o will behave about the same. There are some differences, but most will not notice them. I’ll get into that in a second.

As to prompting, o1 doesn’t need or even want you to micromanage it as we do with other models. Instead, it is built to think more like a human who has been entrusted to solve a problem on their own. It’s not just delivering quick answers; it pauses, thinks, and reasons, aiming for a more considered and thoughtful response.

So, what does this mean for prompting in o1?

 1. Keep It Simple: The first key change is to stop overcomplicating things. The long-winded, hyper-detailed prompts that we learned to rely on are no longer necessary. In fact, they can be counterproductive. The o1 model is designed to “fill in the gaps” on its own. You don’t need to spell everything out anymore. Instead of asking: “Can you write a 500-word essay about how legal factors impact location and operations of large data centers?” you can now say, “write about how legal factors impact data centers.” The model will infer what you need and deliver a detailed response.

 2. Let It Think for Itself: For other models, we often included phrases like “think step by step” or asked for explanations of reasoning. With o1, this is unnecessary. The model already has its own built-in reasoning process, and it will take the necessary steps to reflect on the problem before delivering a response. You don’t need to hold its hand—just ask your question and trust the model to work through the problem on its own.

 3. Use Delimiters for Clarity: This is where most users will get lost. While the o1 model thrives on simplicity, it does appreciate clarity. To avoid confusion and help the model break down complex prompts, it’s helpful to use delimiters like triple quotes, headers, or even Markdown (all new and foreign to most casual users). These function as signposts, making it easier for the model to know where one part of the prompt ends and another begins. For example, instead of writing a continuous block of text like, “Give me 5 legal issues with data centers and explain each,” you might say:
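
One illustrative version (the exact wording is mine, just to show the structure), using the triple quotes and simple headers mentioned above:

"""
Task: Give me 5 legal issues with data centers.

Output: A short, plain-language explanation of each issue, under its own heading.
"""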

This gives the model clear guidance without overwhelming it with unnecessary detail.

Why This Matters

For many of us, this is a significant shift. I’ve been rallying and teaching people to craft detailed prompts, and now, we’re being asked to do the opposite—pare back, simplify, and trust the model to think critically on its own. This might feel counterintuitive at first, especially if you’ve grown accustomed to being specific and guiding the AI step by step.

A Comparison: GPT-4o vs. o1

As I said above, o1 is geared for math, science, and coding. For most users and typical use cases, there will be little difference between the two models’ responses. But I wanted to give you an example where you can see both how o1 is better at math and how each model “fills in the blanks” in a similar way.

I tested both models with the following prompt, which blends math, reasoning, and context, to see how each would behave. You can see that while GPT-4o’s answer is correct, o1’s catches the nuance and is more precise. Regardless, both use language to explain themselves.

▶ PROMPT: If a law firm's lawyers collectivity work 100,000 hours. Bills out $37,250,000 to its clients. And collects 75% of what it works, what other information do you need to tell me the billing realization?

 GPT-4o:

 o1:
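
To see what the models have to work out here, below is a minimal sketch of the billing realization math in Python. The blended standard rate is a number I made up for illustration; it is exactly the “other information” the prompt is fishing for, since billing realization compares what was billed against the standard value of the time worked.

# A rough sketch of the billing realization math behind the prompt above.
# Billing realization = amount billed / standard value of the hours worked,
# so the missing piece is the firm's blended standard hourly rate.
# The $500/hour figure is a made-up assumption, not part of the prompt.

hours_worked = 100_000        # hours the lawyers collectively worked
amount_billed = 37_250_000    # dollars billed to clients

assumed_blended_rate = 500    # dollars per hour (hypothetical)
standard_value = hours_worked * assumed_blended_rate   # $50,000,000

billing_realization = amount_billed / standard_value
print(f"Billing realization: {billing_realization:.1%}")   # -> 74.5%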

Spotlight

🤯 👂 Ok, Microsoft Copilot starts to get useful

Yesterday, Microsoft launched its Wave 2 of Copilot, expanding Copilot's capabilities across its productivity suite. New additions include Copilot Pages for collaborative AI workspaces, Python support in Excel for advanced data analysis, and autonomous AI agents to automate business processes.

Below is a video showing the new Copilot in action inside PowerPoint.

AI Model Notables

Beginning yesterday, iPhone owners with compatible devices can update to the latest operating system, iOS 18. The update lays the groundwork for Apple Intelligence, but those features are not slated to arrive until October, and they will require an iPhone 15 Pro, 15 Pro Max, or one of the new iPhone 16 models. Earlier models such as the iPhone 14, 13, 12, and 11 can run iOS 18 but will not get the new AI features. Also, only English will be supported at first.

OpenAI's o1 model is smarter than most humans, scoring 120 on the Norway Mensa IQ test.

OpenAI has not independently verified these results. If they hold up, however, it would move OpenAI one more milestone closer on its 5-tier roadmap toward Artificial General Intelligence (the fully autonomous kind we both fear and hope for?).

Meta restarts plans to train AI using UK users’ public Facebook and Instagram posts.

Google sued for trademark infringement over ‘Gemini’ AI system.

► OpenAI increases usage limits on the new o1 model.

► For those ultra-worried about privacy and security in AI models, here is a way to try them that protects you - use DuckDuckGo’s AI Chat to test 4 models.

News You Can Use:

A company gave 1,000 AI agents access to Minecraft, and they built a society. Now you may dismiss this as fun and games, but it is much more. Minecraft emulates human society, and these AI agents, left to their own devices, began exhibiting behaviors such as commerce, governance, and chasing dreams. In one instance, an AI agent became a priest and started bribing people to convert! More in the video below.

Drones piloted by AI could prevent wildfires.

Uber will expand its alliance with Waymo. Robotaxi rides can currently only be hailed via the Uber app in Phoenix; next year, they'll add Austin and Atlanta.

Hasbro CEO says all his mates are using AI for their D&D games, which is apparently 'a clear signal that we need to be embracing it'.

Saudi Arabia just stepped up plans to court the world's AI leaders to its desert kingdom.

Intel gets Amazon AWS as a customer for its new AI chip.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.