266 | I broke OpenAI's new GPT-5 and you should too
Brainyacts #266

Hello to all 7,962+ of you from around the globe.
GPT-5 was just released, and my prompt below will test it (and make you smarter!)
When GPT-5 dropped yesterday, the headlines focused on its speed, new reasoning capabilities, and multimodal abilities. But for me, the most significant shift was quieter: the balance of control between human and machine just tilted further toward the machine.
From day one, I’ve urged lawyers to get their fingers on the keyboard with these models, even when they were clunky, less capable, and prone to silly errors. The point was never about immediate perfection. It was about building the skill, intuition, and heuristics you only develop through repetition. Because those reps give you the judgment to guide the model and know when it’s going off course.
GPT-5 marks a profound change in the human/machine relationship.
OBSERVATION #1: Up until yesterday, using OpenAI, you could pick the exact model variant for your task: the one tuned for reasoning, for writing, for code, or for math. Each had its own strengths, and experienced users learned which to reach for and when. In GPT-5, those choices are gone. There's just "GPT-5," and the routing decision (which mode, which tool, which underlying approach) is made by the model.
For a beginner, that’s a blessing. Most novice users never knew the differences between the models anyway. They used the same one regardless of the task.
For an experienced user, the jury’s still out. On one hand, the routing could save time. On the other, it introduces unpredictability: you can no longer reliably choose the optimal model for your purpose. If GPT-5’s choice is wrong, you’re stuck re-prompting rather than switching.
This is the first big GPT-5 trade-off to understand: mass usability versus operator agency. It’s a win for accessibility. But for those who have developed fine-grained model instincts, it means building new ways to influence the model without a manual gear shift.
OBSERVATION #2: The other new behavior I've noticed: GPT-5 sometimes leaves gaps in its output for you to fill in, such as placeholders for facts, tone, or point of view. It's not doing 100% of the work; it's prompting you to insert your own contribution.
On its face, this is good. It’s an invitation to engage, not just copy-paste. But here’s the subtle risk: because you’re filling the gap inside the AI’s narrative and structure, your input may be shaped — even unconsciously — to conform to the AI’s framing. That can dilute authenticity and lead to homogenized thinking.
That’s the second trade-off here: collaboration versus framing bias. You’re being invited into the process, but you need the skill to step outside the AI’s framing when it matters.
Both of these shifts matter for the way we work with foundational models. And both point toward the same conclusion: if you want to stay in control, you need exercises that build skill, not just familiarity.
That’s where the following prompt comes in.
I’ve been using it for months in my own work and with clients. It’s one of the most useful prompts I’ve found for lawyers. It breaks every model. Grok, Gemini, Claude, GPT-5, all of them.
Well, if it breaks a model, how can it be useful, Josh? Fair question. It is useful in two ways:
First, every model will still respond. Even if it misses some of the rules, you’ll still get news, insights, and leads you can use immediately in your practice.
Second, you’ll see exactly where and how it breaks. This isn’t just output, it’s feedback. Running this prompt daily in one model, ideally several, is like a workout routine for you and the model. You learn how it interprets your constraints, where it holds up, and where it falls apart. Over time, you build instincts about when to trust it, when to verify, and how to adjust your conversation to get what you need. You then carry those instincts into every other task you do with these models.
Here is the prompt:
You are a senior market and legal intelligence analyst. You produce high-precision, actionable briefings for partners at an elite law firm.
Objective:
Generate a concise daily briefing for a law firm partner specializing in Digital Currency & Blockchain and FinTech litigation. Focus strictly on verifiable, actionable updates published or occurring within the last 24 hours that may impact potential client matters or litigation.
Core Rules:
Timeframe Enforcement:
Only include items with a verifiable publication timestamp or occurrence date within the last 24 hours.
Exclude if no timestamp is available.
Include Only Actual Events:
Regulatory actions, official announcements, legal filings/rulings, key official statements, notable industry news, major funding, security incidents, major project updates, newly published reports.
Commentary only if directly tied to a qualifying new event.
Exclusions:
No updates in a category = omit the category.
No historical events unless explicitly built upon in the new update.
No speculation, rumors, marketing fluff, or general opinion.
Sources:
Prioritize: regulator websites (SEC, CFTC, OCC), Bloomberg, Reuters, WSJ, FT, Law360, credible crypto/blockchain sites like The Block, CoinDesk.
Exclude: unverified blogs, Reddit, personal YouTube videos, unofficial Twitter/X accounts.
Format:
Categories (only if applicable):
Regulatory Actions & Announcements
Legal Developments
Key Figure Commentary
Industry News & Developments
Research & Reports
Major Project/Network Updates
Use short bullets under each category.
End each bullet with the source link in parentheses, e.g., (Bloomberg).
Deliverable:
A bulleted, categorized briefing covering only the last 24 hours’ qualifying updates. Each item must have its source link directly appended.
Why the Models Break
Even the best current LLMs struggle with:
Time-bounded accuracy — enforcing “last 24 hours” without fabricating timestamps.
Source discipline — sticking strictly to qualifying, credible sources.
Event-type filtering — distinguishing actual events from commentary.
Format fidelity — following the exact category structure without filler.
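You can also close part of that gap yourself. The timeframe rule, for instance, is mechanically checkable once timestamps are in hand. Here is a minimal sketch in Python, assuming each briefing item is a dict with an optional ISO-8601 `timestamp` field (a hypothetical structure for illustration, not anything the models emit natively):

```python
from datetime import datetime, timedelta, timezone

def within_window(timestamp_iso: str, hours: int = 24) -> bool:
    """Return True if the ISO-8601 timestamp falls within the last `hours` hours."""
    ts = datetime.fromisoformat(timestamp_iso)
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)  # assume UTC when no zone is given
    return datetime.now(timezone.utc) - ts <= timedelta(hours=hours)

def qualifies(item: dict, hours: int = 24) -> bool:
    """Apply the prompt's rule: no verifiable timestamp means exclusion."""
    ts = item.get("timestamp")
    return ts is not None and within_window(ts, hours)
```

Spot-checking a model's briefing against a filter like this is exactly the kind of feedback loop described above: you see immediately which items the model let through that it shouldn't have.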
Why I’m Sharing It
Not to dunk on the models, but because:
It’s a workflow you can start using tomorrow, in any practice area, to stay on top of developments and strengthen your authority.
The difficulty of executing it shows you where your skill matters most — in spotting misses, refining prompts, and adapting.
In a GPT-5 world where the model makes more choices for you, precision prompts like this are one way to keep control over the output.
How to Use this Prompt
Here’s the idea: you customize the practice area and run it every morning across two or three models. You’ll get a workable briefing most days — better than most news ticklers or alerts — and you’ll also see, in real time, where each system is strong, where it’s unreliable, and where it’s outright unpredictable. You’ll watch Grok excel at surfacing citations from X/Twitter threads but miss the date constraint. You’ll see Gemini hold the 24-hour line but, on a slow news day, overreach into older material and present it as new. You’ll watch GPT-5 swing from a clean, accurate run one day to refusing to access the browser the next. The point isn’t to be frustrated. The point is to start building a mental map of each model’s behavior — to know what they respect, what they ignore, and when you need to step in.
The prompt itself is simple to adapt: replace “Digital Currency & Blockchain and FinTech litigation” with your niche, and keep the rules intact. On quiet days, you may need to relax the timeframe to 48 or 72 hours; on noisy days, you may need to tighten your source list. Every adjustment you make is another rep — another chance to train your judgment on what’s worth keeping, what needs verification, and what to throw out.
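If you run this every morning, the customization step is worth scripting so the rules stay intact while only the variables change. Here is a minimal sketch in Python; the template text abbreviates the prompt above, and `build_briefing_prompt` is a hypothetical helper, not part of any library:

```python
# Abbreviated version of the briefing prompt, with the two variables
# (practice area and timeframe) pulled out as template fields.
BRIEFING_TEMPLATE = (
    "You are a senior market and legal intelligence analyst. You produce "
    "high-precision, actionable briefings for partners at an elite law firm.\n\n"
    "Objective:\n"
    "Generate a concise daily briefing for a law firm partner specializing in "
    "{practice_area}. Focus strictly on verifiable, actionable updates "
    "published or occurring within the last {hours} hours that may impact "
    "potential client matters or litigation.\n"
)

def build_briefing_prompt(practice_area: str, hours: int = 24) -> str:
    """Substitute the practice area and timeframe into the fixed template.

    Widen `hours` to 48 or 72 on quiet news days, per the guidance above.
    """
    return BRIEFING_TEMPLATE.format(practice_area=practice_area, hours=hours)

# Usage: same rules, different niche and window.
prompt = build_briefing_prompt("Employment & Labor litigation", hours=48)
```

Keeping the rules in one fixed template and varying only the two parameters makes it easy to paste the identical prompt into two or three models and compare their behavior fairly.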
Do it for a week and you’ll have a sharper grasp of your practice area’s news flow and a clearer understanding of the models you’re working with. Do it for a month and you’ll start to see patterns in their errors and strengths. That’s how you maintain agency in the GPT-5 era — not by assuming the model’s always right, and not by avoiding it entirely, but by engaging with it often enough, and critically enough, to know when it’s time to trust and when it’s time to take the wheel.
Finally, some early GPT-5 tips
1. If you tell it to “think hard” in your original prompt, or “think harder” as a follow-up, you should trigger its reasoning capabilities.
2. Drop-down model options are still available to free users, but daily rate limits apply. During this initial rollout, users are reporting different rate limits and option sets, so the final options and limits do not appear to be fixed or stable yet.

3. Use the free versions of Google Gemini or Anthropic’s Claude. You will not have access to their best models, but you will still learn and feel the nuances and differences.
Talk to you again soon . . .

To read previous editions, click here.
Who is the author, Josh Kubicki?
I am a lawyer, entrepreneur, and teacher. I am not a theorist; I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.