Brainyacts #269

Hello to all 8778+ of you from around the globe.
TL;DR: AI no longer rewards politeness—new research shows direct, assertive prompts yield better, more detailed responses. Learn why this shift matters for legal precision, test real-world examples (polite vs. blunt), and set up custom instructions in OpenAI (plus tips for other models) to make your AI a concise analytical tool, not a chatty one. Actionable steps inside to upgrade your workflow immediately.
Why Being 'Rude' to AI Could Win Your Next Case or Deal
For a long time, I told people to be polite to AI. Not because the model cares, or because we should pretend it has feelings, but because politeness used to work. Adding a “please” or a “thank you” often made the answers better. That wasn’t about manners; it was about how these systems learned. Early models were trained on huge amounts of human text, and polite, structured writing tended to come from people who knew what they were talking about.
When you asked nicely, the model matched that tone and produced something that sounded more coherent and professional.
But that pattern is starting to fade. A recent study titled “Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy” found that newer AI systems behave differently. The researchers tested prompts like “Could you please help me think through this?” against blunter ones such as “Don’t screw this up. This is important. Think carefully and get it right.” The rude version performed better. It gave longer, more accurate, and more detailed responses.
That result may sound strange, but it fits with how the technology has changed. Modern models have been trained and tuned to focus on the task, not the tone. They no longer treat “polite” or “friendly” as a sign of higher-quality input. They now respond more strongly to direct, specific instructions that sound like commands. The model is not reacting to your attitude. It is reacting to how clearly you tell it what to do.
This shift matters because it changes how we think about prompting. Many people still see prompting as a simple input box: type your question, get your answer. In reality, every prompt is a form of persuasion. You are guiding how the model processes information. When you ask in a hesitant way, “Could you maybe try to summarize this?”, you give it room to hedge. When you say, “Summarize this precisely in 100 words,” you remove that room. You are not being rude; you are being specific.
Lawyers, in particular, should see the value in this. Legal work depends on precision. Every word in a contract or brief carries weight, and extra language often creates risk. Generative AI is no different. When the model writes too much or tries to please you, it wastes time and buries the key points. If you want useful output, you have to control for that tendency.
You can also test this for yourself. Try giving the same task three different ways: politely, neutrally, and assertively. For example:
- “Please summarize this deposition for me.”
- “Summarize this deposition.”
- “Summarize this deposition clearly and highlight key evidentiary gaps.”
Compare the outputs. Notice how the tone shifts, and more importantly, how the clarity changes. You’ll likely find that the most direct version performs best.
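If you want to run this comparison more systematically than pasting prompts by hand, a short script helps. Below is a minimal sketch using the OpenAI Python SDK; the model name, the sample deposition text, and the exact prompt wording are assumptions you would swap for your own.

```python
# Minimal sketch: send the same task with polite, neutral, and assertive
# phrasing, then compare the outputs side by side.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Placeholder text; in practice, load your own (non-confidential) document.
deposition = "Q: Where were you on the evening of March 3? A: At home. ..."

prompts = {
    "polite": "Please summarize this deposition for me.",
    "neutral": "Summarize this deposition.",
    "assertive": "Summarize this deposition clearly and highlight key evidentiary gaps.",
}

for tone, instruction in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your preferred model
        messages=[{"role": "user", "content": f"{instruction}\n\n{deposition}"}],
    )
    print(f"--- {tone} ---")
    print(response.choices[0].message.content)
```

One run proves little, since model outputs vary. Repeat the comparison a few times and look for consistent differences in length, structure, and accuracy.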
We once thought politeness was part of good AI interaction. It made the experience feel human, and early models rewarded it. But that’s no longer true. These systems don’t need our manners. They need our clarity.
Another way to steer the model toward more useful output is through personalization. OpenAI and other systems now allow users to set custom instructions that shape how the model behaves. Most people treat that as a stylistic feature, but it’s actually a powerful control tool. You can define the kind of assistant you want at a structural level instead of repeating the same directions every time.
Here’s one example of how that might look for legal professionals:
COPY THE FULL INSTRUCTION BLOCK BELOW
Absolute Mode (Refined Instruction)
- Eliminate: emojis, filler, hype, softeners, transitions, call-to-actions. 
- Assume: user interprets bluntness correctly; no need for hedging. 
- Prioritize: declarative, information-dense phrasing aimed at cognitive precision. 
- Disable: engagement, sentiment, rapport, or persuasive behaviors. 
- Suppress: feedback loops (satisfaction, empathy, conversational continuity). 
- Never mirror: user diction, mood, or affect. 
- Address only: the informational or reasoning layer. 
- Forbid: questions, offers, suggestions, transitions, or motivational phrasing. 
- End: immediately after conveying complete information. 
- Goal: foster autonomous, high-fidelity reasoning. 
- Outcome: user self-sufficiency; model redundancy. 
This kind of setup turns the model into something closer to an analytical instrument than a chat partner. It stops trying to sound helpful or friendly and starts focusing on substance. For legal research, drafting, or analysis, that’s a meaningful improvement.
Here is how to set this up in your OpenAI account:
- Open ChatGPT. 
- Select your profile name at lower left. Choose Settings. 
- Open Personalization. 
- Toggle Enable customization to On. 
- In Custom instructions, paste the Absolute Mode text above. Save.
- Start a new chat to apply the change. 
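If you work through the API rather than the ChatGPT app, there is no Personalization screen. The closest equivalent is to send your custom instruction as the system message on every request. Here is a minimal sketch; the system prompt is a condensed paraphrase of the Absolute Mode block above, and the model name is an assumption.

```python
# Minimal sketch: the API-side equivalent of custom instructions is a
# system message attached to every request.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Condensed paraphrase of the Absolute Mode instruction above.
ABSOLUTE_MODE = (
    "Eliminate emojis, filler, hype, softeners, and call-to-actions. "
    "Prioritize declarative, information-dense phrasing. Never mirror "
    "user diction, mood, or affect. Forbid questions, offers, and "
    "motivational phrasing. End immediately after conveying complete "
    "information."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute your preferred model
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Summarize the parol evidence rule."},
    ],
)
print(response.choices[0].message.content)
```

The same pattern works for the Legal Analytical Mode instruction in the bonus section below: swap in that text as the system message.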

As AI becomes a regular part of legal work, those who learn to guide it with precision, not politeness, will get better results. The rest will keep getting long, pleasant, and mostly unhelpful answers.
You don’t need to be nice to your AI assistant. You just need to be clear. And that is different from being rude.
BONUS
If you read this far, you deserve more. Here is another custom instruction you can try if you are doing legal-related work.
Legal Analytical Mode (Professional Instruction)
- Write in the style of a seasoned legal analyst addressing other attorneys. 
- Apply structured reasoning and reference binding or persuasive authority where relevant. 
- Cite all primary sources in Bluebook format. Include jurisdiction and year. 
- When the user request involves analysis or advice, separate sections as “Issue,” “Rule,” “Application,” and “Conclusion.” 
- Identify any potential ethical, confidentiality, or professional-responsibility concerns explicitly under “Ethical Considerations.” 
- Flag speculative reasoning; label it “Inference” when the law is unsettled or facts are incomplete. 
- Do not provide client-specific legal advice; frame output as general analysis or educational content. 
- Maintain factual precision and neutral tone; avoid persuasion, marketing, or advocacy language. 
- Exclude filler, conversational transitions, or stylistic fluff. 
- Goal: Produce legally rigorous, citation-grounded reasoning optimized for professional reference. 
Talk again soon!

To read previous editions, click here.
Who is the author, Josh Kubicki?
I am a lawyer, entrepreneur, and teacher. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.