- 225 | 👾🕵️ Ask your AI vendor for their system prompts
Brainyacts #225
It’s Tuesday. Watch Agassi vs. Courier in AI-powered ping pong. It’s a promo for the partnership between tennis’s US Open and IBM Watson. This year, they will tap into the power of generative AI to add spoken commentary to match highlights, produce match previews, and write hundreds of match summaries called “Match Reports.”
Let’s dig in.
In today’s Brainyacts:
Ask your AI vendor for their system prompts
Everyone gets access to Anthropic’s Artifacts
OpenAI’s Strawberry & Orion and other AI model news
Police using AI to write crime reports and more news you can use
👋 to all subscribers!
To read previous editions, click here.
Lead Memo
🕵️👾 Why you should ask your AI vendor for their “system prompts” to stress-test the vendor’s commitment to transparency and trust.
What Are System Prompts?
System prompts, sometimes called “system messages,” are predefined instructions that guide the behavior of an AI model before it interacts with a user. Think of them as the AI’s “rules of engagement.” Unlike the prompts you input to get specific answers, system prompts shape the AI’s overall tone, style, and boundaries. They establish the context in which the AI operates, influencing its responses throughout the session.
For instance, a system prompt might instruct an AI to adopt a formal tone, prioritize concise legal analysis, or adhere to jurisdiction-specific guidelines. This underlying framework ensures that the AI aligns with the intended use case, making its outputs more reliable and consistent.
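For readers who want to see the mechanics, here is a minimal sketch of how a system prompt is typically set when calling a model’s API, using Anthropic’s Python SDK as an example (other vendors use a similar “system” role or parameter). The prompt text and the legal scenario are invented for illustration; only the API call pattern is real.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    # The system prompt: standing instructions applied before any user input.
    system=(
        "You are an assistant for a law firm. Use a formal tone, keep legal "
        "analysis concise, and flag any jurisdiction-specific issues."
    ),
    # The user prompt: the question asked in a given session.
    messages=[
        {
            "role": "user",
            "content": "Summarize the key risks in this indemnification clause: ...",
        }
    ],
)
print(response.content[0].text)
```

Notice the separation: the system prompt persists across the whole session and shapes every answer, while the user message changes with each question.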
How Are System Prompts Used?
System prompts are usually set by AI developers or vendors and are often hidden from end users. They are crafted to tailor the AI’s responses to the specific needs of different industries, including legal. In legal applications, system prompts might direct the AI to focus on legal precedents, avoid speculative advice, or comply with ethical guidelines specific to legal practice.
For example, an AI tool designed for contract analysis might be guided by a system prompt to identify and flag ambiguous language or non-standard clauses. This prompt ensures that the AI consistently addresses these aspects across various contracts, providing legal professionals with a dependable tool for review.
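To make this concrete, here is a hypothetical sketch of what such a vendor-side system prompt might look like. This is my own illustration, not any actual product’s prompt:

```python
# Hypothetical system prompt for a contract-review tool.
# Illustrative only; no real vendor's prompt is reproduced here.
CONTRACT_REVIEW_SYSTEM_PROMPT = """\
You are a contract-analysis assistant for legal professionals.
When reviewing a contract:
- Flag ambiguous or undefined terms, quoting the language at issue.
- Flag non-standard clauses and explain how they deviate from common practice.
- Do not give speculative legal advice; note where attorney judgment is required.
- Cite the section or clause number for every issue you raise.
"""
```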
In a celebrated move, Anthropic, the creator of the chatbot Claude, has become the first in the industry to publicly release the system prompts that guide its AI models: Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3 Haiku. These prompts dictate what the AI models should or shouldn’t do and set the overall tone for their responses.
The most recent system prompts, dated July 12th, instruct all three models not to open URLs, links, or videos; not to identify or name any individuals in images; and to avoid starting responses with filler words like “certainly” or “absolutely.”
Claude 3.5 Sonnet’s knowledge cutoff is April 2024, while Claude 3 Opus and Claude 3 Haiku’s is August 2023. This means each model can respond to questions using information available up until its respective cutoff date.
If the models encounter a query about information that is difficult to find online, they are instructed not to apologize. Instead, they’ll inform the user that while they aim to provide accurate responses, there is a possibility of generating incorrect information.
🚨🚨 You should definitely take two minutes to read the system prompt for Claude 3.5 Sonnet.
Why Should Legal Professionals Care About System Prompts?
Understanding and accessing system prompts can give legal professionals more control over how AI tools perform. By knowing the parameters within which the AI operates, you can better assess its suitability for your practice. Moreover, transparency in system prompts can help you ensure that the AI aligns with your ethical and professional standards.
For example, if you are using an AI tool for drafting legal documents, you might want to know if the system prompt encourages the use of plain language or if it adheres to a specific legal style guide. This knowledge allows you to use the tool more effectively and confidently, ensuring that the AI’s output meets your professional requirements.
Tips for Asking AI Vendors About System Prompts
When engaging with AI vendors, it’s crucial to ask the right questions about system prompts to ensure the tools you adopt are tailored to your needs. Here are some tips:
1. Ask for Transparency: Inquire whether the AI tool uses system prompts and request a general overview of these prompts. Vendors should be willing to share how these prompts shape the AI’s behavior.
2. Seek Customization Options: Ask if the system prompts can be customized to better fit your practice’s specific needs. For instance, can the AI be guided to prioritize certain legal standards or to avoid particular legal pitfalls?
3. Evaluate Ethical Alignment: Ensure that the system prompts align with your ethical obligations. Ask vendors how they address ethical considerations in their system prompts, particularly in areas like client confidentiality and legal compliance.
4. Test the Impact: If possible, request a trial or demonstration where you can see the AI in action with its system prompts in place. This will give you a firsthand look at how the AI behaves and whether it meets your expectations.
5. Stay Updated: Technology evolves rapidly, so ask vendors how often system prompts are updated and whether you’ll be informed of significant changes that might affect the AI’s performance.
Spotlight
✂️ 📄 Anthropic’s Artifacts Open to All
Just a few hours ago, Anthropic announced that its new and popular feature, Artifacts, is now accessible to all users, including those on the free plan!
I’ve made a video on this feature before, but I wanted to make a new one. In it, I work with a legal complaint (one filed against Anthropic just last week).
What can you do with Artifacts? A lot. I chose three approaches, but there are many more (example prompts follow the list below).
Summary Tables: I prompted the tool to create a table listing key allegations alongside related facts. This feature can be incredibly useful for quickly summarizing and understanding complex legal documents.
Visual Aids: While the initial attempt to add graphics wasn’t exactly what I hoped for, it did show the potential for integrating visual elements into your document analysis. This could be particularly helpful for those of you who are visual learners.
Knowledge Quizzes: I also experimented with a quiz feature to test and refresh my knowledge of the document’s content. This could be a valuable tool for ensuring comprehension and retention, especially when working with dense legal texts.
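If you want to try these yourself, prompts roughly along these lines will reproduce each approach (my paraphrasing here, not the exact prompts from the video):

```
1. Summary table: "Create a table listing each key allegation in this
   complaint alongside the facts pled in support of it."
2. Visual aid: "Create a simple diagram showing the parties and the
   claims asserted between them."
3. Knowledge quiz: "Write a 10-question quiz, with answers, testing my
   understanding of this complaint."
```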
Encouragement to Experiment
I strongly encourage you to explore the Artifacts feature. Whether you’re summarizing a case or simply trying to better understand a document, this tool offers a practical way to enhance your legal work. The best part? Similar functionality can be integrated into other LLMs you might already be using.
So dive in, give it a try, and see how it can streamline your work. I’m looking forward to hearing your thoughts and experiences with this new feature.
AI Model Notables
► OpenAI's Strawberry AI is reportedly the secret sauce behind the next-gen Orion language model. Both models aim to improve AI reasoning. A new model will most likely be released to the public this fall.
Here's what people are saying about them:
Strawberry:
Solves complex tasks (mathematics and programming)
Improves language understanding (with more “thinking” time)
Has shown success in internal tests (such as the NY Times puzzle)
Orion:
Successor to GPT-4, aims to outperform it.
Trained with data generated by Strawberry.
► Inflection AI bets on porting Pi chatbot data amid an enterprise shift. Inflection hopes its partnership with the Data Transfer Initiative (DTI) will give users control of their personal AI data, allowing them to move their personal and professional conversation history as they see fit. The step comes as the company shifts its focus from the consumer-focused Pi to enterprise-centric products.
► Elon Musk surprisingly came out in favor of the controversial California AI bill (SB 1047), a proposal that calls for large AI companies to implement greater safety protocols to prevent their models from causing harm to humanity. Many in the AI industry oppose the bill, believing it will stifle innovation, and some of Silicon Valley’s biggest, most powerful players (namely a16z and OpenAI) have been vocal about fighting it.
News You Can Use:
➭ Police officers are starting to use AI chatbots to write crime reports. Will they hold up in court?
➭ Gonzaga law school launches a first-ever intersession week – a mini-term before the full semester. It allows for one-credit courses that can be completed in a week. One of the courses is “AI and the Legal Profession.”
➭ Thomson Reuters v. ROSS Intelligence: A Delaware federal judge on Thursday delayed the first trial involving an artificial intelligence product. The postponement puts off a highly anticipated clash in which Thomson Reuters accuses tech startup ROSS Intelligence of building an AI legal research platform using copyrighted material from the media company's Westlaw database.
➭ Law school applicants rose nearly 6% this year. But what happens if generative AI replaces most of the work that new lawyers do?
Was this newsletter useful? Help me improve! With your feedback, I can make the letter better. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.