098 | 🧩💻 The Incomplete Guide To LLM Ecosystems
Brainyacts #98
In today’s Brainyacts we:
share a great benefit of writing this newsletter
give you the Incomplete Guide to LLM Ecosystems
detail AI ethics-related Google Search trends
learn there will be a US Senators-only AI briefing
further ponder citing GPT-4, using a California Law plugin as an example
see the UK take the lead on opening the hood on leading AI models
take both pills
👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.
On this topic of my subscribers, as I wrap up the first 100 days later this week, I want to share with you this tweet. It captures some of the greatest attributes of writing this newsletter so far.
The most effective networking strategy I’ve found has nothing to do with conferences, cocktail hours, cold emails, or any of the common ideas you hear.
1) Do interesting things.
2) Share them publicly. Like-minded people will come to you.
— James Clear (@JamesClear)
2:00 PM • Jun 7, 2023
🚨 Remember Brainyacts is a safe place. I sometimes talk about things that may seem too in depth or detailed for the average person. But I believe you will run into these things eventually. I would rather you encounter them for the first time here rather than in the wild, in a meeting, or in an environment not conducive to absorption.
With that . . . .
🧩💻 The Incomplete Guide to LLM Ecosystems
As we continue to navigate the digital transformation era, the integration of Large Language Models (LLMs) and generative AI into the legal sector is no longer an option, but a necessity. These technologies offer a world of possibilities for streamlining tasks, enhancing decision-making, and innovating service delivery.
This "Incomplete Guide to LLM Ecosystems" offers a starting point for legal professionals seeking to understand and integrate these tools into their operations. It emphasizes the various ways to access and use LLMs, considering the unique needs and capacities of your legal practice.
Why is it incomplete? Because the pace of change is rapid. And there are so many variations coming online each week that it is nearly impossible to track them all, let alone explain them.
Hence, this "Incomplete Guide to LLM Ecosystems" is your compass for this technological terrain. It's designed to help you, as a legal professional, grasp the fundamental aspects of LLMs and generative AI—making these technologies less of a mystery and more of a tool for strategic advantage. Whether you're exploring how to adopt AI into your practice or seeking to optimize your current usage, this guide offers vital insights to inform your decision-making.
Remember, the future of law is being written in code, and this guide will help you stay on the cutting edge of this evolution.
1. Consumer-Facing Models
At the heart of generative AI, we find consumer-facing models developed by leading tech companies. These user-friendly chat interfaces are accessible to the average user and include familiar names like OpenAI's ChatGPT and GPT-4, Bing Chat, Google Bard, and Anthropic's Claude.
Considerations:
The risk of "shadow IT" where employees circumvent official channels to use these tools, potentially compromising security.
Privacy concerns arising from user interactions.
The potential for misuse, including unethical applications.
2. API-Enabled Applications
For those seeking a more tailored approach, API-enabled applications offer access to LLMs through specialized interfaces. These applications may expose different performance parameters than their consumer-facing counterparts.
Examples: Jasper, Hugging Face, OpenAI Playground, and countless OpenAI plugins.
Considerations:
Stability and robustness of the APIs in use.
The potential to be tied to outdated LLM versions accessed via the API.
User experience is heavily influenced by the quality of the specific use case or user interface.
Additional development resources may be necessary.
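To make this concrete, here is a minimal Python sketch of how an API-enabled application layers its own prompt template over a model call. The `call_model` function and the model name are stand-ins I have invented for a real provider SDK call, so the example runs without any API keys.

```python
# Sketch of an API-enabled application layered over an LLM API.
# call_model is a stub standing in for a real provider SDK call.
def call_model(prompt: str, model: str = "some-llm-v1") -> str:
    """Stand-in for a provider API call; returns a canned reply."""
    return f"[{model}] reply to: {prompt}"

def summarize_clause(clause: str) -> str:
    # The application controls the prompt template and model choice,
    # which is why an API-enabled tool can behave differently from the
    # same vendor's consumer chat interface.
    prompt = f"Summarize this contract clause in plain English: {clause}"
    return call_model(prompt)

print(summarize_clause("The lessee shall indemnify the lessor."))
```

Note that a real application would also pin the model version it calls, which is exactly where the outdated-version consideration above comes from.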
3. LangChain Models
LangChain provides a new level of versatility, offering an interface that connects to multiple language models. This allows users to access and integrate different models for various applications.
Examples: Chatbots, content generators, and knowledge bases that leverage various language models.
Considerations:
Increased system complexity and integration needs.
Potential user confusion when interacting with multiple models.
Higher resource requirements for system maintenance.
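As a rough illustration of the routing idea behind LangChain-style frameworks, the sketch below dispatches prompts to different backends through one interface. The two "models" here are plain functions I have stubbed in, so the example is self-contained.

```python
# Sketch of multi-model routing: one interface, several backends.
# The backends are stubs; a framework like LangChain would wire in
# real model integrations here.
def drafting_model(prompt: str) -> str:
    return f"drafting-model: {prompt}"

def research_model(prompt: str) -> str:
    return f"research-model: {prompt}"

REGISTRY = {
    "drafting": drafting_model,
    "research": research_model,
}

def route(task: str, prompt: str) -> str:
    # The caller names a task; the router picks the backend. This is
    # the versatility, and the added complexity, noted above.
    return REGISTRY[task](prompt)

print(route("drafting", "Draft a confidentiality clause."))
```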
4. Trust Layer Models
With a layer of control and monitoring, trust layer models aim to ensure responsible and ethical use of language models.
Examples: Lega, which empowers law firms and other enterprises to safely test and use various LLMs; Microsoft’s Azure instances; Google's Fairness Indicators; and Jigsaw's Perspective API.
Considerations:
Importance of enforcing and maintaining usage policies.
Necessity for transparency in monitoring access and content.
Potential restriction on creativity due to system controls.
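In code, a trust layer amounts to policy checks wrapped around every model call. The blocked terms below are my own invented policy, and the model call is a stub; real trust layers (including the vendors named above) do far more, such as logging, monitoring, and response screening.

```python
# Sketch of a trust layer: a policy check wraps every model call so
# disallowed prompts are blocked before they ever reach the model.
BLOCKED_TERMS = {"privileged memo", "client ssn"}  # illustrative policy

def passes_policy(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def guarded_call(prompt: str) -> str:
    if not passes_policy(prompt):
        return "BLOCKED: prompt violates usage policy"
    # Stub: a real trust layer would forward the prompt to an
    # approved LLM and also screen the response on the way back.
    return f"model reply to: {prompt}"

print(guarded_call("Summarize this privileged memo"))
print(guarded_call("Summarize the EU AI Act"))
```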
5. Input/Output Engineered Models
These models are tailored to deliver semi-automated prompts and responses, providing an extra layer of control and enhancing interaction quality.
Example: Prediction Guard, which describes itself as a “wrapper.”
Considerations:
Consistent and meticulous input/output engineering is crucial.
Over-engineering may reduce model adaptability.
Continual model tuning and optimization is required.
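The wrapper idea can be sketched in a few lines: the layer templates the input on the way in and validates the response shape on the way out. The "ANSWER:" format contract is my own invention for illustration, and the model is a stub.

```python
# Sketch of input/output engineering: template the input, validate
# the output, and fail loudly when the response breaks the contract.
def model(prompt: str) -> str:
    return "ANSWER: Yes, with 60 days notice."  # stub for a real model

def engineered_call(question: str) -> str:
    # Input engineering: a fixed template shapes every prompt.
    prompt = f"Reply concisely, prefixed with 'ANSWER:'. Q: {question}"
    reply = model(prompt)
    # Output engineering: enforce the expected shape before returning.
    if not reply.startswith("ANSWER:"):
        raise ValueError("model response failed the output check")
    return reply[len("ANSWER:"):].strip()

print(engineered_call("Can the tenant terminate early?"))
```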
6. Data-Infused Models
Incorporating proprietary or external data with the base language models, data-infused models create a hybrid knowledge set tailored to specific informational needs.
Examples: Casetext’s CoCounsel (https://casetext.com/cocounsel/) and GPT-3 Med.
Considerations:
Maintaining data quality and accuracy is paramount.
Handling sensitive data raises privacy and security concerns.
Legal and ethical considerations around data use must be addressed.
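One common way to infuse data, sketched below, is retrieval: search the proprietary documents first, then ground the model's answer in the best match. The documents and the keyword-overlap retrieval are deliberately naive stand-ins of my own making, and the model call is stubbed; production systems typically use embeddings and a vector store.

```python
# Sketch of a data-infused setup: retrieve from proprietary documents,
# then ground the (stubbed) model call in what was found.
DOCS = [
    "Our standard lease requires 60 days notice to terminate.",
    "Our NDA template carries a three-year confidentiality term.",
]

def retrieve(question: str) -> str:
    # Naive keyword-overlap retrieval; real systems use embeddings.
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def grounded_answer(question: str) -> str:
    context = retrieve(question)
    # A real system would send context + question to an LLM here.
    return f"Per our documents ('{context}'): answer to '{question}'"

print(grounded_answer("How much notice to terminate the lease?"))
```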
7. Secured Proprietary Models
At the top tier, secured proprietary models combine proprietary data and language models within robust security protocols. This allows for powerful applications while safeguarding data and ensuring compliance.
Examples: Google Health, Salesforce Einstein
Considerations:
The need for high-level security protocols.
Complex implementation and maintenance.
Compliance with legal and regulatory standards.
😇🔦 Google Searches for “AI Ethics” and “Responsible AI” Are Surging
Well well, is Google Search still relevant in the age of generative AI? You bet it is. I thought I would show you some quick research on the number of searches people are making on Google for the following topics. What this shows us is the interest in, and desire to learn more about, these topics.
AI Ethics
AI ethics is the set of moral principles guiding responsible artificial intelligence use. Businesses are increasingly recognizing ethical issues related to AI, with searches for "AI ethical issues" growing by 7519% in the past five years.
These concerns can lead to product failures, reputational damage, and legal problems. Jurisdictions like the EU are implementing regulations, such as the AI Act, to govern AI usage and ethical guidelines.
This creates opportunities for lawyers and law students to offer guidance, ensure compliance, and navigate the legal complexities of AI. Studying AI ethics equips students for careers at the intersection of law, technology, and ethics.
What's Next
AI ethics is part of the Responsible AI meta trend.
Searches for “responsible AI” have increased by 3450% over the last five years.
Responsible AI is a framework that helps organizations make appropriate ethical choices when implementing AI.
Gartner ranks responsible AI in the innovation trigger of their “Artificial Intelligence Hype Cycle”.
As businesses strive to align themselves with Responsible AI, I wonder if we might see a movement similar to the LEED certification in ‘green’ architecture. Could Responsible AI become a meaningful differentiation or brand play?
Explainable AI
Search interest in this topic is showing significant growth too. This search is about finding AI tools that can provide a clear and understandable explanation for their decisions and actions. It seems that with generative AI, we want insight into how the AI arrives at its responses. This is not remarkable, but it does show the chasm between using these tools and knowing how they work.
News you can Use:
Senators Only For This AI Briefing
In an unprecedented move amid rising concerns about election integrity and national security, the U.S. Senate is set to hold its first members-only briefing on Artificial Intelligence (AI) this week. The decision, announced by Senate Majority Leader Chuck Schumer via Twitter, comes as the rapid spread and evolution of AI technology has sparked fears over the potential manipulation of the 2024 Presidential election through 'deepfakes' and other AI-powered techniques. As AI continues to permeate various sectors, concerns have also been raised over its use by international adversaries such as Russia and China to enhance their military capabilities and surveillance.
The briefing will see the participation of some senators who have voiced their apprehensions about AI's potential impact on swaying voter opinion, along with high-ranking officials expressing their fears and hopes for this powerful technology.
Tomorrow, the Senate will convene the first-ever Senators-only briefing on Artificial Intelligence.
— Chuck Schumer (@SenSchumer)
12:05 AM • Jun 13, 2023
Citing California Law Via OpenAI Plugin
Ok, yesterday I raised the question of how to cite ChatGPT or other generative AI tools. Some folks replied “never!” as these are not research tools. Agreed. But with more plugins coming online every day, this is changing.
For instance, there is now a plugin for California Law.
Never a dull moment!
UK Gets the Goods on Leading LLMs Before Other Nations
Google DeepMind, OpenAI, and Anthropic have agreed to provide the U.K. government with access to their AI models for research and safety purposes, according to an announcement by Prime Minister Rishi Sunak during London Tech Week. The priority access aims to facilitate better evaluations and understanding of the opportunities and risks associated with these systems.
The U.K. government plans to host a global summit on AI safety in the fall and establish a Foundation Model Taskforce for AI safety research, backed by £100 million of funding. Sunak also mentioned semiconductors, synthetic biology, and quantum as other areas of focus for the U.K.
In the Memetime:
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.