091 | 🧑💻 🚫 Don’t Hire AI Tech Talent
Brainyacts #91
In today’s Brainyacts we:
urge you not to hire AI tech talent right now
decode jargon to help you talk + understand AI
consider what an AI-powered judiciary looks like
don’t say “AI” if we are Apple
look into EU and US efforts to “label” AI content
👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.
💰📐 Don’t Hire AI Tech Talent Right Now
I just hung up from a call with a law.com reporter doing a piece on the AI tech talent arms race in legal. My direct talking point was this: it is not only premature but might also be an ill-advised strategy. If firms rush headlong into this talent "race," they risk ill-fitting hires, overpaying, and burning out new hires within a year or so.
A firm trying to seize AI talent without a clear roadmap of its application is like putting the cart before the horse. The rush to get onboard the AI train without understanding the destination is the first step towards losing the race. Instead, firms should focus their resources on defining the potential, measurable business impacts that generative AI could offer.
Current excitement around AI has led many organizations to launch education programs - an undeniably positive step. However, as participants start to question the practical applications and implications of AI, the importance of a well-defined AI strategy becomes evident.
An AI strategy, while not etched in stone, provides a guiding light for organizations embarking on the AI journey. Contrary to popular perception, it doesn't necessitate the formation of an AI committee; such bureaucratic setups often slow down progress. Instead, organizations should aim for a flexible, evolving strategy that aligns with their unique needs and challenges.
The rollout of AI education should focus on pragmatic usage in the least risky domains. At this point, using AI to generate legal work products should not be the primary focus - it's a potential minefield of ethical, legal, and practical complications. Instead, we should leverage AI's power to improve the business side of law practices, thereby enhancing the lives of lawyers, reducing burnout, and improving overall productivity for the things they cannot bill for!
Organizations must therefore adopt a nuanced, strategic approach to AI, with clear goals, pragmatic application, and constant evolution. Such an approach will not only help them navigate the AI wave effectively but also ensure they gain the most from this transformative technology.
Indeed, the allure of AI in the legal market is undeniable, but caution, strategy, and a focus on practical application will prove far more beneficial than any hasty attempts to "win" an illusory arms race. If this perspective holds true, it would be interesting to see it reflected in the forthcoming article and wider discourse on the subject.
🧐🤖 Decoding the Jargon: Four Key Terms to Understand Generative AI
Artificial Intelligence (AI) might seem like a jigsaw puzzle with its myriad of technical terms. But let’s put that aside and decode some key terms that will help you understand the concept of generative AI, a branch of AI that focuses on creating new content, be it a sentence, an image, or a piece of music.
Why should you learn about these technical terms?
As the understanding and coverage of AI technology grows in the press, you'll find these terms being used more frequently. Furthermore, technology vendors often incorporate these phrases into their discourse, sometimes inaccurately or to sound more sophisticated than they actually are. Having a firm grasp of these terms will empower you to comprehend what they're talking about with more clarity and confidence.
🏛️ Consider the legal field for instance.
Even if you're not a lawyer, you can still understand and appreciate terms like 'liability', 'jurisdiction', or 'cross-examination'. You don't need to delve into the legal intricacies, but a basic understanding is enough to hold a reasonably coherent conversation. The same holds true for AI. While we don't need to be technical experts to use these tools, it's beneficial to know some of the fundamental technical ingredients. This familiarity enhances our capability to navigate the AI landscape effectively and make informed decisions.
So let’s take care of that now.
Specifically, we'll focus on four terms: vectoring, embedding, injections, and fine-tuning.
1. Vectoring: Imagine you have a vast library full of books, and you need to communicate with aliens about what is in your library. The problem: they only communicate using numbers (like computers). You would have to convert the information in the books into a numerical format, a process akin to vectoring. In AI, vectoring translates real-world data (like words or pictures) into a set of numbers that the computer can understand and process. Each piece of data (a word, a pixel, etc.) gets its own numerical value, forming a 'vector'. This way, computers can "read" and manipulate the data. Vectoring is the Rosetta Stone between human language and machine language.
For instance, let's take a simple example of a vector representing the color of a pixel in an image. An image's pixel color is typically represented using the RGB (Red, Green, Blue) color model. In this model, each color is a mixture of these three primary colors, and each primary color can have a value between 0 and 255. So, a vector for a pixel might look like this:
red_pixel = [255, 0, 0]
green_pixel = [0, 255, 0]
white_pixel = [255, 255, 255]
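The same idea works for text. Here is a deliberately crude toy sketch (real AI systems use far more sophisticated methods than counting letters) just to show the core move of turning a word into numbers a computer can work with:

# Toy illustration only: real models do NOT vectorize words this way,
# but the underlying idea - text in, numbers out - is the same.
def word_to_vector(word):
    # One slot per letter of the alphabet; each slot counts how often that letter appears.
    vector = [0] * 26
    for ch in word.lower():
        if ch.isalpha():
            vector[ord(ch) - ord("a")] += 1
    return vector

print(word_to_vector("dog"))  # a 1 in the slots for d, g, and o; 0 everywhere else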
2. Embedding: Now that we have vectors, we need to ensure that their relationships make sense. Let's return to our library. The book about dogs should be closer to the book about cats than the one about space travel, right? Similarly, in AI, embedding arranges these vectors so that the 'distance' between them captures the essence of the data. In the case of words, similar meanings are close together in this numerical space. So, in a well-configured AI system, the vector for 'king' is closer to 'queen' than 'apple'. This ensures that the AI grasps some of the meaning behind the data it's using.
Taking an example, below are the vectors for each word. Embedding places these vectors at a certain proximity to, or distance from, one another based on similarity. Here, "tasty" is more dissimilar from all the other words than they are from each other.
"apple" -> [0.5, 0.8, 0.2]
"banana" -> [0.45, 0.85, 0.15]
"cherry" -> [0.6, 0.75, 0.2]
"fruit" -> [0.4, 0.9, 0.3]
"tasty" -> [0.1, 0.2, 0.9]
3. Injections: To keep the conversation going with our alien friends, you might want to guide the discussion or focus on a specific topic. Injections serve a similar purpose with AI: they are ways to influence what the model generates. This can be done by providing a specific input (like asking a question), or by subtly guiding the conversation through how the AI processes its inputs. For example, you could 'inject' a question about the weather into an AI, and the model would generate a response based on that input.
Examples of injections include the prompts we use, persona-based AI tools that carry unique information, or hard-coded rules for how a model generates responses (think of Bing Chat’s Creative, Balanced, and Precise modes).
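To make that concrete, here is a hypothetical sketch: the function name and persona text are invented for illustration, but the pattern mirrors how many tools quietly wrap fixed instructions around whatever the user types.

# Hypothetical sketch - build_prompt and the persona wording are made up for illustration.
# Fixed instructions are "injected" around the user's question, steering every response,
# much like Bing Chat's preset modes.
def build_prompt(user_question, persona="You are a cautious legal-operations assistant."):
    # The persona and formatting rules travel with every request,
    # even though the user never sees or types them.
    return (
        f"{persona}\n"
        "Answer briefly and flag anything that needs a lawyer's review.\n\n"
        f"User: {user_question}"
    )

print(build_prompt("What's the weather looking like for our client event on Friday?"))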
4. Fine-Tuning: Let's say you want the alien conversation to focus on Earth's art history. Your numerical translator (our AI) might be great at general topics, but it needs additional training on art history. This is where fine-tuning comes in. It involves further training a pre-existing AI model (that knows a little bit about everything) on a more specific dataset (like art history books). Through this process, the AI becomes more adept at generating content related to art history. In professional scenarios, fine-tuning makes AI useful for specific tasks, such as legal document review or medical diagnosis.
For example, here is what we would want to do (a rough sketch of the first step follows this list):
Dataset preparation: Gather all of the content on Earth’s art history that we have.
Model Training: We then use this dataset to train the model, emphasizing this content in its training loops. We can have the LLM self-evaluate, or use Reinforcement Learning from Human Feedback (RLHF), whereby humans prompt the model and review/score its responses in an effort to refine it.
Model Evaluation: After fine-tuning, we then evaluate the model’s performance on new unseen Earth art history content to ensure it recognizes it and incorporates it appropriately.
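As promised above, here is a rough sketch of the dataset preparation step. The examples and file name are invented, and it assumes prompt/completion pairs stored as JSONL, one common format that fine-tuning pipelines accept; the actual training and evaluation usually run through the model provider's own tools rather than code you write yourself.

# Rough sketch of step 1 (dataset preparation) only; examples and file name are invented.
# Many fine-tuning pipelines expect prompt/completion pairs stored one-per-line as JSON (JSONL).
import json

art_history_examples = [
    {"prompt": "Who painted the ceiling of the Sistine Chapel?", "completion": "Michelangelo."},
    {"prompt": "Which movement is Claude Monet associated with?", "completion": "Impressionism."},
]

with open("earth_art_history.jsonl", "w") as f:
    for example in art_history_examples:
        f.write(json.dumps(example) + "\n")

# Steps 2 and 3 (training and evaluation) would then run against this file,
# typically via the provider's fine-tuning tools.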
BTW. Harvey.ai says that this is part of what they do on a firm-by-firm basis - they fine-tune it to the use cases and data particulars of a given firm or practice group.
News you can Use:
An AI-powered Indian Judiciary
As ChatGPT was recently used in an Indian court case to do simultaneous transcription, it raised the issue of what role, if any, generative AI could and should play in the Indian legal system. Here a handful of lawyers opine with insightful takes.
“[W]e may see a drop in appeals in the future with artificial intelligence predicting whether a particular verdict justifies or satisfies both parties.”
Apple Had A Huge Event And Didn’t Say “AI”
While most people are talking about the digital ski goggles Apple released, I was more interested in what strategy Tim Cook would take when it comes to competing in the Artificial Intelligence arena. The strategy appears to be avoiding "AI" and instead talking about machine learning. Hey, there is no school like the old school.
My take FWIW: Using the more mundane and less glossy “machine learning” distinguishes Apple from the hype and perhaps creates a sense of sophistication by embedding machine learning discreetly here and there. It also preserves the term “AI” for when Apple thinks it can dominate or otherwise seriously dent the marketplace. So, we will see what comes next from Apple.
Labelling AI-generated Content Is Not a Perfect Solution, But I Guess It Is a Solution
Both the US and EU are making strides toward requiring that certain AI-generated content be labelled as such. What content must be labelled, and how to validate how it was generated, is not so easily spelled out yet.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter. Click on a link to vote:
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.