243 | A $2000/hr paralegal?
Brainyacts #243
It's Tuesday. Um . . . really?
Onward.
In today's Brainyacts:
Using ChatGPT for legal research: a contrarian (nuanced) view
AI impact on our fatigue (a research effort)
Tour a real AI data center (Elon's) and other AI model news
Court TV strikes AI deal and more news you can use
Welcome to all subscribers!
To read previous editions, click here.
Lead Memo
Can you use ChatGPT for legal research? Yes. Yes you can.
In this week's newsletter, I'm spotlighting a thought-provoking, contrarian perspective from my students on a pressing question in the field: should we rely on ChatGPT for legal research? Amid the widespread debate over the risks of AI in the legal realm, these students offer a refreshing viewpoint. They argue that while AI tools like ChatGPT may stumble with accuracy, they can still serve as valuable aids in refining our research methods and critical thinking. They also bring us back to the core of our profession, highlighting the ethical considerations essential to using AI responsibly. Dive in for a nuanced take on AI's evolving role in legal research!
The following was written by Claire Thiel, Yiming Gu, and YG Lee.
We've all heard about the "ChatGPT lawyer" who used cases from ChatGPT in his brief, only to find out the cases did not actually exist. So you are probably wondering whether generative AI tools can ever be useful for legal research, or whether they should be avoided completely. We believe that LLMs CAN be useful for legal research! But it is important that the user is prudent and thoughtful when using them. We'll focus specifically on ChatGPT for this memo and discuss hallucination as well as how to be a responsible user of generative AI in your legal research.
Background on Hallucinations:
When you ask ChatGPT or any other large language model (LLM) about a specific case or for a case citation in your legal research, it might give you answers citing one or more cases. However, as many have noticed, these cases might not always be real, or they may be irrelevant to your query. This phenomenon is known as AI hallucination. (The term "hallucination" can be misleading, as this article points out.)
Legal hallucinations generally break down into two main categories: correctness and groundedness.
Correctness refers to whether the response is factually accurate. There are three possibilities:
Correct: The response is factually accurate and relevant
Incorrect: The response contains factually inaccurate information
Refusal: The model refuses to answer or provides an irrelevant response
Groundedness evaluates whether the model's response is properly supported by the sources it cites:
Grounded: The response's key facts are backed by valid legal sources.
Misgrounded: The model cites sources, but they don't actually support the claim.
Ungrounded: The response doesn't cite any sources at all, even though it should.
It's important to note, though, that these issues are not unique to LLMs. Even traditional search engines can return irrelevant or inaccurate results. With LLMs, however, the risk of hallucinations may seem higher because the responses sound so confident, even when they're incorrect!
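To make the two-axis taxonomy concrete, here is a minimal sketch in Python. This is purely illustrative: the names `Correctness`, `Groundedness`, and `LabeledResponse` are our own invention, not part of any real evaluation library, and the labeling itself would still have to be done by a human who has read the cited sources.

```python
from dataclasses import dataclass
from enum import Enum

class Correctness(Enum):
    CORRECT = "correct"      # factually accurate and relevant
    INCORRECT = "incorrect"  # contains factually inaccurate information
    REFUSAL = "refusal"      # model refused or gave an irrelevant response

class Groundedness(Enum):
    GROUNDED = "grounded"        # key facts backed by valid legal sources
    MISGROUNDED = "misgrounded"  # sources cited, but they don't support the claim
    UNGROUNDED = "ungrounded"    # no sources cited where they were needed

@dataclass
class LabeledResponse:
    query: str
    answer: str
    correctness: Correctness
    groundedness: Groundedness

    def needs_human_verification(self) -> bool:
        # Anything short of correct-and-grounded should be checked
        # against the primary source before it goes into a brief.
        return not (self.correctness is Correctness.CORRECT
                    and self.groundedness is Groundedness.GROUNDED)

# Hypothetical label for a response that cites a real statute
# that does not actually support the claim: misgrounded.
r = LabeledResponse(
    query="Does Indiana law impose a duty to mitigate on landlords?",
    answer="Cites a real statute, but one about smoke detectors.",
    correctness=Correctness.INCORRECT,
    groundedness=Groundedness.MISGROUNDED,
)
print(r.needs_human_verification())  # True
```

The point of the `needs_human_verification` check is simply that only one of the nine label combinations (correct and grounded) is safe to rely on without further reading.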
ChatGPT examples
This is an example of ChatGPT providing legitimate cases (and not hallucinating!). The response is both correct and grounded. You can see that there are links to show where the information is coming from, and the links are to legitimate legal sources. It is important not to rely solely on ChatGPT's description of the case but to also read through the case yourself.
Here, the statute provided by ChatGPT is incorrect. The citation is to a real statute, and it does relate to landlord-tenant law in Indiana, but it addresses a landlord's duty relating to smoke detectors, not a landlord's duty to mitigate damages. This means that the response is misgrounded. But if you, as the user, don't go read the statute on your own, you will not know that it is a misgrounded answer.
You may be asking: how can I trust ChatGPT if it hallucinates sometimes? Generally, ChatGPT does not do as well when there is no statute or case that directly addresses the question posed. In this example, there is no Indiana statute that directly addresses a landlord's duty to mitigate damages, which led the LLM to hallucinate. So if you are researching something novel, ChatGPT may not be the best tool. But if the law is somewhat established in the area in which you are searching, ChatGPT can likely get you headed in the right direction.
Another (and possibly better) way to use ChatGPT
This example provides a different (and possibly more effective) use for ChatGPT. Rather than relying on ChatGPT to complete legal research for you, you can ask it to help you understand a legal concept so that you are more prepared as you begin the research yourself. Think of ChatGPT as your mentor! This helps you to have a general understanding of a legal concept before you do a deep dive into Lexis or Westlaw. Instead of using ChatGPT to help you find the answers, a safer and more effective way to use it is to help you find the right questions to ask.
What is a lawyer's duty when using Gen AI for legal research?
Contrary to popular belief, ChatGPT CAN be a useful tool for starting your legal research, especially if you don't have access to other legal-specific LLMs. But users should never become too reliant on ChatGPT or use its output without first verifying it. For example, ChatGPT might be helpful if you don't know where in the code to look for a certain statute or what types of cases you are looking for. A generic prompt will likely give you a good first step, cutting down the time you spend looking through various materials. But you, as the user, should read the case, statute, or source to verify that it does indeed relate to the legal question you asked.
In addition to verifying the information provided by ChatGPT to ensure there are no hallucinations, a lawyer should also be cognizant of their ethical duties, making sure that their use of Generative AI in legal research complies with professional responsibility requirements. Specifically, lawyers need to pay attention to the following ethical duties:
Lawyers have a duty of candor to the court. If you are using ChatGPT without verifying the outputs, you could be in serious trouble.
Our tip: Double check all the AI-generated legal research outputs with authoritative sources, such as case law or statutes, before incorporating them into your briefs or arguments.
Lawyers are bound to protect client confidentiality. Inputting confidential client information into Generative AI tools could lead to the disclosure of sensitive data, especially if you do not know where the input data is located and how the AI company uses the input information.
Our tip: Avoid using client-specific confidential information to generate an answer in a generic Generative AI platform, such as ChatGPT. Rather, ask generic questions that will guide your legal research.
Lawyers must obtain informed consent from their clients. Clients may or may not be aware of the risks or limitations of Generative AI tools, including but not limited to how Generative AI tools are operated and what risks are involved when using Generative AI tools.
Our tip: Provide full disclosure to your clients about your use of Generative AI tools and obtain explicit informed consent from them. The disclosure must ensure that the client understands the risks involved in your use of Generative AI.
Lawyers are required to provide competent representation to their clients. The ABA has extended the competence requirement to include the use and understanding of various technologies, requiring that attorneys understand the AI tools they are using.
Our tip: Get involved in ongoing training on AI technologies. (Reading Brainyacts is one great way to do this!) In addition, as previously stated, always verify AI-generated answers to make sure that your use of generative AI will not adversely affect your client.
Conclusion
We hope you now can see that ChatGPT CAN in fact be helpful in conducting legal research! It is not to be used as a replacement for legal research but rather can be a helpful starting point. It can save you time by helping you to better understand topics that are new to you and can give you a starting point for where to look for cases and statutes.
Spotlight
AI and the New Way We Work: How Has Generative AI Changed Your Day?
I recently found myself deep in this reflection. On a day packed with back-to-back commitments, I took a 50-minute walk to reset my mind, bringing my phone along for a simple, productive conversation with ChatGPT. I covered project points, drafted a complex email, and unpacked a new research topic, all while taking in the sun and steps. By the time I returned to my desk, three looming tasks were already behind me. I felt sharper, less stressed, and more ready for what came next. That small experience with AI wasn't just a time-saver; it was a shift in how I moved through my day.
Now, I want to hear from you. How has AI woven itself into your workday? Do you find it helps streamline mental workload, spark creativity, or access a flow state? Have you noticed any changes in your cognitive fatigue, stress, or focus? Whether you're experimenting with tools to ease email overload or using AI to make complex cases more manageable, your story is part of a larger picture that we're all just starting to understand.
Your experiences, your successes and even your challenges, could offer valuable insight into the real impact of generative AI on legal professionals. Share your story with me so I can begin capturing the frontier of this phenomenon.
Hit reply to this email to send me a note!
AI Model Notables
► See inside the world's largest state-of-the-art AI data center.
A look inside Elon Musk's @xAI liquid-cooled supercomputer cluster in Memphis, the world's largest and most powerful AI supercomputer, made up of 100,000 Nvidia H100 GPUs. xAI is currently more than doubling its size by adding another 50,000 H100 and 50,000 H200 Nvidia GPUs.
► Everyone's talking about "Jarvis" because it represents a big leap in making AI a practical, everyday assistant. Google's Project Jarvis is designed to handle digital tasks for users, like researching, shopping, or booking flights, directly in the Chrome browser. It's powered by the new Gemini 2.0 model, an advanced AI engine, and functions by analyzing the user's screen and interacting with web content on their behalf. No release date has been confirmed yet.
► Meta has signed its first multi-year news content deal, with Reuters, to incorporate Reuters news content into the Meta AI chatbot, allowing it to provide real-time answers to user queries about news and current events.
► Elon Musk's xAI adds image understanding capabilities to Grok.
► Meta also released NotebookLlama, an open-source version of Google's NotebookLM that converts PDFs into podcasts using text-to-speech technology.
► Google DeepMind develops an AI conflict mediator: the system is designed to help people or groups with opposing viewpoints understand each other.
News You Can Use:
→ Humanizing Justice: The transformational impact of AI in courts, from filing to sentencing.
→ "I'm being ambushed with thoughts from a robot": Redditors are complaining about loved ones using ChatGPT during arguments.
→ AI pets offer a playful peek into the future.
→ A study reveals that generating a single AI image can consume as much energy as half of a smartphone's battery charge.
→ We finally have an "official" definition for open source AI.
→ Universal Music Group (UMG), one of the biggest music labels, most recently famous for pulling its music off TikTok over a royalty dispute, has made a deal with an "ethical AI company," Klay Vision, to make AI-generated music.
→ Court TV just inked a deal with the AI company Veritone to license its archival footage.
→ A 25-year-old wanted to jump on the ever-so-profitable ransomware bandwagon with the help of ChatGPT-like bots. Instead of making money, he received a prison sentence. It took him only 6 hours to create the malicious code.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter.
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.