093 | This Judge ❤️ ChatGPT
Brainyacts #93
In today’s Brainyacts we:
share a law student internship idea with you
dig into corporate legal team use of generative AI
shake our heads at a move to license AI developers
smirk at a defamation lawsuit against OpenAI over ChatGPT output
see how AI-generated content on the internet may soon double
give you a judge embracing ChatGPT
👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.
⚖️👷♂️ A Transformative Internship Program: Bridging Law, AI, and Ethics
I am excited to share an idea that has been simmering in my mind for a while, one I believe could truly improve the way law students and legal service organizations engage with the world of generative AI.
As we navigate the current legal market landscape, we're increasingly grappling with two core problems.
Firstly, there is a pressing need for pragmatic skills and real-world use cases in the business and practice of law.
Secondly, we face challenging ethical dilemmas when incorporating AI into our legal environments. Add to that the fact that most law students lack fundamental knowledge, skills, and experience with generative AI, and we have a considerable gap to bridge.
The solution? I am proposing the creation of a comprehensive, 10-week internship program designed to tackle these issues head-on.
The program will educate law students about the fundamentals of generative AI and the core business aspects of law, including the law firm economic model, the billable hour, and the broader legal services ecosystem. Students will assist corporate legal teams, law firms, and other organizations by developing a generative AI playbook to guide use cases and provide ethical guidance. Through this, they will gain practical skills and experience while earning credit hours, and they will contribute to a community at the cutting edge of these evolving issues.
However, this ambitious project presents a few challenges.
Funding is a primary concern. Could intern hosts contribute a fee? Could law firms sponsor interns for their clients? Or could we set this up as an educational non-profit and seek charitable funding from firms and other groups? The latter sounds great but also sounds slow!
I want to offer this to multiple law schools, so I am thinking it might be best for my company to serve as the official organization that students intern with. That will likely make it easier for me and others to centralize and manage the program.
My most pressing issue is ensuring that, whichever law school a student comes from, the program is approved so they can earn credit hours for it. Absent credit hours, the need to pay interns only grows, which I fully support in any case!
What I really need is the requisite critical mass - willing students, cooperating law schools, and supportive sponsors or hosts. Sounds simple, right?
A glimpse into the program structure gives an idea of what to expect:
Pre-Internship Bootcamp (1 week): Business of Law Fundamentals
A week-long bootcamp prior to the official start of the internship. Interns will learn about the business of law, including the economic model of law firms, the billable hour system, the legal services ecosystem, the client-law firm relationship, key legal technologies, and more. This foundational knowledge will enable them to better understand the context in which they will be applying their knowledge of AI and ethics.
Week 1: Orientation and Introduction to AI and Ethics
Activities: An orientation to the program and a primer on AI basics, foundational ethical theories, and their intersection with law.
Weeks 2-3: AI in Law and Ethics
Activities: Detailed exploration of how AI is used in law, including legal research, litigation, and contract analysis. Focus on ethical and legal considerations around these applications, including privacy, bias, and fairness.
Weeks 4-6: Practical Training with Partner Organizations
Activities: Interns will be embedded within partner organizations (law firms or legal departments) for hands-on training. They'll work on projects or tasks related to AI and ethics under the supervision of a mentor.
Week 7: Case Study Analysis
Activities: Interns will work in teams to analyze real-world case studies involving AI and ethics in law. They'll present their analysis and proposed solutions to the rest of the group and invited experts.
Week 8: Guest Lectures and Networking Events
Activities: A series of guest lectures from leading figures in AI, ethics, and law. Networking events will offer opportunities to connect with professionals in the area.
Weeks 9-10: Reflection and Next Steps
Activities: Interns will have time to reflect on their learning and prepare final presentations summarizing their insights and potential applications in their future legal careers. They'll also receive feedback from program coordinators and their mentors.
Throughout the program, interns will meet weekly with a mentor to discuss their progress, challenges, and learning.
I can deliver the education and the mentoring. What I need is willing and able hosts and partners.
To turn this vision into reality, I am inviting your thoughts, suggestions, and assistance. Your industry insights and experience would be invaluable as I continue to flesh out this project. Whether you'd be interested in offering guidance, sharing this idea with your networks, or potentially sponsoring an intern, I would appreciate any support you could provide.
Hit me up with any reactions or suggestions: [email protected]
🙇♂️🧪 Worrisome Use of ChatGPT (and related tech) for Legal Research
Not sure if you caught it, but a few weeks ago Thomson Reuters released its ChatGPT & Generative AI within Corporate Law Departments report.
It is a free download and worth a review. I wanted to share some interesting takeaways.
First, it appears that corporate legal teams are more open to using generative AI than law firms (at least in the US!). Perhaps it has something to do with incentives (aka the billable hour). I don't believe most US law firms have tapped into the power of generative AI on the business-of-law side yet. That will be changing.
Regardless, it is great to see more openness on the in-house side of things.
Second, it is a bit concerning to see that the second most popular use case is legal research. We (Brainyacts subscribers) know that these tools are not truth or fact generators. While I use them as conceptual research tools and perhaps for some very preliminary legal research, I am a bit worried that some are over-relying on them for core legal research!
Third, continuing the theme above: not only are in-house teams using generative AI for legal research, they also see legal research as the best use case for their outside law firms!
I truly would like to know how people are interpreting ‘legal research’ and how they are doing it.
News you can Use:
You May Need A License to Build AI Things
This idea is ridiculous. So ridiculous that I don't even want to say more about it myself, so I will ask GPT-4 to say it for me.
▶︎▶︎PROMPT: Why is this a dumb idea? [pasted article text]
Grabbing Headlines By Suing OpenAI for Defamation
First. ChatGPT is not a human communicating facts.
Second. OpenAI states clearly that “ChatGPT may produce inaccurate information about people, places, or facts.”
Third. ChatGPT is not a truth machine.
WordPress May Have Just Nearly Doubled the Amount of AI-Generated Content on the Internet
The introduction of Jetpack AI Assistant by WordPress is poised to have a profound impact on the amount of AI-generated content available on the internet. With an astonishing 810 million websites relying on WordPress, accounting for approximately 43% of all websites on the internet, the integration of this AI plugin has the potential to dramatically increase the volume of AI-generated content online.
By seamlessly incorporating AI capabilities directly within the WordPress interface, a vast number of website owners, bloggers, and content creators will now have easy access to powerful text generation and editing tools. This accessibility and convenience are likely to encourage a significant surge in the adoption and utilization of AI-generated content across a wide range of industries.
Smart Judge Advocating for ChatGPT Use (smart use)
Summary: Judge Scott U. Schlegel of the 24th Judicial District Court in Louisiana, known for his online court system, has released a series of AI-generated videos to help explain legal proceedings. While he sees value in generative AI for busy lawyers, he cautions against its use in judicial decision-making. He shares concerns about the authenticity of evidence in the era of deepfakes and talks about using existing technologies to make the justice system more accessible.
Key Points:
Judge Schlegel has created AI-generated videos explaining court procedures, designed to replace traditional explanatory pamphlets.
Schlegel uses AI to help generate scripts based on his legal experience, using the tech to showcase potential uses and efficiencies within the legal system.
The judge supports the use of AI tools by lawyers, provided they are used responsibly and lawyers understand the limitations of AI. He opposes the use of AI by judges in decision-making processes.
Schlegel believes AI has the potential to disrupt the mentorship and development of new lawyers due to a potential loss of nuance and hands-on interaction.
Quotes:
On AI use by lawyers: "I think it can be very useful for a busy lawyer who knows what they’re talking about. … Because ChatGPT is like my 17-year-old son—tons of knowledge, is gonna say that they’re right, adamant, they’re obstinate that they’re right. And if you don’t know what the topic is, you’re gonna buy that argument and you’re gonna get yourself in trouble."
On using AI to generate informative videos: "As a judge I don’t have time to sit here and write articles about my workflows. So when I have ChatGPT, I can actually tell it my actual intelligence and prompt it correctly to generate a script based on my actual workflow that I generated from my years of experience and use that to tell the story to the world."
On authenticity and deepfakes: "The biggest thing that scares the heck out of me as a judge is authenticity. How do I authenticate evidence? … And then who are the alleged experts now? So whenever I have a video, and somebody says, it’s a deepfake, do we now have to get experts in here?"
Ideas:
Schlegel advocates for more courts to utilize simple technological solutions such as online calendars and chatbots for frequent queries to increase efficiency and accessibility.
He emphasizes that while AI is beneficial for certain tasks, it should not replace the learning process and personal interactions necessary in training new lawyers.
Schlegel is also concerned about the challenges posed by deepfakes in authenticating evidence and the need for new types of expert testimony in courts.
Was this newsletter useful? Help me to improve! With your feedback, I can improve the letter.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.