247 | Droid Porn (not what u think)
Brainyacts #247
It's Tuesday. Advances in the deeptech layer of generative AI may be slowing. But the applications era is accelerating. And this is why you should be psyched! More to come on this shortly . . .
Onward!
In today's Brainyacts:
Students use LLM to interpret statutory language
34 cases with hallucinated filings
Some "droid porn" and other AI model news
The Beatles, Jerry Garcia, the Vatican, and more news you can use
👋 to all subscribers!
To read previous editions, click here.
Lead Memo
Law students are not (yet) bullish on using LLMs to interpret statutory language
Regular readers know that I've been featuring my law students this semester, spotlighting their exploration of legal issues posed by generative AI. In this edition, we have three law students diving into a hot topic: whether AI can tackle the kind of statutory interpretation work that judges sometimes wrangle with, just like Judge Kevin Newsom did recently when he turned to ChatGPT and other models to interpret statutory phrases like "landscaping" and "physically restrained."
The following was authored by Graham Vogtman, Jacob Orban, and Hunter Schmittou.
Eleventh Circuit Judge Kevin Newsom has written that judges might consider using large language models to decide tricky cases of statutory interpretation. In May he asked ChatGPT and Google's Bard to give the ordinary meaning of "landscaping." (The case involved a dispute over a landscaping contract.) Snell v. U.S. Specialty Ins. Co., 102 F.4th 1221 (11th Cir. 2024) (Newsom, J., concurring). And in September he asked ChatGPT, Claude, and Gemini to give the ordinary meaning of the phrase "physically restrained." (The case involved possible sentencing enhancement for physical restraint.) United States v. Deleon, 116 F.4th 1260, 1270 (11th Cir. 2024) (Newsom, J., concurring).
Other papers have explored similar questions. Some have been cautious, studying the tendency of LLMs to give subtly different responses to the same queries. Others have been far more confident (perhaps even provocatively ironic).
Being law students, we decided to enter this sandbox. One of our classes a few years ago asked us to write a 10-page(!) memo to a senior partner explaining whether a pool house is a "dwelling" under Illinois law. To brand new law students, it was an exercise in tedium; the statute wasn't clear, the memo's format was odd, and the memo seemed much too long. (What senior partner has time to read a 10-page memo?) Could ChatGPT have solved these problems? How would it interpret the statutory language? Does it think the pool house in our scenario is a dwelling? Here's what we found.
First we gave ChatGPT context, telling it things like "you are an expert legal scholar with a specialty in statutory interpretation," and, "I am an attorney working on a criminal case," and, "I am going to provide you with the facts of the case and the statute I need to interpret and apply to those facts," etc.
Then we fed ChatGPT the facts: It's a residential home. There's a detached pool house in the backyard. It has four sturdy walls, a roof, a lockable door, and windows. Inside are chairs and a table, a TV, a refrigerator and bar, a bathroom: the whole works, really. It's used during the warmer months. The family once used it for a big kids' sleepover, but otherwise it's just a really nice pool house.
And then we fed ChatGPT the statute: An Illinois statute entitled "Dwelling" defines a dwelling as a "building or portion thereof, a tent, a vehicle, or other enclosed space which is used or intended for use as a human habitation, home or residence."
Query: Is the pool house a "dwelling" under the Illinois statute?
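(For readers who want to try this outside the chat window, here is a minimal sketch of the same setup using the OpenAI Python SDK. We ran our tests in the ChatGPT web interface, so the model name, the exact message wording, and the API plumbing below are illustrative assumptions, not our verbatim prompts.)

```python
# A minimal sketch of the prompt setup described above, via the OpenAI
# Python SDK. The model name and exact wording are assumptions; we actually
# worked in the ChatGPT web interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CONTEXT = (
    "You are an expert legal scholar with a specialty in statutory "
    "interpretation. I am an attorney working on a criminal case. I am "
    "going to provide you with the facts of the case and the statute I "
    "need to interpret and apply to those facts."
)

FACTS = (
    "A residential home has a detached pool house in the backyard. It has "
    "four sturdy walls, a roof, a lockable door, and windows. Inside are "
    "chairs and a table, a TV, a refrigerator and bar, and a bathroom. It "
    "is used during the warmer months, and the family once used it for a "
    "big kids' sleepover."
)

STATUTE = (
    'An Illinois statute entitled "Dwelling" defines a dwelling as a '
    '"building or portion thereof, a tent, a vehicle, or other enclosed '
    'space which is used or intended for use as a human habitation, home '
    'or residence."'
)

QUERY = "Is the pool house a 'dwelling' under the Illinois statute?"

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; we don't know which model the UI served
    messages=[
        {"role": "system", "content": CONTEXT},
        {"role": "user", "content": f"{FACTS}\n\n{STATUTE}\n\n{QUERY}"},
    ],
)
print(response.choices[0].message.content)
```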
Here's what ChatGPT said (the full exchange is in the shared link at the end of this piece):
What do we take away from ChatGPT's output?
Well, ChatGPT breaks down the statute into its elements like a run-of-the-mill textualist. This is, of course, a mark for the LLM; a textualist's version of statutory interpretation often begins by dissecting the words or phrases of the statute. What's interesting, however, is that ChatGPT just does it; it doesn't explain what rules of grammar or context (what syntactic or semantic canons of interpretation) it incorporated when breaking down the statute's text. The user is therefore required to glean those implicit "canons" from the LLM's output and compare their result with the result of using other canons.
To see how this works, let's return to the statutory language: A dwelling is a "building or portion thereof, a tent, a vehicle, or other enclosed space which is used or intended for use as a human habitation, home or residence." ChatGPT's output assumes that the latter phrase (the used-for-human-habitation phrase) modifies all the preceding sorts of structures: "building or portion thereof, a tent, a vehicle, or other enclosed space." This is what Scalia & Garner call the "Series-Qualifier Canon": where there is a series of nouns followed by what appears to be a phrase ascribing some necessary attribute to or qualifying the nouns, the phrase applies to all nouns in the series. Plus, since the statute uses the language "or other enclosed space," ChatGPT appears to be using something like the ejusdem generis canon: where general or catchall words follow a series of specific terms, the catchall language must be understood as applying only to things of the same sort captured by the specific terms.
However, the statute could also be read such that the used-for phrase applies only to the "other enclosed space" term (a version of the Nearest-Reasonable-Referent canon), so that buildings, tents, and vehicles need not necessarily be used for human habitation to be "dwellings."
And there's a further wrinkle: Are the final two words, "home or residence," included as part of the used-for phrase, or are they actually in the series of structures? If those final two words are included in the used-for phrase, then we would have to confront the dissonant use of Oxford commas. After all, under this reading, an Oxford comma appears after "vehicle" and before "or other enclosed space," but an Oxford comma does not appear within the used-for phrase between "home" and "or residence." If we say instead that "home or residence" is actually in the series of terms, then the Oxford comma problem disappears, for there simply would be no Oxford comma in the sentence.
ChatGPT's output, at least explicitly, engages in none of this analysis. Nor does it (again, at least explicitly) engage with other contextual canons of interpretation. (Is the statute's title, "Dwelling," relevant to its interpretation?) What it does do is seemingly adopt a control set of canons, render an interpretation accordingly, and then apply its interpretation to the facts. We can debate whether its interpretation is right. (For what it's worth, our authors can't even agree on whether the pool house is a dwelling.) And, of course, there's the separate question whether there ever is a "right" interpretation of a statute. Setting those aside, however, the takeaway is that ChatGPT gave a decent exercise in the basics of a real statutory interpretation problem.
It's just another data point for Judge Newsom et al. Further study can ask ChatGPT follow-up questions in the same prompt, engaging it in competing canons or giving it further context. We can also explore, as Judge Newsom and some authors do, instances of LLMs' giving slightly different answers to identical prompts within the realm of statutory interpretation. (As it happens, Claude's output on identical prompts said the pool house is a dwelling.) And we could see whether giving the LLM "legal" context (telling it that it's a legal scholar confronted with legal text in a legal case) affects its interpretation. Would telling it instead that it's the author of a dictionary who wants to include examples of its definition render a similar output? How about a teacher's explaining a definition to her students? A news anchor's telling a factual story? Give the LLM a story. Play around with it. Ask it questions. Critique it. It gives all of us more data to work with. And then we can start to see what the data say.
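To make "play around with it" concrete, here is a rough sketch of what such an experiment might look like in code: the identical question, repeated several times under different personas, with a simple tally of yes/no answers. The personas, run count, model name, and yes/no detection heuristic are all assumptions for illustration, not a protocol we actually ran.

```python
# A rough sketch of the follow-up experiments suggested above: send the
# identical prompt repeatedly, swap in different personas, and tally how
# often the model calls the pool house a "dwelling." Everything here
# (personas, run count, model, YES/NO heuristic) is an illustrative
# assumption.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "legal scholar": "You are an expert legal scholar with a specialty in "
                     "statutory interpretation.",
    "dictionary author": "You are the author of a dictionary who wants to "
                         "include examples of each definition.",
    "teacher": "You are a teacher explaining a definition to her students.",
}

PROMPT = (
    "An Illinois statute defines a dwelling as a 'building or portion "
    "thereof, a tent, a vehicle, or other enclosed space which is used or "
    "intended for use as a human habitation, home or residence.' A detached "
    "pool house has four walls, a roof, a lockable door, furniture, and a "
    "bathroom, and a family once slept in it overnight. Is the pool house "
    "a dwelling under the statute? Begin your answer with YES or NO."
)

RUNS = 5  # identical prompt, repeated, to surface run-to-run variation

for name, system in PERSONAS.items():
    tally = Counter()
    for _ in range(RUNS):
        reply = client.chat.completions.create(
            model="gpt-4o",  # assumption
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": PROMPT},
            ],
        ).choices[0].message.content
        tally["yes" if reply.strip().upper().startswith("YES") else "no"] += 1
    print(f"{name}: {dict(tally)}")
```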
For one of our runs with ChatGPT on the "dwelling" question, see here: https://chatgpt.com/share/672bbd74-ccd0-8003-b289-b69a10b03e5d.
Spotlight
34 cases with hallucinations in filings: A survey
Professor Heidi K. Brown reached out to share her recent article surveying 34 cases so far in which lawyers, pro se litigants, one persistent law firm, and one expert witness have submitted court filings with hallucinated cases or unreliable GenAI-based research. I found it a valuable read.
For background, Heidi is the author of The Introverted Lawyer, Untangling Fear in Lawyering, and The Flourishing Lawyer. She has a law degree from the University of Virginia School of Law and a master's degree in applied positive psychology (MAPP) from the University of Pennsylvania.
AI Model Notables
► Droid Porn (noun): A genre of viral robotics content in which humanoid robots, powered by cutting-edge AI, are shown performing complex tasks that were, until recently, exclusive to humans. Often depicted in ultra-sleek, high-production videos with a healthy dose of dramatic lighting and suspenseful background music, these clips evoke a mix of awe and existential dread. Watching droid porn is a guilty pleasure for tech lovers and skeptics alike; it's like watching the future do squats, backflips, or, occasionally, fall flat on its face.
► Global diplomacy will impact AI rollout and development.
► OpenAI claims ChatGPT rejected over 250,000 requests to generate fake images of candidates in the month before the US presidential election.
► Amazon developing AI glasses for delivery drivers: The goal is to "shave seconds" off delivery times by giving drivers turn-by-turn directions.
► China's Baidu to unveil AI smart glasses to compete with Meta.
► The Washington Post launched "Ask The Post AI," a new generative AI search tool that taps into the publication's archives to provide direct answers and curated results to reader queries.
News You Can Use:
→ US judicial panel to develop rules to address AI-produced evidence.
→ The Beatles' final song, "Now and Then," became the first AI-assisted track to receive Grammy nominations.
→ Jerry Garcia's estate announced it is bringing the late Grateful Dead icon's AI-recreated voice to audiobooks and written content in 32 languages.
→ Vatican, Microsoft create AI-generated St. Peter's Basilica to allow virtual visits, log damage.
→ Deepfake presumption creeps into society: The images of Spain's floods weren't created by AI. The trouble is, people think they were.
→ On Monday, Americans for Responsible Innovation (ARI) launched a new campaign urging President-elect Donald Trump to appoint Elon Musk as Special Advisor to the President on AI, a new position from which Musk could shape federal strategy on artificial intelligence.
→ An AI-powered tool may offer quick, no-contact screening for conditions like high blood pressure and diabetes using a brief video of someone's face and hands, with accuracy comparable to or exceeding that of cuffs and wearable devices.
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million-dollar businesses. I am not a theorist; I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.