
162 | ⛹️‍♀️ 🏛️ Justice Kagan dunks on Congress

Brainyacts #162

Sponsored by

It’s Friday. Trying something new with an advertiser. Welcome MIT Executive Education! They are offering an excellent 6-week online course on AI and business strategy. See below and click to learn more.

Let’s keep learning!

In today’s Brainyacts:

  1. The Supremes and AI

  2. How to trust generative AI - Google’s ASPIRE

  3. Zuckerberg (Meta) going all-in on AI and other AI model news

  4. Google News keeps featuring AI-generated content, and other related news

👋 to new subscribers!

To read previous editions, click here.

Lead Memo

👩‍⚖️🧑🏻‍⚖️👨🏾‍⚖️ Supreme Court Tracking AI (or trying to)

“Congress knows that there are going to be gaps because Congress can hardly see a week in the future with respect to this subject, let alone a year or a decade,” she said. The court, meanwhile, doesn’t “even know what the questions are about AI, let alone the answers.”

~ Supreme Court Justice Elena Kagan

Ok, maybe this is not a dunk per se, but it calls into question the knowledge and capability of both Congress and the courts in keeping up with AI.

During oral arguments in Relentless v. Department of Commerce, a case that has nothing to do with AI, the justices brought AI into the discussion, highlighting its complexity and its implications for judicial interpretation. The case turns on the Chevron deference doctrine, which, if overturned, could significantly reduce the power of federal agencies.

Justices Elena Kagan and Brett Kavanaugh both noted AI as a key area of future legislative and regulatory focus, recognizing Congress's reliance on agency expertise to shape and define AI policy. However, given AI's technical nature and rapid development, there is cause for concern that over-relying on agencies might make matters worse. Indeed, certain justices appear wary that there will be no definitive authority on such matters. So will it fall to the courts or to Congress?

The discussion also acknowledged recent regulatory efforts against AI abuses and the Supreme Court's own ongoing consideration of AI's impact on the legal system, including privacy concerns and the risk of dehumanizing the law.

In his December year-end report, Chief Justice John G. Roberts Jr. addressed this topic, stating that AI “obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike.” He also warned that “just as obviously it risks invading privacy interests and dehumanizing the law,” advocating for a cautious and humble approach to the judiciary's adoption of AI tools.

Spotlight

🙏 🧞 Putting our trust in LLMs

The Trust Factor: Bridging the Gap Between Lawyers and LLMs with ASPIRE

In the legal profession, where every decision and document needs to be built on a foundation of reliable information, trust in technology is crucial. As legal professionals increasingly rely on generative AI and the large language models (LLMs) behind it for tasks ranging from research to case analysis, ensuring the accuracy and reliability of these AI tools is paramount.

One significant challenge in adopting LLMs is their tendency to produce inaccuracies and 'hallucinations' – instances where the model generates incorrect or misleading information. This is where ASPIRE, a new framework developed by Google AI, steps in.

ASPIRE is designed to enhance the selective prediction capabilities of LLMs, enabling them to provide answers with a confidence score. This feature is particularly valuable in high-stakes environments like the legal field, where the cost of misinformation can be high.

So, when you prompt an ASPIRE-driven AI model, you might get back an answer along with a confidence score. That score helps you decide how much verification to apply to the response. If a reply comes back with a confidence score of 20%, you had better be highly skeptical of its accuracy. Get one at 90%, and you should still verify it, but you can extend a higher degree of trust.
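
To make that concrete, here is a minimal sketch of how a confidence score might be turned into a review policy. This is illustrative only and assumes a hypothetical (answer, confidence) pair coming back from a model call; it is not ASPIRE's actual API.

```python
# Illustrative only: a hypothetical triage helper, not ASPIRE's API.
# Assumes a model call has already returned an answer plus a confidence score in [0, 1].

def triage(answer: str, confidence: float) -> str:
    """Map a selective-prediction confidence score to a suggested review level."""
    if confidence < 0.5:
        return f"LOW ({confidence:.0%}): treat '{answer}' as a draft; verify every claim."
    if confidence < 0.9:
        return f"MEDIUM ({confidence:.0%}): spot-check the key facts and citations in '{answer}'."
    return f"HIGH ({confidence:.0%}): still verify '{answer}', but a lighter review is reasonable."

print(triage("The limitations period is six years.", 0.20))
print(triage("The limitations period is six years.", 0.92))
```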

ASPIRE enhances LLM reliability through three mechanisms (a rough code sketch follows this list):

  1. Task-Specific Tuning: Tailoring LLMs to specific knowledge domains or tasks (e.g., legal research or a contracts repository), ensuring relevance and precision in the legal context.

  2. Answer Sampling: Generating multiple answers and selecting the most plausible one, akin to a lawyer's meticulous review process.

  3. Self-Evaluation Learning: Continuously improving the LLM's ability to differentiate between accurate and inaccurate information.
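
As a rough illustration of the answer-sampling and self-evaluation ideas above, here is a conceptual sketch of selective prediction. It is not ASPIRE's actual code; generate() and self_evaluate() are hypothetical stand-ins for the tuned model calls the framework describes.

```python
# Conceptual sketch of selective prediction via answer sampling and self-evaluation.
# generate() and self_evaluate() are hypothetical stand-ins for tuned model calls.
from typing import Callable, Optional, Tuple

def selective_predict(
    prompt: str,
    generate: Callable[[str], str],              # samples one candidate answer
    self_evaluate: Callable[[str, str], float],  # scores a (prompt, answer) pair in [0, 1]
    num_samples: int = 4,
    abstain_below: float = 0.5,
) -> Tuple[Optional[str], float]:
    """Sample several answers, keep the one the model scores highest,
    and abstain (return None) if even the best score falls below the threshold."""
    candidates = [generate(prompt) for _ in range(num_samples)]
    best_score, best_answer = max((self_evaluate(prompt, c), c) for c in candidates)
    if best_score < abstain_below:
        return None, best_score  # abstain: route the question to a human reviewer instead
    return best_answer, best_score
```

The key design choice here is the abstention threshold: rather than always answering, the model declines when its own evaluation of its best candidate is too low, which is exactly the behavior a cautious legal workflow wants.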

The integration of ASPIRE into LLMs has shown promising results, significantly improving accuracy and reliability. This advancement opens new possibilities for legal professionals, from conducting precise legal research and drafting robust legal documents to analyzing complex case law efficiently.

While ASPIRE marks a significant stride in AI development, it's important to remember that LLMs are still evolving, and even the most advanced models are not infallible. Still, ASPIRE points the way toward more reliable and trustworthy AI applications in the legal sector, fostering greater trust in this emerging technology.

Artificial Intelligence online short course from MIT

Study artificial intelligence and gain the knowledge to support its integration into your organization. If you're looking to gain a competitive edge in today's business world, then this artificial intelligence online course may be the perfect option for you.

  • Key AI management and leadership insights to support informed, strategic decision making.

  • A practical grounding in AI and its business applications, helping you to transform your organization into a future-forward business.

  • A road map for the strategic implementation of AI technologies in a business context.

AI Model Notables

  • Mark Zuckerberg’s new goal is creating artificial general intelligence.

  • OpenAI announced a new partnership with Arizona State University, its first higher education deal, granting the institution full enterprise access to ChatGPT for coursework, tutoring, and more.

  • Microsoft announces the next wave of AI innovations from Microsoft Education that will help unlock productivity and personalize learning.

  • POLITICO launches AI-generated Bill Summaries, available to POLITICO Pro subscribers.

  • AI robot maker Figure announces a BMW partnership – Figure's humanoid robots enable the automation of difficult, unsafe, or tedious tasks throughout the manufacturing process, which in turn allows employees to focus on skills and processes that cannot be automated, as well as continuous improvement in production efficiency and safety.

News You Can Use:

Google News is increasingly featuring AI-generated articles, raising concerns about content reliability and originality.

The World Health Organization (WHO) is releasing new guidance on the ethics and governance of GenAI, outlining over 40 recommendations for governments, technology companies, and healthcare providers to ensure the appropriate use of large multi-modal models (LMMs) to promote and protect the health of populations.

Google CEO says more job cuts are needed in 2024 in order to reach ‘ambitious goals’ related to AI.

AI buzzes Davos, but CEOs wrestle with how to make it pay.

China's Ministry of Industry and Information Technology has released draft guidelines to standardize the country's AI industry.

New York's Department of Financial Services proposes Artificial Intelligence Guidance to combat discrimination.

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.