138 | 🎙️🤖 Chat (ro)bots

Brainyacts #138

It’s Friday. And we are “making chat (ro)bots.” So sit back. Kick up your feet. Watch some talking robots.

Let’s get going!

In today’s Brainyacts:

  1. Are you indemnified?

  2. AI Standard Clauses to update tech agreements

  3. Benchmarking LLMs and more AI model news

  4. Woodpecker breaks hallucinations and other related news

👋 to new subscribers!

To read previous editions, click here.

Lead Memo

🙈🙉🙊 Who’s Indemnifying Who?

Understanding Indemnification in the AI Landscape

It's now widely accepted that most generative AI models were trained on at least some copyrighted material. However, the implications of this – for copyright holders, AI companies, and users – remain largely undefined. This uncertainty is especially pertinent if you're a user.

Whether you're employing generative AI for individual tasks or incorporating it into your business operations, you should be well-informed about indemnification.

Here's the crux of the matter: The legal repercussions of using AI-generated content that might contain copyrighted material remain undecided. Ongoing legal disputes and forthcoming policies may offer clarity, but currently, we're navigating the complex intersections of groundbreaking technology and undefined liability.

Broadly, AI companies fall into two categories:

  1. Those that place the onus on you: These companies expect you to defend and indemnify them against any legal claims arising from your use of the AI-generated content they provide.

Example: OpenAI, the renowned market leader, states, "You will defend, indemnify, and hold harmless us, our affiliates, and our personnel... relating to your use of the Services." This stipulation applies whether you're a ChatGPT user, a Plus subscriber, or utilizing their API for custom solutions. Their terms can be reviewed [here].

Similarly, Anthropic, another heavyweight in the field, refrains from offering protection, reserving the right to "hold harmless" itself from any potential liabilities, including intellectual property-related ones. Their terms can be explored [here].

Harvey.ai, known for its legal models, maintains broad and ambiguous terms. Although they acknowledge potential "exogenous" content, they largely disavow responsibility. Dive into their terms [here].

It's important to emphasize that these companies, while protective of their stance, offer invaluable services. Given the prevailing uncertainties, assuming the responsibility of protecting every user could be a gargantuan task.

  2. Those that offer indemnification: However, this often comes with conditions and nuances.

Example: Adobe provides complete indemnification for its Firefly image generator, but this benefit is reserved for those with enterprise licenses.

Microsoft made waves by announcing its commitment to indemnification. However, a closer inspection of their terms reveals variances across products.

Their expansive use of AI means different products might have varied indemnification policies. Some, like the Microsoft 365 Copilot for Word, Excel, and PowerPoint, and Bing Chat Enterprise, offer this protection. GitHub Copilot, a collaborative effort with OpenAI, is also covered. Yet, users of the "preview" feature on Azure, powered by the Azure OpenAI Service, might find themselves shouldering the responsibility for copyright infringement claims.

Navigating all of the terms associated with Microsoft products is a grueling task that often leads to confusion and misalignment. I’ve tried to make sense of them all, but I am left stymied. At some points I am not even sure which product is which. The language is inconsistent and self-referential, often creating loops where I am left wondering whether I am still reading about the product I thought I was.

I don’t think Microsoft is doing this on purpose. Rather, I believe it is due to complexity and velocity. It is a sprawling organization with many different product teams, each moving at its own pace and trying to keep its Terms of Use/Service updated. But those updates do not keep pace with everything else, so the terms lag and can fall out of sync.

Takeaway? Read those Terms of Use/Service so you know what risk you have. Even if it is slight, it doesn’t hurt to know.

Spotlight

Interesting Contract Language Resource.

Bonterms is an initiative aimed at streamlining the legal and contractual processes involved when companies want to use or offer cloud and potentially other tech services.

Instead of every company creating its unique contract, leading to long negotiation periods and frustrations, Bonterms promotes the idea of open-source standard agreements. These are contracts that are collaboratively developed by experts, made freely available, and can be easily customized as needed.

They recently released their AI Standard Clauses. Organizations can use these to guide and update any existing technology-related agreements.

AI Model Notables

 OpenAI vs. Anthropic vs. Cohere: Benchmarking LLMs

 OpenAI forms team to study ‘catastrophic’ AI risks, including nuclear threats

 Microsoft earnings: Stock soars as AI drives record revenue again

 Sneak peek at Google’s Gemini (supposed to outperform GPT-4)

 Google Maps updates: Immersive View for routes and other AI features

 Gmail could soon let you dictate all your work emails

 Drivers will soon be able to chat with their cars with ChatGPT (DS and Mercedes)

 Anthropic, Google, Microsoft & OpenAI announce Executive Director of the Frontier Model Forum & over $10 million for a new AI Safety Fund

News You Can Use:

A new online tool allows users to identify, track and learn about the legal status of training data sets for generative AI, and a quick glance shows that many may have licensing issues.

Researchers develop ‘Woodpecker’: A groundbreaking solution to AI’s hallucination problem

Airbnb using AI to block 'high risk' Halloween party bookings

The White House to unveil sweeping AI executive order on Monday

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.