189 | 🕶️🤓 Zuckerberg's shades

Brainyacts #189

It’s Tuesday. Last week I asked for feedback on the future of Brainyacts by asking you to tell me what you would like to see in future editions. Here are the results:

First, thank you for sharing your feedback.

Second, I am happy to see that the majority want it to continue. It also appears that a once-a-week edition is preferred, with a deeper dive into each one. I can appreciate this. Couple that with some interest in specialized topics and guests/interviews, and you have me thinking about how to make this much more valuable for all of you while keeping it easy to consume. After all, I write these to be read, not to clutter your inbox.

Finally, you will see that some of you suggested I stop the newsletter. Kudos to those who said so candidly. I know there are many alternative sources out there, and they grow in number every day. I will try to distinguish Brainyacts from them going forward.

Let’s dig in.

In today’s Brainyacts:

  1. Synthetic evidence

  2. Meta’s AI Ray-Bans

  3. Amazon avoiding copyright and other AI model news

  4. Wendy’s Drive-Thru AI experience and other AI-related content

👋 to new subscribers!

To read previous editions, click here.

Lead Memo

📑 🧑🏻‍⚖️ More lawyers, judges, and AI companies are facing the stark reality of the rise of synthetic evidence (aka deep fakes)

Technological Interventions

The introduction of Content Integrity tools by the likes of Microsoft marks a proactive approach to enhancing transparency in digital content. These tools enable the attachment of "Content Credentials" to media by the creator, indicating the origin, date, and any alterations made post-creation. This level of transparency is crucial not only for political campaigns and news organizations but also forms a potential bedrock for legal standards regarding the admissibility of digital evidence. Such tools reflect a growing trend of leveraging technology to fortify the truthfulness of digital media.

Check out the Coalition for Content Provenance and Authenticity (C2PA), a group focused on developing technical standards for certifying the source and history (or provenance) of media content. Companies such as Adobe, Google, Intel, and Microsoft are involved.
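The mechanics behind such credentials can be illustrated in miniature. The sketch below is a toy, not the actual C2PA specification: real Content Credentials use X.509 certificate chains and manifests embedded in the media file itself, while this demo uses a simple HMAC with a shared key, and all names (`attach_credentials`, `verify`, the sample metadata) are hypothetical. It shows the core idea, though: a manifest recording origin, date, and edits is cryptographically bound to a hash of the media, so tampering with either the content or the manifest breaks verification.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Stand-in for a creator's private signing key (real C2PA uses certificates).
SECRET_KEY = b"demo-signing-key"


def attach_credentials(media: bytes, origin: str, edits: list) -> dict:
    """Build and sign a provenance manifest for a piece of media."""
    manifest = {
        "origin": origin,                                   # who created it
        "created": datetime.now(timezone.utc).isoformat(),  # when
        "edits": edits,                                     # post-creation alterations
        "content_hash": hashlib.sha256(media).hexdigest(),  # binds manifest to content
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}


def verify(media: bytes, signed: dict) -> bool:
    """Check that neither the manifest nor the media has been altered."""
    manifest = signed["manifest"]
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signed["signature"]):
        return False  # manifest was tampered with
    return manifest["content_hash"] == hashlib.sha256(media).hexdigest()
```

In use: credentialed media verifies cleanly, while changing even one byte of the content, or one field of the manifest, fails the check. That all-or-nothing property is what makes provenance metadata attractive as a foundation for evidentiary standards.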

Legal Adjustments and Challenges

The legal system's response to the rise of generative AI and deepfakes is notably cautious and varied. Federal judges are currently debating how to adapt evidence rules to accommodate the unique challenges posed by AI-generated content. The key concern is crafting a definition of AI that is precise enough to be enforceable yet flexible enough to stay relevant amidst rapid technological advancements. This reflects a broader judicial dilemma: balancing the need for comprehensive, robust laws against the risk of quickly outdated regulations due to the fast pace of technological change.

Relatedly, there are already suggestions for changes to the Federal Rules of Evidence.


Spotlight

🚀🤖 Meta’s AI Ray-Bans

You might have heard about these a few months ago. You might have had a flashback to Google Glass and shuddered at the thought of a new version of that colossal failure of a product.

But wait, hear Mark (Zuckerberg) out, and watch this:

AI Model Notables

 The first meeting bot specifically for legal professionals, for use in depositions, hearings, and more.

 Ex-Amazon exec claims she was asked to ignore copyright law in race to AI.

 Microsoft has unveiled a new AI model that can generate a realistic video of a human face, talking, from an image and an audio clip.

 Microsoft launches Phi-3, its smallest AI model yet, capable of running locally on a phone

 Apple's AI will work without an internet connection

 Apple has acquired the Paris-based artificial intelligence startup Datakalab amid its push to deliver on-device AI tools.

 OpenAI releases statement on commitment to child safety: adopting safety by design principles

News You Can Use:

What it’s like to order at a Wendy’s drive-thru via their AI order taker:

 How United Airlines uses AI to make flying the friendly skies a bit easier

 Deepfakes of Bollywood stars spark worries of AI meddling in India election 

 ChatGPT replacing human investment advisors isn’t a question of ‘when’ but ‘if’

 Tech exec predicts ‘AI girlfriends’ will create $1B business: ‘Comfort at the end of the day’

 A new AI Think Tank launches to focus on optimistic development and uses of AI – The Abundance Institute

 79% of Florida Bar members say AI should be “very closely regulated in the legal profession.”

 How Hollywood agents are addressing potential risks of AI for their clients

 Why LLMs are predicting the future of compliance and risk management

Does the Department of Homeland Security have to disclose, under public record laws, every aspect of its employees' interactions with OpenAI?

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.