193 | šŸ“° šŸšØNYTimes Asks How Law Firms Use AI

Brainyacts #193

Itā€™s Tuesday. I read all of your feedback. At the end of every newsletter is a poll. Many of you take it - thanks. Please keep doing so, as I use your feedback to help shape this newsletter.

A few editions ago I asked you all about the future of Brainyacts. There are changes coming, but this is certainly not the end. I want to share two comments from the many I receive; they delighted me, and I hope you share their sentiment.

ā€œI am glad you are keeping up at least weekly feeds. I totally disagree with the rationale for stopping Brainyacts because your content is available elsewhere. You frequently have it here first, lend perspective to emerging events, and identify/link topics and articles that are important for us to know about. Keep up the good work.ā€

ā€œI just stumbled upon your newsletter- and I love it. [A]nd I think that you are a beast (in the best way!).ā€

Thank you! You all make me feel like a beast when you share your feedback!

Letā€™s get to it.

In todayā€™s Brainyacts:

  1. Voice Banks (preserving our voices for AI)

  2. Testing and Building Your Deep Fake Detector

  3. Microsoft has a new AI model and other AI model news

  4. NYTimes asking how law firms are using AI and other AI-related content

šŸ‘‹ to new subscribers!

To read previous editions, click here.

Lead Memo

šŸ—£ļøšŸ”’ The Ethical and Emotional Implications of AI Voice Preservation

The following post was inspired by Randy Travis.

In his first new release since a 2013 stroke left him unable to sing, Randy Travis harnesses AI technology to offer fans a nostalgic, folksy song, "Where That Came From." This project uses an AI model and a surrogate singer to recreate Travis' distinctive baritone, reflecting both the promise and ethical questions surrounding voice cloning in the music industry.

The Memo

The technological resurrection of voices using AI, as demonstrated by Randy Travis' song, has opened a Pandora's box of possibilities, concerns, and ethical dilemmas surrounding digital immortality. This development raises a pressing question:

Should we all consider creating voice banks to preserve our voices, allowing future generations the chance to interact with us even after we are gone?

Emotional Resonance and Ethical Boundaries

The emotional allure of voice preservation is palpable. Many people treasure old voicemails of deceased loved ones as sonic mementos that momentarily bridge the gap between life and death. In a similar vein, AI voice generation, like the project involving Randy Travis, showcases a tender blend of nostalgia and innovation. It allows fans to experience new music from an artist who can no longer perform, preserving his musical essence in a format that echoes across time.

However, this technology also ventures into ethically ambiguous territory (like everything about AI). The idea of interacting with a digitally recreated voice of a deceased relative or friend straddles the line between comforting and eerie. It introduces a form of presence that is simultaneously there and not thereā€”a ghost in the machine, so to speak. This raises profound questions about the psychological impact on individuals who might find themselves regularly conversing with an AI facsimile of a loved one.

Legal Considerations and Voice Rights

From a legal perspective, the burgeoning use of AI in voice cloning also introduces a complex web of rights and permissions. The recent passage of Tennesseeā€™s ELVIS Act, which allows legal action against unauthorized recreations of an artistā€™s voice, underscores the necessity for robust legal frameworks to manage these technologies. For non-celebrities, the idea of a personal voice bank brings about its own set of legal challenges. How do we regulate the use of an individualā€™s voice after their death? Who holds the rights to control and consent to the usage of these digital artifacts?

To safeguard against misuse, any system of voice banking would need stringent controls over who can access and utilize these voices. The creation of such banks would necessitate clear guidelines and perhaps even contractual agreements stipulating the terms under which these voices may be used posthumously.

The Double-Edged Sword of Digital Immortality

Finally, the notion of AI voice preservation as a form of digital immortality offers both solace and a caveat. While it provides a unique way to endure beyond our physical existence, it also poses significant philosophical and ethical questions about the essence of human identity and the nature of grief and memory. Are we merely the sum of our recorded words and actions, or is there something ineffable about our presence that cannot be captured or recreated by algorithms?

Spotlight

šŸ“²šŸ“ø Spotting AI-Generated Images

Ok, I cover a lot on the deepfake/AI-generated content front. Today I thought it might be helpful to walk through some examples and provide tips on how to spot potentially AI-generated images.

Below I am showing images side-by-side. On the left is the image; on the right I have indicated what you should look for to spot AIā€™s influence. At the end, I will share one image as a challenge: reply to this email with what you think makes it AI-generated.

Ready? Here we go:

  1. Know what is supposed to be in the picture. Here we have a fairly convincing photo of a Cybertruck post-accident. Most people would accept it as real. But if you know Cybertrucks (I had to look this up), you will know that they do not have door handles on the doors, and they do have side-view mirrors (every vehicle needs them by law).

  2. Patterns. Text. Fingers. This picture has all three going on. AI-generated images struggle with each of these. Unless someone takes further steps to correct them, many images will have issues with all three.

  3. Fingers. This is a great example of what looks like a random candid. Just look at the fingers of both people.

  4. Text and lettering. Often, text in the background will be nonsensical gibberish. Scan any lettering to see if it is actual text (in any language).

Ok, now that you have a deepfake-detecting superpower, here is your test. What seems ā€œoffā€ in this pic? Hit reply to this email with your answer.

AI Model Notables

ā–ŗ Is OpenAI messing with us?

Last week a new model - gpt2 - appeared on an industry leaderboard. Many reported that it was superior to GPT-4, Gemini, and Claude. It was then removed, but new versions(?) have appeared: 'im-a-good-gpt2-chatbot' and 'im-also-a-good-gpt2-chatbot'. Sam Altman (CEO of OpenAI) tweeted the above earlier in the day. So either he is just having some fun; these are indeed new models showing immense improvements over all others; or they are older models that have been reconfigured in some way. We shall see what comes next.

ā–ŗ JPMorgan Unveils IndexGPT: ā€œIndexGPT is a new range of thematic investment baskets created with the help of OpenAIā€™s GPT-4 model. The tool generates a list of keywords associated with a theme, which are then fed into a separate natural language processing model that scans news articles to identify companies involved in the space.

In essence, itā€™s a largely automated way to create so-called thematic indexes, which identify investments based on emerging trends ā€” think cloud computing, e-sports or cybersecurity ā€” rather than on traditional industry sectors or company fundamentals.ā€
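The quote describes a two-stage pipeline: a large language model expands a theme into keywords, and a second, simpler model matches those keywords against news text to surface companies. Here is a minimal, hypothetical sketch of that shape - the function names, canned keywords, and sample articles are all invented for illustration, and a keyword match stands in for both the GPT-4 call and the proprietary NLP model:

```python
def generate_theme_keywords(theme: str) -> list[str]:
    # Stage one: stand-in for the GPT-4 call. A real system would
    # prompt the model; here we return canned keywords for the demo.
    canned = {
        "cybersecurity": ["ransomware", "zero trust", "endpoint security"],
    }
    return canned.get(theme, [])

def scan_articles(articles: list[tuple[str, str]], keywords: list[str]) -> list[str]:
    # Stage two: stand-in for the NLP model that scans news articles
    # and links the companies they mention to the theme.
    hits = []
    for company, text in articles:
        if any(kw in text.lower() for kw in keywords):
            hits.append(company)
    return hits

articles = [
    ("Acme Corp", "Acme launches zero trust platform for enterprises."),
    ("Foo Foods", "Foo Foods reports record snack sales."),
]
print(scan_articles(articles, generate_theme_keywords("cybersecurity")))
# → ['Acme Corp']
```

The point of the sketch is the division of labor: the expensive generative model runs once per theme, while the cheap matcher runs over every article.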

ā–ŗ OpenAI is developing Media Manager (to be released in 2025), a tool that will let creators and content owners tell OpenAI what they own and specify how they want their works included in or excluded from machine learning research and training.

ā–ŗ OpenAI is releasing a tool designed to detect content created by its own popular image generator, DALL-E.

ā–ŗ Microsoft readies new AI model to compete with Google and OpenAI.

ā–ŗ Coca-Cola commits $1.1 billion to using Microsoft's AI.

ā–ŗ New startup claims ā€œhallucination-freeā€ AI for businesses.

ā–ŗ More details of the AI upgrades heading to Appleā€™s iOS 18 have leaked.

ā–ŗ X launches Stories, delivering news summarized by Grok AI.

ā–ŗ Access ChatGPT via this webpage now: https://chatgpt.com/

ā–ŗ Chip maker Nvidia launches ChatRTX, which lets you personalize a large language model with local content from your own PC.

News You Can Use:

āž­ The New York Times wants to know how lawyers are using generative AI. Interestingly, they specifically call out contract lawyers.

āž­ Audible, Amazonā€™s audiobook company, has over 40,000 books ā€˜read by AIā€™.

āž­ South Korean woman falls for deep fake Elon Musk, loses $50K in romance scam.

āž­ DocuSign acquires AI-powered contract management firm Lexion.

āž­ Former Trump strategist now an evangelist for AI in political campaigns.

āž­ Teens donā€™t know how to talk to adults but they are opening up to AI chatbots (what could go wrong!?).

Was this newsletter useful? Help me to improve!

With your feedback, I can improve the letter. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.