
222 | šŸ„ø šŸ«£ Your next ZOOM could easily have a fake participant

Brainyacts #222

It's Friday. Here is a comprehensive list of 777 known and studied Generative AI risks, compiled by MIT. It's worth checking out.

Be careful out there!

Here we go.

In today's Brainyacts:

  1. An exercise in reasoning (becoming a better user of AI)

  2. Real-time video deepfakes are here (coming to your next Zoom)

  3. MAGA Mickey with a beer and other AI model news

  4. Ex-CEO of Google sees AI companies using lawyers to steal IP, and more news you can use

    šŸ‘‹ to all subscribers!

To read previous editions, click here.

Lead Memo

šŸ‹ļø šŸ¤“ An exercise to make you better at using GenAI/LLMs. Reasoning Engines, Not Search Engines

For most people, it is still too easy to fall into the trap of viewing Large Language Models (LLMs) or GenAI tools as merely advanced search engines. After all, these models will produce answers to almost any question at the drop of a hat, making them seem like a more powerful version of Google. However, this perception misses the mark. LLMs are not search engines; they are reasoning engines.

For legal professionals who rely on precision and nuance, understanding this distinction is crucial for leveraging AI effectively in their work.

What is Reasoning?

Reasoning is the process of logically analyzing information to form a conclusion or judgment. It's how we synthesize information, identify connections, and solve problems. In the context of LLMs, reasoning can be broadly categorized into three types: deductive, inductive, and abductive reasoning.

Each type serves a different purpose and has distinct applications, strengths, and weaknesses in legal practice. To help you become a better and smarter user of LLMs, I suggest you familiarize yourself with the different types of reasoning and then run a short experiment. Doing this will help you craft better prompts as well as better understand what the AI model is doing in its responses to you.

Here we go:

Testing Deductive Reasoning: Applying General Principles to Specific Cases

Use Case: Interpreting Legal Precedent to Predict Case Outcomes

Scenario: You're preparing a brief and need to predict how a court might apply a specific legal precedent to the facts of your current case. Deductive reasoning allows you to apply the general rule established by the precedent to the specifics of your case.

Experimenting with LLMs: Provide the LLM with the text of a relevant legal precedent and a set of specific facts. Ask it to apply the precedent to these facts and predict the outcome. For instance, if the precedent involves the doctrine of res judicata, see if the LLM correctly applies it to a hypothetical case where a similar issue has been previously adjudicated.

Strengths and Weaknesses: LLMs are generally strong at deductive reasoning when the rule is clear-cut and well represented in their training data. However, they can struggle with nuances, such as distinguishing between binding and persuasive precedent or applying the rule in a jurisdiction-specific context. Experiment with various cases to see where the model excels or falters in applying deductive reasoning. Again, you are not using the model to generate case citations or case-specific language; this is about learning how the model behaves.
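If you want to run this experiment as a script rather than in a chat window, here is a minimal Python sketch using the OpenAI SDK. The model name, system instruction, and placeholder texts are my illustrative assumptions, not part of the exercise itself; swap in your own precedent, facts, and whichever model you have access to.

```python
# Minimal sketch of the deductive-reasoning experiment (assumes the
# OpenAI Python SDK and an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholders -- paste your own materials here.
precedent = """<text of the relevant precedent, e.g. a res judicata case>"""
facts = """<the specific facts of your hypothetical case>"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {
            "role": "system",
            "content": (
                "This is a legal reasoning exercise. Apply the general rule "
                "from the precedent to the facts, predict the outcome, and "
                "show each deductive step explicitly."
            ),
        },
        {
            "role": "user",
            "content": f"Precedent:\n{precedent}\n\nFacts:\n{facts}",
        },
    ],
)
print(response.choices[0].message.content)
```

Asking the model to show each deductive step is the point of the exercise: it exposes where the chain from rule to conclusion holds, and where it quietly skips a jurisdictional or procedural nuance.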

Testing Inductive Reasoning: Generalizing from Specific Instances

Use Case: Identifying Emerging Legal Trends

Scenario: You want to identify a trend in how courts have been interpreting a particular statute over the past decade, perhaps to advise a client on the likely outcome of a pending case. Inductive reasoning allows you to generalize from specific court decisions to predict future rulings.

Experimenting with LLMs: Feed the LLM summaries or excerpts from a series of cases interpreting the statute in question. Ask it to identify patterns in the rulings and predict how a court might rule in a new case involving the same statute. For example, if the statute involves consumer protection, the LLM might recognize a trend where courts have increasingly favored consumer rights in recent years.

Strengths and Weaknesses: LLMs can be quite effective at identifying patterns in large datasets, making them useful for spotting trends across cases. However, they are limited by the data they were trained on; if the training data is not up-to-date or is biased, the conclusions may be misleading. Experiment by providing the LLM with cases from different time periods or jurisdictions to assess its inductive reasoning accuracy.
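Here is the same kind of sketch for the inductive experiment. The case summaries are hypothetical placeholders; in practice you would paste excerpts from real decisions, ideally from different periods or jurisdictions, so you can probe where the generalization breaks.

```python
# Minimal sketch of the inductive-reasoning experiment (same assumed
# setup: OpenAI Python SDK, OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholders -- substitute real summaries or excerpts.
case_summaries = [
    "2015: <summary of ruling #1 interpreting the statute>",
    "2019: <summary of ruling #2>",
    "2023: <summary of ruling #3>",
]

prompt = (
    "Below are summaries of court rulings interpreting the same statute.\n\n"
    + "\n\n".join(case_summaries)
    + "\n\nIdentify any pattern in how courts have applied the statute over "
    "time, then predict how a court would likely rule in a new case raising "
    "the same issue. Explain what the prediction rests on."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```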

Testing Abductive Reasoning: Forming the Best Hypothesis Based on Evidence

Use Case: Developing Hypotheses for Complex Litigation Strategy

Scenario: In a complex litigation case with multiple possible interpretations of the evidence, you need to hypothesize the most likely explanation for what happened, which will shape your litigation strategy. Abductive reasoning helps you generate and evaluate possible scenarios to determine the best course of action.

Experimenting with LLMs: Present the LLM with a complex set of facts that could lead to multiple plausible legal theories. Ask it to propose the most likely explanation and suggest a corresponding legal strategy. For example, in a case involving a corporate merger gone wrong, the LLM could hypothesize whether the failure was due to a breach of fiduciary duty, a regulatory issue, or market conditions.

Strengths and Weaknesses: LLMs can generate creative hypotheses based on the data provided, which can be particularly useful in brainstorming sessions. However, they may also propose less plausible explanations that are based on patterns in the training data rather than a deep understanding of legal principles. You should test the LLM by providing it with scenarios where the correct hypothesis is counterintuitive or involves complex legal nuances to see how well it handles abductive reasoning.
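And a matching sketch for the abductive experiment. The fact pattern is a placeholder for something like the failed-merger scenario above; asking for ranked hypotheses makes it easier to spot when the model favors a pattern-matched explanation over a legally plausible one.

```python
# Minimal sketch of the abductive-reasoning experiment (same assumed
# setup as the previous sketches).
from openai import OpenAI

client = OpenAI()

# Hypothetical placeholder -- use an ambiguous fact pattern of your own.
facts = """<an ambiguous fact pattern, e.g. a corporate merger gone wrong,
with facts consistent with fiduciary breach, regulatory failure, or
adverse market conditions>"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                f"Facts:\n{facts}\n\n"
                "List the plausible explanations for what happened, rank "
                "them from most to least likely, justify the ranking, and "
                "outline a litigation strategy for the top hypothesis."
            ),
        }
    ],
)
print(response.choices[0].message.content)
```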

Why You Need to Understand This

For legal professionals, understanding that LLMs are reasoning engines, not search engines, is critical to using these tools effectively. When we treat LLMs like search engines, we expect them to retrieve and present information exactly as it exists, without much interpretation. However, LLMs do more than just retrieve: they interpret, generalize, and sometimes even infer, which means their outputs must be critically evaluated.

By understanding how LLMs reason and where they might fall short, you can use these powerful tools more effectively and responsibly in your practice. This approach not only enhances efficiency but also ensures that the AI's contributions genuinely support your work.

Spotlight

ā˜¢ļø ā›”ļø Realtime deepfakes via Zoom

Deepfakes are now tied for third place among the greatest security concerns: password breaches still rank highest (64%), followed closely by ransomware (63%), with phishing/social engineering attacks and deepfakes tied at 61%.

And this is why.

Watch the short video below. This is all done in real time using just one picture of someone else. Can you imagine jumping on a Zoom call with a client, outside counsel, or someone else, thinking it was them when in truth it wasn't? How could you tell? Voice, maybe? But there is real-time AI voice cloning too. Put these together and you have a HUGE recipe for disaster.

AI Model Notables

ā–ŗ Grok-2 AI image creation tool launched on X with limited guardrails and numerous questions. See image below!!

ā–ŗ Google's AI image generator, Imagen 3, is now available, offering improved performance and capabilities. 

ā–ŗ Grammarly to roll out a new AI content detector tool. Here's how it works.

ā–ŗ OpenAI recently released an update to its relatively new GPT-4o model. OpenAI has not provided specifics on what improvements it made, but the updated model was tested under the codename "anonymous-chatbot" and reclaimed the top spot on LMSYS Arena.

News You Can Use:

āž­ California's controversial AI bill is on the verge of becoming law.

āž­ UC Berkeley to offer an AI-focused law degree.

āž­ Illinois becomes second state to ban unauthorized digital replicas created by generative AI (Tennessee was first).

āž­ Walmart's CEO Doug McMillon just reported that the company is using generative AI to increase its productivity, updating 850 million product catalog entries 100 times faster than human-led methods.

āž­ The Consumer Financial Protection Bureau (the CFPB is a unit of the Federal Reserve System) weighs in on the opportunities and risks of artificial intelligence in the financial services sector.

āž­ San Francisco is suing 16 of the most popular AI "undressing" websites that allow users to create deepfake nudes (mostly of women and girls) without their consent.

Was this newsletter useful? Help me improve!

With your feedback, I can make this letter better. Click on a link to vote:


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.