085 | Anosognosia

Brainyacts #85

In today’s Brainyacts we:

  1. watch a judge over-correct for ChatGPT

  2. get breaking research from OpenAI on hallucinations

  3. follow a crypto leader’s call-out of ChatGPT use

  4. get a camera that takes pictures based on AI + GPS, no lens! 

  5. get a perfect example of human hubris meets poor prompting

👋 A special Welcome! to NEW SUBSCRIBERS.
To reach previous posts, go here.

"Come On, Man!" Courtroom Edition - The Judge's AI Overreach

First for folks unfamiliar with the NFL and ESPN’s Monday Night Football -

"Come On, Man!" is a humorous segment from ESPN's Monday Night Countdown show, where hosts highlight questionable or absurd moments, errors, or decisions from the past week's American football games. It's akin to a "blooper reel," meant to entertain by showcasing these sports faux pas with a dose of good-natured ribbing. The phrase "Come On, Man!" is an exclamation of disbelief or disapproval used in response to these moments.

Now onto GPT-4’s own rendition of Come On, Man! for the Texas judge who issued a Mandatory Certification requirement: any lawyer appearing in his court must file a certificate affirming they did not use a ChatGPT-like tool in drafting the filing, or if they did, that they, you know, did what lawyers do ... reviewed it!

This follows last week’s unfortunate event where a lawyer used ChatGPT for legal research and wound up citing cases that don’t exist. That lawyer, Steven Schwartz of Levidow, Levidow & Oberman, now faces a sanctions hearing on June 8th in NY.

"Come On, Man!"

Alright, folks, it's time for a new episode of 'Come On, Man!' courtroom edition. This time, we're turning our gaze to a judge who's a bit too worked up about the rise of AI in the legal field.

Picture this: our courtroom hero, a savvy attorney, has been harnessing the power of AI to streamline his work, drafting parts of his filings with the aid of ChatGPT. He's always diligent, of course, double-checking everything to ensure accuracy and legal soundness. But then, the judge throws a wrench in the works.

The judge issues a new rule, demanding that every attorney appearing before him must file a certificate stating that no part of their filing was drafted by AI—or if it was, that a human has checked it for accuracy. Now, our attorney's like, "Okay, a little extra paperwork, but no big deal, right?" Wrong.

The judge then takes it a step further, threatening to strike any filing if this AI certification isn't provided. He's got a bee in his bonnet about AI 'hallucinations' and 'bias,' ignoring the fact that the attorney is always the final line of defense, carefully reviewing and correcting any AI-drafted document.

Now, we're all here scratching our heads, asking: 'Come On, Man!' What's next? Do we need to certify that an intern, a paralegal, or even a secretary didn't draft the document? Are we going to start second-guessing spell-check next?

The key point the judge seems to be missing is that no matter who—or what—creates the initial draft, the responsibility always falls on the attorney to review, verify, and sign off on the filing. That's what they went to law school for, after all!

So, to our judge who's a bit too cautious about AI... Come On, Man! Let's keep the focus where it belongs—on the diligence and integrity of the attorney, not on who or what types the first draft.

OpenAI is solving its hallucination problem

OpenAI researchers have trained a model to be very good at solving math problems (this could have implications for other uses as well). But instead of rewarding the model only when it gets the final answer right, they reward it every time it takes a correct step toward the solution. This approach, called "process supervision", outperforms rewarding only the correct final answer, which is called "outcome supervision".

Process supervision not only makes the model better at solving problems, it also helps align the model's way of thinking with ours. It makes sure that every step the model takes in solving a problem is a step that we humans would approve of. This helps us understand how the model is thinking and reduces the risk of it making up things, a problem often called "hallucinations".

Usually, making AI safer might lead to a decrease in performance, something known as an "alignment tax". However, in this case, using process supervision actually improves the model's performance in solving math problems, meaning there's no such tax. This could encourage more people to use this safer method, which would be a good thing for aligning AI with our values.
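The difference between the two reward schemes can be sketched in a few lines. This is a conceptual illustration only, not OpenAI’s actual training code; `is_step_valid` is a hypothetical stand-in for the trained reward model that judges each reasoning step:

```python
# Conceptual sketch: outcome supervision vs. process supervision.
# Illustrative only -- `is_step_valid` is a hypothetical verifier
# standing in for a learned step-level reward model.

def is_step_valid(step: str) -> bool:
    # Toy checker: in this sketch, steps tagged "[ok]" count as correct.
    return step.endswith("[ok]")

def outcome_reward(final_answer: str, target: str) -> float:
    # Outcome supervision: a single reward based only on the final answer.
    return 1.0 if final_answer == target else 0.0

def process_reward(steps: list[str]) -> float:
    # Process supervision: credit for every individually correct step,
    # even when a later step goes wrong.
    return sum(1.0 for step in steps if is_step_valid(step))

solution = ["2x + 3 = 11 [ok]", "2x = 8 [ok]", "x = 5"]  # last step is wrong
print(outcome_reward(final_answer="x = 5", target="x = 4"))  # 0.0
print(process_reward(solution))  # 2.0 -- the two valid steps still earn credit
```

The key intuition: under outcome supervision a near-perfect derivation that stumbles at the end gets nothing, while process supervision still rewards the sound steps, steering the model toward human-endorsed reasoning.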

BUT OpenAI is managing our expectations with their note at the end . . .

“It is unknown how broadly these results will generalize beyond the domain of math, and we consider it important for future work to explore the impact of process supervision in other domains. If these results generalize, we may find that process supervision gives us the best of both worlds – a method that is both more performant and more aligned than outcome supervision.”

Well, we will just have to see.

Or perhaps this model has simply evolved to a new affliction - instead of hallucinating, it might have Anosognosia. This is a medical condition where a person is unaware of their own illness or condition. In this sense, it's the opposite of hallucination: instead of perceiving something that isn't there, the person fails to perceive something that is there (their own illness). Yikes!

It takes a crypto leader to knock sense into ChatGPT laziness

"people just become too lazy and don’t do their own research and own those decisions."

As crypto embraces A.I., a major exchange scraps ChatGPT integration because ‘it’s very dangerous.’ More here

A camera that works without a lens, only GPS

Think of all the use cases (legal and illegal).

News you can Lose: 

Human hubris while prompting like sh!t

Perfect example of an exquisitely poor prompt where the human then claims victory over AI. Um, you just outed yourself as a total newbie and then you bragged about it.

Here is the article - the headline tells you who the person is.

And here is their “brilliant” prompt:

Obviously not a Brainyacts reader, eh?

Was this newsletter useful? Help me to improve!


DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.