045 | Getting Deep

Brainyacts #45

I wonder if ChatGPT is having its Kleenex or Band-Aid moment?

A "generic trademark" or "proprietary eponym" occurs when a specific brand name becomes so well-known that it represents the entire product category. People use the brand name to refer to the generic product, regardless of the actual manufacturer.

There are many more LLMs and chat-based generative AI tools, including from big names like Google and Microsoft. But I am hearing people say “Bing’s ChatGPT” or “Google’s ChatGPT.”

Here are some other examples you might be familiar with:

  • Xerox

  • Google

  • Chapstick

  • Jacuzzi

  • Q-tips

  • Post-its

  • Hoover

  • Scotch Tape

OK, time to dig in.

A special Hola 👋 to my NEW SUBSCRIBERS! 
To read previous posts, go here.

Today we will:

  1. get deeper with Google

  2. love us some law school ChatGPT policy making

  3. consider a thoughtful/creative suggestion for US AI policy

  4. share some global policy think tanks and efforts

  5. do some ChatGPT myth busting

  6. view the generative AI landscape

  7. watch a TED Talk from this week by one of OpenAI’s founders

  8. consider a profound meme

Deep Thoughts/Deep Pockets/DeepMind

Google/Alphabet is stepping up its game with a huge announcement. And because Google is so embedded in many of our daily lives AND Google’s GenAI rollout has been seen as lacking, I figured this deserved your immediate attention. One more tech heavyweight that powers so much of our lives is doubling down and pushing forward fast on its GenAI ambitions.

DeepMind and Google Research's Brain team are joining forces to create a new powerhouse called Google DeepMind. They are saying that this epic combo is gonna speed up our journey to a world where AI tackles humanity's greatest challenges, revolutionizes industries, and throws a big ol' party for diverse communities.

Why is this such a massive deal, you ask? Well, Google DeepMind is assembling the best of the best in AI, giving them the keys to the kingdom: computing power, infrastructure, and resources to unleash next-gen AI breakthroughs and products across Google and Alphabet.

Side note, and a bit confusing: Google actually bought DeepMind back in 2014. It is just that DeepMind had been working separately from the Brain team. OK, guess they realized that was not the best strategy.

More juicy details are coming soon in a town hall meeting, but until then, let's raise a toast to the thrilling adventure ahead with Google DeepMind!

🥳Policy Friday Happy Hour: Woo-Hoo

Just what you wanted to end your week: some juicy, delicious policy and regulatory news. But I promise, there are some cherries in here. So grab your sober-curious drink or a salty, sugary, fully charged margarita and bottoms up! 🍹

Law School AI Policy Making

The University of California, Berkeley School of Law has stepped into the AI arena, crafting a policy to guide students on the use of generative artificial intelligence like ChatGPT. While students can wield this digital wizardry for research and grammar checks, they are forbidden from using it on exams or assignments, and plagiarizing is strictly off-limits (is plagiarism ever not off-limits? 🫤).

The policy aims to deter students from passing off AI-generated work as their own, with the school's honor code now taking AI into account. With finals around the corner, clarity on AI usage is essential to prevent any unintended violations.

With this pioneering policy, Berkeley Law leads the way, while other prestigious law schools have yet to weigh in. The policy sets the stage, but individual professors can still deviate from its rules by providing written notice to students.

In a world where AI is becoming ubiquitous, Berkeley Law seeks a balanced approach to incorporating AI into the legal education landscape.

👨‍🏫 As a full-time law professor and active participant in the academic legal jungle, I find myself privy to the whispers and debates echoing through the hallowed halls of law schools (nothing here is specifically about the school where I teach; I interact with professors and administrators around the globe).

Gather 'round, folks, as I unveil the reactions and policy proposals of my esteemed colleagues on the topic of ChatGPT.

Some quiver in their boots, fearing the AI takeover of their very livelihoods, while others bury their heads in the sand, denying ChatGPT's very existence. A few dismiss it as mere frivolity, incapable of producing anything of substance.

But wait, there's more! 

Here is a list of personas I have created to represent the different approaches faculty, administrators, and schools are taking.

  1. The Technophobe: Certain professors might propose a return to quills and parchment, eschewing not just ChatGPT, but all digital devices in a desperate attempt to restore the good old days.

  2. The Collaborator: Embracing AI with open arms, some innovative faculty members might suggest harnessing the power of ChatGPT to co-author articles, course materials, or even the occasional cheeky limerick.

  3. The AI Whisperer: In a bid to promote understanding, a forward-thinking dean might introduce a mandatory course on AI ethics, empowering students to discern appropriate AI use and navigate the legal implications.

  4. The Talent Scout: The realist among the faculty may recognize the inevitability of AI's growing influence and encourage students to develop proficiency in AI, grooming them as future leaders in AI-driven law.

  5. The Enforcer: To ensure fairness, a strict administrator might recommend installing AI detectors (via internet monitoring) in classrooms and examination halls, akin to metal detectors at airports, lest any crafty students sneak ChatGPT in.

  6. The AI Law Pioneer: With an eye on the horizon, a visionary law professor may push for the creation of a specialized legal AI program, preparing the next generation for the brave new world where AI and law intersect.

So, fellow academics and legal aficionados, the gauntlet has been thrown by Berkeley. Will you or your school be a Technophobe or a Pioneer, a Collaborator or an Enforcer?

The time has come to choose your path in the unfolding saga of AI in law. Embrace the challenge, shape the future, and let us march fearlessly into the uncharted territory where law, ethics, and AI collide – for it is in this crucible of innovation that the legal landscape of tomorrow shall be forged.

Or we can just cling to the status quo and hold on for dear life.

NIST to the Rescue: A Solution for US AI Policy

Ever heard of NIST? If not, you're not alone. As a part of the U.S. Department of Commerce, the National Institute of Standards and Technology (NIST) works behind the scenes to improve our daily lives through better technology and standards.

Anthropic, a startup focusing on safe and ethical AI development, suggests that NIST should lead U.S. AI policy efforts. Let's take a look at why.

What is NIST?

  • NIST is a government agency that develops and promotes measurements, standards, and technology for businesses, industries, and researchers.

  • They create guidelines and tools for various industries, such as cybersecurity, communications, healthcare, and manufacturing.

  • NIST's goal is to ensure everyday life runs smoothly and safely through better technology and standards.

Who is Anthropic?

  • Anthropic is a startup co-founded by former OpenAI employees, focusing on AI safety and ethics.

  • The company aims to create beneficial, reliable, and transparent AI systems.

  • Their work includes developing AI models like Claude, an AI assistant designed to be both more capable and safer.

Why NIST for AI Policy?

  • Current AI governance ideas include new federal agencies, multinational legislative proposals, and temporary pauses on AI development, but Anthropic argues NIST is a more practical solution.

  • NIST has experience in measuring AI systems and developing technical standards.

  • Recently, funding for AI-related programs at NIST has stagnated, but ambitious investment could lead to significant progress.

How Can NIST Improve AI Governance?

  • By building on fundamental measurement techniques and standardizing them across the field.

  • Developing community resources, like testbeds, to assess AI capabilities and risks.

  • Effectively evaluating AI systems for performance and risks, enabling smart AI regulation.

Benefits of NIST's Approach:

  • Enhancing AI safety through rigorous testing and identification of risks.

  • Increasing public trust in AI systems, as they are validated by an independent source.

  • Providing government confidence in AI systems' safety.

  • Promoting innovation by encouraging AI developers to build better technology.

  • Creating a market for system certification and incentivizing developers to participate.

NIST's involvement in AI governance could be a game-changer. By adopting a "portfolio approach," combining NIST's expertise with other tools like internal controls, audits, and regulatory frameworks, the U.S. could ensure a safer and more innovative AI landscape.

🤷 But something tells me AI is too sexy for other government functions and departments to relinquish to NIST. Everybody will want a piece of it. And that is why Anthropic’s suggestion will fail - it likely makes too much sense.

Global Policy Making

The rise of artificial intelligence, and generative AI in particular, has left the world scrambling to establish a global rulebook, and individual nations scrambling to write rulebooks of their own. As they grapple with the complexities of AI and generative technology, differing approaches and the demand for specific oversight add fuel to the fire.

In the midst of a digital policy landscape where consensus is elusive, the race is on to strike a delicate balance between harnessing AI's potential and mitigating its risks. Can and should the world unite to create a global framework, or will the intricacies of AI rule-making keep us focused on our own paths?

Several larger policy initiatives are already underway around the world.

Common themes in these rulebooks include calls for greater transparency, stronger data protection rights, independent oversight, and limitations on "harmful" use cases.

Three main points of difference have emerged. Should countries take a top-down, government-led approach? An industry/sector-led focus? Or a national/economic security-focused perspective? Each approach raises the question of what the specific oversight would be and how it would be executed and monitored.

Use Case: Mythbusting

Since it’s Friday, let’s just cover off on some things I want you to keep in mind.

  1. OpenAI does not have an app for ChatGPT (or for OpenAI itself). You will find many “ChatGPT” apps in the app stores; most are scams or thin skins. If you want to use ChatGPT on your phone, just open it in a web browser.

  2. Asking ChatGPT for its references or sources is not foolproof. Many so-called prompt pros have been telling folks to ask ChatGPT for its sources or references when it provides a response. This may or may not work: ChatGPT is not built to explicitly use or cite specific references, and it will often invent plausible-looking ones. Bing Chat, however, is built to cite its sources.

  3. ChatGPT cannot focus on its own. It views one session as one conversation. So if you prompt it for a meal plan, then ask it to summarize an article, and then ask it for a social media post, it may blend all its responses into its last response - sometimes significantly and sometimes subtly. When you change topics, start a new session (see the first sketch after this list).

  4. ChatGPT is free to use but . . . While yes, access is free, the service is by no means free to run. It may cost OpenAI up to $700,000/day to operate, and it consumes massive amounts of energy and water.

  5. ChatGPT does not understand what you are asking it. It is a series of algorithms using complex math and learned statistical patterns to predict the next likely word (or token, to get technical), and it has been trained to respond in a confident and persuasive manner (see the second sketch after this list).

  6. ChatGPT isn’t supposed to be good at math. ChatGPT is a language model, not a math engine. It doesn’t recognize math as math - only as language - so it reproduces numerical patterns it has learned from rather than actually calculating.
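To make myth #3 concrete, here is a minimal sketch of why one session is one conversation. It assumes the 2023-era openai Python package and its ChatCompletion endpoint; the API key and model name are illustrative placeholders, and the ChatGPT website works the same way conceptually, resending the whole history on every turn.

```python
# Minimal sketch: one session = one conversation.
# Assumes the 2023-era `openai` package (ChatCompletion API);
# the API key and model name are illustrative placeholders.
import openai

openai.api_key = "YOUR_API_KEY"

history = []  # everything in here is sent to the model on EVERY turn

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=history,  # the full conversation so far, not just this prompt
    )
    answer = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

ask("Draft a one-week meal plan.")
ask("Summarize the attached article.")  # still carries the meal-plan context
ask("Write a social media post.")       # may blend both earlier topics

history.clear()  # "starting a new session" is just starting with an empty history
```

The takeaway: within a session the model never lets go of earlier turns, so unrelated prompts keep influencing each other until you start fresh.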
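And to make myth #5 concrete, here is a toy next-word predictor. It is a drastic simplification (word-pair counts instead of a neural network with billions of parameters), but the core move is the same one ChatGPT makes: given what came before, pick a statistically likely next token.

```python
import random
from collections import Counter, defaultdict

# Toy "next likely word" predictor built from word-pair frequencies.
# A drastic simplification of what an LLM does, for intuition only.
corpus = ("the court denied the motion and the court granted "
          "the appeal and the motion was denied").split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # count which words follow which

def predict_next(word):
    counts = following[word]
    if not counts:
        return None
    # Sample in proportion to frequency: statistics, not understanding.
    return random.choices(list(counts), weights=counts.values())[0]

print(predict_next("the"))  # e.g. "court", "motion", or "appeal"
```

Notice there is no arithmetic or legal reasoning anywhere in that code, only co-occurrence counts - which is also why myth #6 holds: math reaches the model as just more text.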

News you can Use: 

There is more to Generative AI than ChatGPT!

OpenAI at TED

For the full TED Talk from earlier today, go here.

In The Meme-time: 

Will (can) we ever trust AI?

That's a wrap for today. Stay thirsty & see ya next time! If you want more, be sure to follow me on Twitter and LinkedIn.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.