046.5 | 🚨(Fixed) Talkin' Tokens

Brainyacts #46.5

🚨🚨Pardon the second email. To my horror, last night’s newsletter was impacted by a technical snafu - I am sure you noticed. To add insult to injury, I was also locked out of sending this correction for 12 hours. Wow! What a mess!

Here is what you should’ve read. Again, sorry for the extra email!

What’s on my mind this Saturday.

Prompt engineers are like hamsters in a wheel, not realizing the awesomeness they're sitting in. It's about time we switch from "how" (prompts) to "what" (use cases) and dive into this pool of opportunities to find some hidden gems. In this wild GenAI world, it's all about using these AI beasts to tackle problems that make us go "huh?"

With the right vibes, we can turn daydreams into real-deal solutions, leaving the prompt-obsessed folks eating our dust.

Ok, time to dig in

A special こんにちは (Konnichiwa) 👋 to my NEW SUBSCRIBERS! 
To read previous posts, go here.

Today we will:

  1. hit on Japan’s use of ChatGPT to replace non-existing humans

  2. give you inside scoop on Tokens - a GenAI fundamental

  3. share a superhero prompt you can use in a tight spot

  4. worry about tech overlords hoarding AI policy-making

  5. take a trip with ChatGPT

  6. get scared about China’s AI satellite takeover

  7. see a meme that hits home for far too many

Yokosuka City in Japan Embraces AI Sidekick ChatGPT to Fill the Gap Left by Missing Humans Amid Population Crisis

As the world panics over AI potentially replacing humans, Yokosuka City in Japan has found a unique solution by using AI chatbot ChatGPT to fill the gap left by its shrinking workforce. With the nation facing a population crisis, the city has turned to ChatGPT to help with government administration tasks and improve efficiency. By running a one-month trial for its 4,000 municipal employees, Yokosuka City is the first Japanese city to adopt the technology in this way, providing a prime example of AI harmoniously coexisting with humans.

Key Points

  • Instead of replacing humans, ChatGPT supports them by handling administrative tasks and enhancing workflow.

  • ChatGPT's implementation includes tasks like summarizing documents, checking spelling errors, and generating ideas.

  • Yokosuka City ensures no confidential or personal information is entered into ChatGPT.

  • While some governments have raised data privacy concerns, Yokosuka City focuses on the positive aspects of ChatGPT, demonstrating AI's potential for collaboration with humans.

  • OpenAI CEO Sam Altman met with Japanese Prime Minister Fumio Kishida to discuss ChatGPT's benefits and risks, emphasizing the harmonious relationship between AI and human workers.

Tokens:

🧱The Magic of Tokens: Unveiling the Building Blocks of AI

You might have heard or read about these things called tokens. Let’s spend a few minutes understanding them better.

These tiny building blocks play a crucial role in how generative AI models like ChatGPT work, and understanding them can make a world of difference for users like you.

🦸‍♀️Tokens: The Unsung Heroes of AI

Have you ever wondered how AI models understand and generate text? The secret lies in tokens! Tokens are individual pieces of language, as short as one character or as long as a word (or more).

For instance, the sentence "ChatGPT is fun" might be broken down into tokens like ["Chat", "G", "PT", " is", " fun"]. The AI model learns and generates text by predicting and understanding sequences of these tokens.

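To make this concrete, here's a toy tokenizer sketch in Python. The tiny vocabulary is hand-made just for this example; real models use learned byte-pair-encoding vocabularies with tens of thousands of entries, so actual token splits will differ.

```python
# Toy subword tokenizer: greedy longest-match against a small vocabulary.
# The vocabulary below is hypothetical, built to reproduce the example
# split; real tokenizers learn their vocabularies from data.
VOCAB = {"Chat", "G", "PT", " is", " fun"}

def tokenize(text, vocab=VOCAB):
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for length in range(len(text) - i, 0, -1):
            piece = text[i:i + length]
            if piece in vocab:
                tokens.append(piece)
                i += length
                break
        else:
            # No vocabulary match: emit the character as its own token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("ChatGPT is fun"))  # ['Chat', 'G', 'PT', ' is', ' fun']
```

Notice how "ChatGPT" splits into three tokens while " is" and " fun" (with their leading spaces) are single tokens — token boundaries rarely line up neatly with words.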

👁️Context Window: The AI's Field of Vision

A context window is the maximum number of tokens an AI model can process at once.

Imagine ChatGPT's prompt window or chat session screen: if the context window holds 2,048 tokens, the AI can only "see" and process text within that limit. Exceed it, and earlier text falls out of view (or the request errors out), hurting the coherence and continuity of the generated text.
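Here's a minimal sketch of how an app might keep a conversation inside the window. It assumes each message's token count is already known; a real app would count tokens with the model's own tokenizer.

```python
# Keep only the most recent messages that fit inside the context window.
# Token counts are assumed pre-computed for this illustration.
def trim_to_window(messages, token_counts, window=2048):
    kept, used = [], 0
    # Walk backward from the newest message, keeping whatever fits.
    for msg, count in zip(reversed(messages), reversed(token_counts)):
        if used + count > window:
            break
        kept.append(msg)
        used += count
    kept.reverse()
    return kept, used

history = ["old question", "old answer", "new question"]
counts = [1500, 1000, 400]
kept, used = trim_to_window(history, counts, window=2048)
print(kept, used)  # ['old answer', 'new question'] 1400
```

This is why long chat sessions seem to "forget" their beginnings: the oldest messages are the first to be dropped when the window fills up.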

🤑The Price of Tokens: AI's Hidden Costs

You might be surprised to learn that tokens are the basis of many AI providers' pricing models.

Processing more tokens requires more computation, which increases the time, resources, and costs needed to generate a response. This makes understanding tokens essential for cost management, troubleshooting, quality improvement, and informed decision-making.

💰 Token Economics: A Quick Breakdown

How much do tokens cost?
OpenAI publishes the cost of tokens on its website.

Prices are listed separately for GPT-4 and for GPT-3.5 Turbo. (Note: GPT-3.5 Turbo is not the free version of ChatGPT, which runs GPT-3.5 legacy; Turbo is the faster variant that paid subscribers get access to.)

Let's say the GPT-4 model with an 8K context window charges $0.03 per 1,000 tokens for prompts and $0.06 per 1,000 tokens for results. If we assume a chat session maximizes the context window, we'd have:

  • Prompts (20%): 1,600 tokens

  • Results (80%): 6,400 tokens

  • Cost of prompts: 1,600 tokens x $0.03/1,000 tokens = $0.048

  • Cost of results: 6,400 tokens x $0.06/1,000 tokens = $0.384

  • Total cost per session: $0.048 + $0.384 = $0.432 (roughly $0.43)
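The arithmetic above can be wrapped in a few lines of Python. The default rates are the hypothetical GPT-4 8K prices used in this example, not a live price feed:

```python
# Per-session cost from token counts, at USD-per-1,000-token rates.
# Default rates mirror the GPT-4 8K example above (assumed, not live).
def session_cost(prompt_tokens, result_tokens,
                 prompt_rate=0.03, result_rate=0.06):
    return (prompt_tokens / 1000) * prompt_rate \
         + (result_tokens / 1000) * result_rate

cost = session_cost(1600, 6400)
print(f"${cost:.3f} per session")  # $0.432 per session
```

Swap in your own token counts and current published rates to estimate what a given workload would cost.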

Now $0.43 may seem like a bargain to you. And perhaps it is; but aggregate it across all sessions. If a business using GPT-4 runs hundreds or thousands of sessions per day, the costs add up quickly.

Some estimate that there are 10 million ChatGPT sessions per day; at roughly $0.43 each, that would be about $4.3 million daily in token fees. It's easy to see how OpenAI's revenue could reach $1 billion by 2024. Keep in mind, this is a simplified example, not a precise financial projection.

🥷Tokens: A Key to AI Mastery

As AI enthusiasts, understanding tokens empowers us to make the most of AI models like ChatGPT. By appreciating tokens and their role in AI, we can optimize usage, manage costs, troubleshoot issues, improve response quality, and make informed decisions when choosing AI services. Embrace the magic of tokens and unlock the full potential of AI!

Here's a table of key terms, definitions, and other relevant information related to tokens that can be used as a quick reference guide:

Use Case: 

Picture this: you're in a high-stakes meeting or on a critical conference call when suddenly, a colleague drops a complex concept like a hot potato into the conversation.

Your heart races, sweat forms on your brow, and you scramble to comprehend this perplexing idea in record time. Fear not, dear friends!

This prompt is your intellectual superhero, swooping in to save the day.

No need to covertly Google in a panic; our trusty guide delivers the perfect blend of basic and advanced knowledge, all wrapped up in a speedy, easy-to-digest package. Say goodbye to frantic searches and hello to smooth, savvy comprehension.

Just copy-n-paste this bad boy into an email. Send it to yourself with the subject “super prompt” and it will also be a quick inbox search away!

▶︎▶︎PROMPT

You are a Professor and Scientist who specializes in applying lean six sigma to legal services.

Explain [Topic/Subject] in 3 ways:
1 - Explain it in the simplest terms possible, as though you are explaining this to a complete beginner.
2 - Explain it using a set of detailed, ordinal steps (use 1, 2, 3, etc) or non-ordinal principles (use a, b, c, etc), depending on the most effective way to explain it.
3 - Separately outline some of the basic applications & some of the advanced applications for this topic.

▼▼RESPONSES

I am providing examples in both ChatGPT and GPT-4 so you can see the difference. Some of you only use the free version while others use the paid (GPT-4).

Topic #1: Lean Six Sigma for Legal Services

Here is the link to the ChatGPT response.
Here is the link to the GPT-4 response.

Topic #2: Business Design

Here is the link to the ChatGPT response.
Here is the link to the GPT-4 response.

Topic #3: Legal Operations

Here is the link to the ChatGPT response.
Here is the link to the GPT-4 response.

Or you could just ask the person what the concept or phrase means. 🤪

News you can Use: 

AI Overlords? Big Tech's Power Play Prompts Calls for Reinforcements

As Big Tech's grip on AI tightens, researchers at NYU call for increased scrutiny and regulation to curb the concentration of economic and political power. With AI development depending heavily on resources controlled by tech giants, the report emphasizes the need for strong competition and privacy regulations. Recommendations include data minimization, antitrust law enforcement, and regulations for large-scale AI models. The report argues that public interests, not tech industries, should define AI's future, urging regulators and the public to challenge the status quo and confront the tech industry's power concentration.

News you can Lose: 

Scary headline is a big nothing-burger

In The Meme-time: 

That's a wrap for today. Stay thirsty & see ya next time! If you want more, be sure to follow me on Twitter and LinkedIn.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.