
239 | šŸ½ļø šŸ”Œ AI loves nuclear power

Brainyacts #239

It's Tuesday. Last Friday I opened with this clip. While it was suspected at the time, it has now been confirmed that the Tesla robots were being remotely controlled by humans, including their voices. Yes, this is disappointing, but the stated reason was to demonstrate future capabilities. They should have been a bit more transparent about it. But like most things in AI right now, leading with the promise rather than the reality is what gets attention!

Onward šŸ‘‡

In todayā€™s Brainyacts:

  1. AI and Energy Consumption

  2. Googleā€™s next PDF-to-Podcast tools (not NotebookLM)

  3. OpenAI studies its own bias and other AI model news

  4. Expert witness uses ChatGPT - judge not amused and more news you can use

    šŸ‘‹ to all subscribers!

To read previous editions, click here.

Lead Memo

šŸ”Œ šŸŖ« AI and Energy Consumption (going nuclear) by the year 2030

ā˜¢ļø Today, Google announced it was ordering 6 to 7 new nuclear plants in order to powers its data centers.

šŸ”Œ OpenAI strikes a deal for its own data center powered by renewable energy.

ā˜¢ļø Last month Microsoft relit Three Mile Islandā€™s nuclear plant in order to power Microsoftā€™s AI training demands.

The role of power consumption in training AI models is an increasingly important topic as the development of more advanced AI systems continues to accelerate. The creation of powerful models, such as those in the GPT series, requires vast amounts of computational resources, which in turn consume significant energy. As AI technology evolves, the rate of power usage and the efficiency of hardware become critical factors in determining the feasibility of training larger and more complex models.

At the core of AI training is the need for large-scale computation, measured in floating-point operations (FLOP). A FLOP is a single arithmetic operation on floating-point numbers (numbers with decimals), so the total FLOP of a training run counts how much math had to be done. The related metric, FLOP per second (FLOP/s), measures how fast the hardware can do it, much as a car's engine might be measured in horsepower: FLOP/s is a computer's "brain power" for crunching numbers. Say you're doing a simple calculation like adding 2.5 + 2.5; that's just one floating-point operation. AI models work through vastly larger datasets, so think of the system performing this kind of calculation billions of times every second. And the more FLOP a model's training requires, the more computational power, and thus energy, it consumes.
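To make the scale concrete, a widely cited rule of thumb estimates total training compute as roughly 6 × (parameters) × (training tokens). The sketch below is an illustration under that assumption, plugging in GPT-3-scale numbers (175B parameters, ~300B tokens) purely as an example:

```python
# Back-of-envelope training compute using the common "6 * N * D" rule of thumb:
# total training FLOP ≈ 6 × (number of parameters) × (number of training tokens).
def training_flop(params: float, tokens: float) -> float:
    return 6 * params * tokens

# Illustrative GPT-3-scale run: 175B parameters trained on ~300B tokens.
flop = training_flop(175e9, 300e9)
print(f"{flop:.2e}")  # → 3.15e+23
```

Real runs vary with architecture and training setup, but this kind of estimate is why training compute is quoted in scientific notation rather than everyday units.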

Over the past few years, AI labs have been scaling up their training resources at an astonishing rate, with the amount of compute used in training expanding at approximately 4 times per year. This rapid growth in computational power outpaces even some of the most significant technological advancements in recent history, including the adoption of mobile phones and the expansion of solar energy capacity.
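The compounding effect of that roughly 4x annual growth can be seen with a few lines of arithmetic (a sketch that simply takes the 4x figure at face value):

```python
# If frontier training compute grows ~4x per year, the multiplier compounds fast.
GROWTH_PER_YEAR = 4

for years in range(1, 7):
    print(f"after {years} year(s): {GROWTH_PER_YEAR ** years:,}x")
# After 6 years of 4x annual growth, a frontier run uses 4,096x more compute.
```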

One stark example of how power consumption impacts AI training can be seen by comparing GPT-2, released in 2019, and GPT-4, which came out in 2023. GPT-2 was a major breakthrough at the time, capable of generating coherent text, but its abilities were limited compared to what we expect from modern models. By contrast, GPT-4 is far more sophisticated, capable of solving complex problems, engaging in nuanced conversations, and generating text that rivals human reasoning in certain areas. This leap in capability is closely tied to the scale of the computational power used in training GPT-4, which is orders of magnitude larger than what was used for GPT-2.

Training GPT-2 required a fraction of the resources used for GPT-4. The increased scale of training GPT-4 necessitated not only more computational hardwareā€”like GPUsā€”but also vastly more electricity to power those systems. AI models like GPT-4 often involve billions or even trillions of parameters, requiring longer training times and more energy to compute the vast amounts of data needed to achieve high levels of accuracy. In fact, GPT-4 likely consumed thousands of times more FLOP than GPT-2 during training, which demonstrates how important the availability of energy is to advancing AI capabilities.

Looking ahead, by 2030 the largest AI training runs may reach up to 2e29 FLOP, roughly 10,000 times the compute estimated for GPT-4.
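To put 2e29 in perspective, here is simple arithmetic against an outside estimate of GPT-4's training compute. OpenAI has not disclosed the actual figure, so the ~2e25 FLOP value below is an assumption for illustration:

```python
# Projected 2030 frontier training run vs. an outside estimate of GPT-4's compute.
projected_2030_flop = 2e29
gpt4_flop_estimate = 2e25   # assumption, not an official figure
ratio = projected_2030_flop / gpt4_flop_estimate
print(f"{ratio:,.0f}x")  # → 10,000x
```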

Meeting the power demands to support such large training runs will be one of the primary challenges. It is expected that by 2030, AI models may require data centers capable of drawing between 1 and 5 gigawatts (GW) of powerā€”comparable to the energy consumption of entire cities.
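A back-of-envelope sketch shows how a run of that size translates into gigawatts. Every number here is an assumption chosen for illustration: the 2e29 FLOP projection above, a hypothetical 2030-era effective hardware efficiency, and an assumed 100-day training run:

```python
# Rough average-power estimate for a hypothetical 2030 training run.
# All inputs are illustrative assumptions, not measurements.
total_flop = 2e29        # projected frontier training run
flop_per_joule = 1e13    # hypothetical 2030-era effective hardware efficiency
run_days = 100           # assumed training duration

energy_joules = total_flop / flop_per_joule          # total energy for the run
avg_power_watts = energy_joules / (run_days * 86_400)  # spread over the run
print(f"{avg_power_watts / 1e9:.1f} GW")  # → 2.3 GW
```

Under these assumptions the average draw lands inside the 1 to 5 GW range cited above; different efficiency or duration assumptions shift it considerably.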

Power constraints are one of the key factors that could limit the continued expansion of AI model training. To manage these demands, companies are exploring various solutions, such as on-site power generation (including the use of nuclear and solar energy) and geographically distributed training across multiple data centers to spread out the energy load. However, as the scale of AI models continues to grow, the infrastructure needed to support them must expand in parallel. Power supply, bandwidth, and chip manufacturing capabilities will need to evolve to keep pace with the increasing computational requirements.

Despite these challenges, improvements in hardware efficiency are expected to offset some of this power consumption. GPUs, the hardware typically used in AI training, are becoming more power-efficient, meaning they can perform more calculations per watt of electricity consumed. Additionally, techniques like 8-bit precision (FP8), which lets AI models train using less power without compromising accuracy, are likely to become standard by 2030.

There are other constraints and ingredients to scaling AI training. For those, check out this article.

Spotlight

šŸŽ™ļøšŸ“‘ Googleā€™s Illuminate turns research papers in to micro audiobooks/podcasts

This video gives a quick overview of Google's latest experimental tool called Illuminate, which transforms research papers into audio content. It's similar to Google's NotebookLM, but instead of turning documents into a podcast-like experience, it creates more of an audiobook or "micro-audiobook" for research papers. The video walks viewers through how to use Illuminate, focusing on how it works with academic PDFs.

Key Points:

ā€¢ Illuminate by Google: A new tool that converts research papers into audio files, similar to an audiobook. Unlike NotebookLM, this focuses specifically on PDFs, mostly from research papers.

ā€¢ How to use it: Upload a PDF from a specific library, and Illuminate will create an audio version of it within a few minutes. Itā€™s currently designed for research papers only.

ā€¢ Great for researchers: It provides a quicker way to digest dense academic papers. For those who frequently read academic or legal documents, this can be a helpful tool to get a high-level summary or refresh their understanding.

ā€¢ Transcript feature: Users can also view, copy, and paste the transcript of the audio. This is especially helpful for researchers who need to reference or pull quotes from the material.

ā€¢ Google Labs: The tool is still in the experimental phase, available through Google Labs, where other cool AI-driven tools can be found.

ā€¢ Future potential: While itā€™s currently limited to research papers, the speaker hints that this feature might soon be available for a broader range of documents.

Itā€™s a simple and easy-to-use tool for anyone whoā€™s diving into academic work and wants a new way to consume research.

AI Model Notables

ā–ŗ OpenAI releases study on whether using your name in your prompts can lead to responses that reflect harmful stereotypes.

If you want to listen to a short podcast about this paper, I asked NotebookLM to make one. Give it a listen:

ā–ŗ Full list of 39 US AI startups that have raised $100M or more in 2024

ā–ŗ OpenAI versus Open AI - does the blank space matter?

ā–ŗ Remember SocialAI, the social media app that creates a world of only AI-generated users for you to interact with? Well, the founder is joining Meta. This will be interesting.

ā–ŗ Did you see the Northern Lights recently? If not, Meta has some AI-generated images for you, and boy did they p!ss people off!

ā–ŗ Amazonā€™s new AI guides can help shoppers find what they need

News You Can Use:

āž­ Expert witness used Copilot to make up fake damages, irking judge.

āž­ The New York Times sent Perplexity a cease-and-desist letter accusing the AI search startup of wrongfully using its copyrighted content.

āž­ Beware of this sophisticated Gmail AI-driven scam that is going around

āž­ In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvaniaā€™s Lehigh University found something stark: there was clear racial bias at play.

āž­ A groundbreaking study reveals how artificial intelligence has identified over 160,000 new RNA virus species, dramatically expanding our understanding of viral diversity.

Was this newsletter useful? Help me to improve!


Who is the author, Josh Kubicki?

Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.

DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.