239 | AI loves nuclear power
Brainyacts #239
It's Tuesday. Last Friday I opened with this clip. While suspected at the time, it has been confirmed that the Tesla robots were being remotely controlled by humans, including their voices. Yes, this is disappointing, but the stated reason was to demonstrate future capabilities. They should have been more transparent, though. But like most things in AI right now, leading with the promise rather than the reality gets the attention!
Onward!
In today's Brainyacts:
AI and Energy Consumption
Google's newest PDF-to-podcast tool (not NotebookLM)
OpenAI studies its own bias and other AI model news
Expert witness uses AI to calculate damages, judge not amused, and more news you can use
Welcome to all subscribers!
To read previous editions, click here.
Lead Memo
AI and Energy Consumption (going nuclear) by 2030
• Today, Google announced it was ordering 6 to 7 new nuclear plants to power its data centers.
• OpenAI strikes a deal for its own data center powered by renewable energy.
• Last month, Microsoft struck a deal to restart Three Mile Island's nuclear plant to power its AI training demands.
The role of power consumption in training AI models is an increasingly important topic as the development of more advanced AI systems continues to accelerate. The creation of powerful models, such as those in the GPT series, requires vast amounts of computational resources, which in turn consume significant energy. As AI technology evolves, the rate of power usage and the efficiency of hardware become critical factors in determining the feasibility of training larger and more complex models.
At the core of AI training is the need for large-scale computation, measured in floating-point operations (FLOP). A FLOP is a single arithmetic operation on floating-point numbers (numbers with decimals), and the rate at which a computer performs them, FLOP per second, measures its "brain power" the way horsepower measures a car's engine. Say you're doing a simple calculation like adding 2.0 + 2.0: in AI training, that's just one floating-point operation. But AI models work with far larger numbers and datasets, so the system must perform this kind of calculation millions or billions of times every second. And the more FLOP a model requires, the more computational power, and thus energy, it consumes.
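To make the scale concrete, here is a minimal back-of-envelope sketch using the common rule of thumb that total training compute is roughly 6 × parameters × training tokens. The parameter and token counts below are illustrative assumptions, not official figures for GPT-2 or GPT-4:

```python
# Back-of-envelope training-compute estimate using the common
# FLOP ~= 6 * parameters * training tokens rule of thumb.
# Parameter/token counts below are illustrative assumptions only.
def training_flop(parameters: float, tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6 * parameters * tokens

small_model = training_flop(1.5e9, 40e9)    # assumed GPT-2-scale run
large_model = training_flop(1.8e12, 13e12)  # assumed frontier-scale run

print(f"{small_model:.1e} FLOP vs {large_model:.1e} FLOP")
print(f"ratio: {large_model / small_model:,.0f}x")
```

Even with rough inputs, the gap between generations comes out at several orders of magnitude, which is the point of the comparison that follows.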
Over the past few years, AI labs have been scaling up their training resources at an astonishing rate, with the amount of compute used in training growing roughly 4x per year. This rapid growth outpaces even some of the most significant technology adoption curves in recent history, including mobile phones and the expansion of solar energy capacity.
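A quick compounding sketch shows what 4x-per-year growth implies over a few years. The 2e25 FLOP starting budget for 2023 is an assumption chosen for illustration, not a reported training figure:

```python
# Sketch of ~4x-per-year compute growth. The 2e25 FLOP base for 2023
# is an illustrative assumption, not a reported training budget.
def projected_flop(base_flop: float, base_year: int, target_year: int,
                   growth: float = 4.0) -> float:
    """Project training compute forward assuming fixed yearly growth."""
    return base_flop * growth ** (target_year - base_year)

for year in range(2023, 2031):
    print(year, f"{projected_flop(2e25, 2023, year):.1e} FLOP")
```

Seven years of 4x growth multiplies the starting budget by 4^7 ≈ 16,000, which is how a mid-2020s training run turns into a late-decade run in the 1e29 range.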
One stark example of how power consumption impacts AI training can be seen by comparing GPT-2, released in 2019, and GPT-4, which came out in 2023. GPT-2 was a major breakthrough at the time, capable of generating coherent text, but its abilities were limited compared to what we expect from modern models. By contrast, GPT-4 is far more sophisticated, capable of solving complex problems, engaging in nuanced conversations, and generating text that rivals human reasoning in certain areas. This leap in capability is closely tied to the scale of the computational power used in training GPT-4, which is orders of magnitude larger than what was used for GPT-2.
Training GPT-2 required a fraction of the resources used for GPT-4. The increased scale of training GPT-4 necessitated not only more computational hardware, like GPUs, but also vastly more electricity to power those systems. AI models like GPT-4 often involve billions or even trillions of parameters, requiring longer training times and more energy to process the vast amounts of data needed to achieve high levels of accuracy. In fact, GPT-4 likely consumed thousands of times more FLOP than GPT-2 during training, which demonstrates how important the availability of energy is to advancing AI capabilities.
Looking ahead, by 2030 the largest AI training runs may reach up to 2e29 FLOP, roughly 10,000 times the estimated compute used to train GPT-4.
Meeting the power demands to support such large training runs will be one of the primary challenges. It is expected that by 2030, AI models may require data centers capable of drawing between 1 and 5 gigawatts (GW) of power, comparable to the energy consumption of entire cities.
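As a sanity check on the gigawatt figure, here is a rough sketch converting a hypothetical 2e29-FLOP training run into average power draw. The hardware efficiency and run length are illustrative assumptions, not measured values:

```python
# Rough sanity check: average power draw for a hypothetical 2e29-FLOP
# training run. Efficiency and run length are illustrative assumptions.
FLOP_TOTAL = 2e29        # assumed total training compute
FLOP_PER_JOULE = 5e12    # assumed effective hardware efficiency
TRAINING_DAYS = 150      # assumed run length

energy_joules = FLOP_TOTAL / FLOP_PER_JOULE
seconds = TRAINING_DAYS * 24 * 3600
avg_power_watts = energy_joules / seconds

print(f"energy: {energy_joules:.1e} J")
print(f"average power: {avg_power_watts / 1e9:.1f} GW")
```

With these assumptions the run averages roughly 3 GW, inside the 1 to 5 GW range above; halving the assumed efficiency or the run length doubles the required power, which is why efficiency gains matter so much.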
Power constraints are one of the key factors that could limit the continued expansion of AI model training. To manage these demands, companies are exploring various solutions, such as on-site power generation (including the use of nuclear and solar energy) and geographically distributed training across multiple data centers to spread out the energy load. However, as the scale of AI models continues to grow, the infrastructure needed to support them must expand in parallel. Power supply, bandwidth, and chip manufacturing capabilities will need to evolve to keep pace with the increasing computational requirements.
Despite these challenges, improvements in hardware efficiency are expected to offset some of the growth in power consumption. GPUs, the hardware typically used in AI training, are becoming more power-efficient, meaning they can perform more calculations per watt of electricity consumed. Additionally, techniques like 8-bit precision (FP8), which allows AI models to train using less power without compromising accuracy, are likely to become standard by 2030.
There are other constraints and ingredients to scaling AI training. For those, check out this article.
Spotlight
Google's Illuminate turns research papers into micro-audiobooks/podcasts
This video gives a quick overview of Google's latest experimental tool, Illuminate, which transforms research papers into audio content. It's similar to Google's NotebookLM, but instead of turning documents into a podcast-like experience, it creates more of an audiobook, a "micro-audiobook" for research papers. The video walks viewers through how to use Illuminate, focusing on how it works with academic PDFs.
Key Points:
• Illuminate by Google: A new tool that converts research papers into audio files, similar to an audiobook. Unlike NotebookLM, it focuses specifically on PDFs, mostly from research papers.
• How to use it: Upload a PDF from a specific library, and Illuminate will create an audio version of it within a few minutes. It's currently designed for research papers only.
• Great for researchers: It provides a quicker way to digest dense academic papers. For those who frequently read academic or legal documents, this can be a helpful tool to get a high-level summary or refresh their understanding.
• Transcript feature: Users can also view, copy, and paste the transcript of the audio. This is especially helpful for researchers who need to reference or pull quotes from the material.
• Google Labs: The tool is still in the experimental phase, available through Google Labs, where other experimental AI-driven tools can be found.
• Future potential: While it's currently limited to research papers, the speaker hints that this feature might soon support a broader range of documents.
It's a simple, easy-to-use tool for anyone diving into academic work who wants a new way to consume research.
AI Model Notables
► OpenAI releases study on whether using your name in your prompts can lead to responses that reflect harmful stereotypes.
If you want to listen to a short podcast about this paper, I asked NotebookLM to make one. Give it a listen.
► Full list of 39 US AI startups that have raised $100M or more in 2024
► OpenAI versus Open AI: does the blank space matter?
► Remember SocialAI, the social media app that creates a world of only AI-generated users for you to interact with? Well, the founder is joining Meta. This will be interesting.
► Did you see the Northern Lights recently? If not, Meta has some AI-generated images for you, and boy did they p!ss people off!
► Amazon's new AI guides can help shoppers find what they need
News You Can Use:
→ AI at the center of Google's defense against DOJ antitrust charges.
→ Expert witness used Copilot to make up fake damages, irking judge.
→ The New York Times sent Perplexity a cease-and-desist letter accusing the AI search startup of wrongfully using its copyrighted content.
→ Beware of this sophisticated AI-driven Gmail scam that is going around.
→ In a recent study evaluating how chatbots make loan suggestions for mortgage applications, researchers at Pennsylvania's Lehigh University found something stark: there was clear racial bias at play.
→ A groundbreaking study reveals how artificial intelligence has identified over 160,000 new RNA virus species, dramatically expanding our understanding of viral diversity.
Was this newsletter useful? Help me improve! With your feedback, I can improve the letter. Click on a link to vote:
Who is the author, Josh Kubicki?
Some of you know me. Others do not. Here is a short intro. I am a lawyer, entrepreneur, and teacher. I have transformed legal practices and built multi-million-dollar businesses. Not a theorist, I am an applied researcher and former Chief Strategy Officer, recognized by Fast Company and Bloomberg Law for my unique work. Through this newsletter, I offer you pragmatic insights into leveraging AI to inform and improve your daily life in legal services.
DISCLAIMER: None of this is legal advice. This newsletter is strictly educational and is not legal advice or a solicitation to buy or sell any assets or to make any legal decisions. Please be careful and do your own research.