Is there any way to make it use less as it gets more advanced, or will there be huge power plants just dedicated to AI all over the world soon?
It’s mostly the training/machine learning that is power hungry.
AI is essentially a giant equation that is generated via machine learning. You give it a prompt along with an expected answer, the prompt gets run through the equation, and you get an output. That output gets an error score based on how far it is from the expected answer. The variables of the equation are then adjusted so that the prompt will lead to a better output (one with a lower error).
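To make that concrete, here's a toy Python sketch of that run/score/adjust loop, using a made-up one-variable equation (y = w * x) and a tiny made-up dataset. Real models do the same basic thing with billions of variables.

```python
# Toy sketch of the training loop: run the prompt through the equation,
# score the output, then nudge the variable to lower the error.
# The model (y = w * x) and the data here are made up for illustration.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (prompt, expected answer) pairs
w = 0.0              # the single "variable" of our equation
learning_rate = 0.01

for step in range(1000):
    for x, expected in data:
        output = w * x                  # run the prompt through the equation
        error = output - expected       # how far off the output is
        w -= learning_rate * error * x  # adjust the variable to reduce the error

print(w)  # settles near 2.0, the value that makes the error smallest
```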
The issue is that current AI models have billions of variables and are trained on billions of prompts. Each variable gets tuned based on each prompt, so that's billions times billions of calculations. It takes a while. AI researchers are of course looking for ways to speed up this process, but so far it's mostly come down to dividing up these calculations over millions of computers. Powering millions of computers is where the energy costs come from.
Unless AI models can be trained in a way that doesn’t require running a billion squared calculations, they’re only going to get more power hungry.
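For a rough sense of that scale, here's a back-of-envelope sketch (the counts below are illustrative round numbers, not any particular model):

```python
# Illustrative round numbers only, not any specific model.
variables = 2e9                  # billions of variables
examples = 1e9                   # billions of training prompts
updates = variables * examples   # every variable touched for every prompt
print(f"{updates:.0e} variable updates")  # ~2e+18, i.e. billions times billions
```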
Would AI inference, or training, be better suited to a quantum computer? I recall those not being great at conventional math, but massively accelerating computations that sounded similar to machine learning.
My understanding of quantum computers is that they're great at brute-forcing stuff, but machine learning is just a lot of calculations, not brute forcing.
If you want to know the square root of 25, you don’t need to brute force it. There’s a direct way to calculate the answer and traditional computers can do it just fine. It’s still going to take a long time if you need to calculate the square root of a billion numbers.
That’s basically machine learning. The individual calculations aren’t difficult, there’s just a lot to calculate. However, if you have 2 computers doing the calculations, it’ll take half the time. It’ll take even less time if you fill a data center with a cluster of 100,000 GPUs.
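As a toy illustration of that, here's a Python sketch that splits a pile of square roots across a few worker processes; the count of numbers and workers is arbitrary:

```python
# Splitting lots of easy, identical calculations across workers:
# more workers, less wall-clock time (up to a point).
import math
from concurrent.futures import ProcessPoolExecutor

def chunk_sqrts(numbers):
    return [math.sqrt(n) for n in numbers]

if __name__ == "__main__":
    numbers = list(range(1, 1_000_001))
    chunks = [numbers[i::4] for i in range(4)]   # divide the work 4 ways

    with ProcessPoolExecutor(max_workers=4) as pool:
        results = pool.map(chunk_sqrts, chunks)  # each worker does one chunk

    total = sum(len(r) for r in results)
    print(total)  # 1,000,000 square roots computed
```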
This is a pretty great explanation/simplification.
I’ll add that because the calculations are mostly floating-point math done massively in parallel, graphics chips do most of the heavy processing; they were already designed with exactly that kind of pipeline in mind for video games.
That means there’s a lot of power hungry graphics chips running in these data centers. It’s also why Nvidia stock is so insane.
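For a sense of what that floating-point work looks like, here's a rough NumPy sketch (sizes are arbitrary): one layer of a model is basically a big matrix multiply over floats, which is exactly the kind of operation GPUs were built to do in parallel.

```python
# The dominant operation: multiplying big matrices of floating-point numbers.
# Sizes are arbitrary; real models chain thousands of these per prompt.
import numpy as np

activations = np.random.rand(1024, 4096).astype(np.float32)  # a batch of inputs
weights = np.random.rand(4096, 4096).astype(np.float32)      # one layer's variables

outputs = activations @ weights   # one layer = one big float matrix multiply
print(outputs.shape)              # (1024, 4096)
```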
It’s kinda interesting how the most power-consuming uses of graphics chips — crypto and AI/ML — have nothing to do with graphics.
(Except for AI-generated graphics, I suppose)