In addition to hardware costs, power generation, power delivery, and cooling will be among the main constraints for massive AI data centers in the coming years. X, xAI, SpaceX, and Tesla CEO Elon Musk argues that over the next four to five years, running large-scale AI systems in orbit could become far more economical than doing the same work on Earth.

That’s primarily due to ‘free’ solar power and relatively easy cooling. Jensen Huang agrees about the challenges facing gigawatt- or terawatt-class AI data centers, but says that space data centers are a dream for now.

A terawatt-class AI data center is impossible on Earth

The U.S. generates around 490 GW of continuous power output these days (note that Musk says ‘per year,’ but gigawatts measure power at a given moment, not energy per year; what he means is average continuous output), so devoting the lion’s share of it to AI is impossible. Anything approaching a terawatt of steady AI-related demand is unattainable within Earth-based grids, according to Musk.
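To see where that 490 GW figure comes from, here is a back-of-the-envelope sketch. It assumes annual U.S. electricity generation of roughly 4,300 TWh, a commonly cited ballpark figure (not a number from this article), and converts it to average continuous power:

```python
# Hypothetical sanity check: convert annual energy (TWh/year) into
# average continuous power output (GW). The 4,300 TWh input is an
# assumed ballpark for annual U.S. electricity generation.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def average_power_gw(annual_twh: float) -> float:
    """Average continuous power (GW) implied by annual energy (TWh)."""
    return annual_twh * 1000 / HOURS_PER_YEAR  # 1 TWh = 1,000 GWh

print(round(average_power_gw(4300)))  # ≈ 491 GW, matching the figure above
```

This also shows why ‘490 GW per year’ is a unit mix-up: the ‘per year’ is already baked into the conversion from energy to power.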

“There is no way you are building power plants at that level: if you take it up to, say, [1 TW of continuous power], impossible,” said Musk. “You have to do that in space. There is just no way to do a terawatt [of continuous power] on Earth. In space, you have got continuous solar; you actually do not need batteries because it is always sunny in space, and the solar panels actually become cheaper because you do not need glass or framing, and the cooling is just radiative.”

While Musk may be right that generating enough power for AI on Earth is a problem, and that space could be a better fit for massive AI compute deployments, many challenges remain with putting AI clusters into orbit, which is why Jensen Huang calls it a dream for now.

“That’s the dream,” Huang exclaimed.


Follow Tom’s Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.