Elon Musk: 230K AI GPUs train Grok at Colossus 1, 550K GB200s and GB300s at Colossus 2 coming soon

Elon Musk says that 230,000+ AI GPUs are training Grok at xAI's Colossus 1, while Colossus 2's first batch of 550,000 GB200s and GB300s is coming online soon.

TL;DR: Elon Musk's xAI is bringing Colossus 2 online with a first batch of 550,000 NVIDIA GB200 and GB300 AI GPUs, part of a plan to reach 50 million H100-equivalent GPUs and 200 exaFLOPs of AI compute within five years, a hardware build-out estimated at $1.5 trillion to $2 trillion.

The AI industry is reportedly preparing to spend "trillions of dollars" securing AI hardware, with Elon Musk's xAI planning to acquire the compute power equivalent to 50 million NVIDIA H100 AI GPUs.

In a new post on X, Musk said: "230k GPUs, including 30k GB200s, are operational for training Grok @xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, start going online in a few weeks. As Jensen Huang has stated, @xAI is unmatched in speed. It's not even close".

xAI's next massive AI supercomputer cluster is called Colossus 2, and it'll begin coming online in the coming weeks, powered by NVIDIA GB200 and GB300 AI servers with a total count of 550,000 GPUs. On these numbers alone, that first batch implies xAI has spent tens of billions of dollars getting Colossus 2 online, which is a colossal (pun intended) amount of money.
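As a rough sanity check on that figure, here's a minimal back-of-envelope sketch. The per-GPU price is an assumption for illustration only; NVIDIA does not publish list prices for GB200/GB300-class parts, and estimates vary widely.

```python
# Back-of-envelope cost estimate for Colossus 2's first batch of GPUs.
GPU_COUNT = 550_000
ASSUMED_PRICE_PER_GPU = 40_000  # USD per GPU -- an assumption, not a quoted price

total_usd = GPU_COUNT * ASSUMED_PRICE_PER_GPU
print(f"~${total_usd / 1e9:.0f} billion")  # prints "~$22 billion"
```

Even doubling the assumed unit price keeps the total in the tens of billions, well short of a trillion dollars for this batch alone.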

Musk says the goal is to get 50 million H100-equivalents of AI compute power online in the next five years. Depending on pricing -- and we're sure xAI is getting a discount from NVIDIA given the scale of its AI hardware orders -- that would be worth somewhere between $1.5 trillion and $2 trillion. As for the AI compute power, we're talking about 200 exaFLOPs, which is 20x more compute power than the world's fastest supercomputer.
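The cost range above can be turned around into an implied per-GPU price. This sketch derives that price purely from the article's figures, not from any published NVIDIA pricing.

```python
# Implied per-H100-equivalent price from the article's 5-year target.
H100_EQUIVALENTS = 50_000_000
COST_LOW, COST_HIGH = 1.5e12, 2.0e12  # USD range cited in the article

price_low = COST_LOW / H100_EQUIVALENTS    # lower-bound price per GPU
price_high = COST_HIGH / H100_EQUIVALENTS  # upper-bound price per GPU
print(f"${price_low:,.0f} to ${price_high:,.0f} per H100-equivalent")
# prints "$30,000 to $40,000 per H100-equivalent"
```

That $30K-$40K per unit is in the same ballpark as commonly reported H100 street prices, which is what makes the trillion-dollar range plausible at 50 million units.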

Musk isn't the only one salivating over AI hardware: OpenAI boss Sam Altman has announced plans to secure over 100 million AI chips in the coming years, while Meta boss Mark Zuckerberg is spending big money getting gigawatt-scale AI clusters up and operational.