The race to build the world's most powerful artificial intelligence system has heated up, and it shows no sign of slowing, with multiple tech companies flocking to NVIDIA for the data center GPUs used to train AI models.
Only a few months ago, Musk confirmed that xAI was building a massive AI factory with Dell and Supermicro, and these huge server farms will house NVIDIA GPUs used to train xAI's model, Grok. To catch up to the competition and make Grok a viable AI solution that is as good as, if not better than, leading models from OpenAI and Microsoft, Musk plans to dedicate 100,000 Hopper-based GPUs to a single training system.
The SpaceX founder explained in a post on X that xAI contracted 24,000 H100 (Hopper) GPUs from Oracle, which are being used to train Grok 2. However, Musk said xAI will build its 100,000 H100 system itself, as that will result in the "fastest time to completion". The reason for bringing the 100,000 GPU system in-house, Musk wrote, is that the company's "fundamental competitiveness depends on being faster than any other AI company."
"Oracle is a great company and there is another company that shows promise also involved in that OpenAI GB200 cluster, but, when our fate depends on being the fastest by far, we must have our own hands on the steering wheel, rather than be a backseat driver," wrote Musk.
Musk has previously outlined plans to construct a 300,000 GPU system using NVIDIA's Blackwell chips, which, at current prices, would represent an investment of at least $9 billion.