Elon Musk has talked about training his next-gen Grok 3 AI chatbot, saying it will require an insane 100,000 NVIDIA H100 AI GPUs.
During a recent X Spaces chat, the SpaceX and Tesla boss said that Grok 2 used around 20,000 NVIDIA H100 AI GPUs to train, but that training the new Grok 3 will require a monster 100,000 separate NVIDIA H100 AI GPUs, a 5x jump in AI compute power.
Musk said that Grok 3 and the models beyond it will require 100,000 NVIDIA H100 AI GPUs, so we can expect Grok 4 to need an even more unimaginable amount of AI GPU compute power. NVIDIA has now announced its new Blackwell B200 AI GPU, which I'm sure Elon has been eyeing off; it will be pumped out later this year and flood the market in 2025 with brute-force AI GPU performance.
Earlier this year, Tesla said it would be spending billions of dollars on NVIDIA AI GPUs and AMD AI GPUs, so these numbers will change radically throughout the year as Tesla scoops up more AI silicon from NVIDIA. The recent $500 million investment into the Dojo supercomputer is "only equivalent" to 10,000 NVIDIA H100 AI GPUs, Musk said in January 2024, adding: "Tesla will spend more than that on NVIDIA hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point".
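To put those figures in perspective, here's a back-of-the-envelope sketch that derives an implied per-H100 price from Musk's "$500 million is only equivalent to 10,000 H100s" comparison, then scales it to the 100,000-GPU Grok 3 cluster. These are my own rough estimates based on the numbers quoted above, not figures from Musk or NVIDIA:

```python
# Back-of-the-envelope estimate (assumption: the $500M / 10,000 H100
# comparison roughly reflects per-GPU cost, ignoring networking, power,
# and datacenter build-out).

DOJO_INVESTMENT_USD = 500_000_000   # Dojo spend Musk cited in January 2024
H100_EQUIVALENT = 10_000            # H100 count he said that spend equals
GROK3_GPUS = 100_000                # H100s Musk says Grok 3 training needs

implied_price_per_h100 = DOJO_INVESTMENT_USD / H100_EQUIVALENT
grok3_hardware_cost = implied_price_per_h100 * GROK3_GPUS

print(f"Implied price per H100: ${implied_price_per_h100:,.0f}")
print(f"Implied Grok 3 GPU cost: ${grok3_hardware_cost:,.0f}")
# Implied price per H100: $50,000
# Implied Grok 3 GPU cost: $5,000,000,000
```

That $5 billion figure lines up neatly with Musk's "at least several billion dollars per year" table-stakes comment.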