Tesla's entire future is AI, so it makes sense that the Elon Musk-led company would be spending billions of dollars securing AI GPUs for its new Dojo AI supercomputer.
NVIDIA already commands an estimated 90% of the AI GPU market, but Tesla has said it will use not only NVIDIA's various AI GPUs but also AMD's, without specifying which of AMD's AI GPUs it would be purchasing. Tesla boss Elon Musk did state that the $500 million Tesla is readying is equivalent to an order of 10,000 NVIDIA H100 AI GPUs.
Elon Musk posted on X: "The governor is correct that this is a Dojo Supercomputer, but $500M, while obviously a large sum of money, is only equivalent to a 10K H100 system from NVIDIA. Tesla will spend more than that on NVIDIA hardware this year. The table stakes for being competitive in AI are at least several billion dollars per year at this point".
When the Tesla boss was asked if the company would be buying AI GPUs from AMD, he simply replied with "Yes".
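Musk's figures make for a quick back-of-envelope sanity check: if $500 million is "only equivalent to a 10K H100 system", that works out to roughly $50,000 per GPU at the system level. The numbers below are illustrative arithmetic from his post, not Tesla's actual pricing:

```python
# Back-of-envelope math from Musk's post: a $500M spend is roughly
# equivalent to a 10,000-GPU H100 system from NVIDIA.
total_spend_usd = 500_000_000   # Musk's quoted figure
h100_count = 10_000             # "a 10K H100 system"

cost_per_gpu = total_spend_usd / h100_count
print(f"~${cost_per_gpu:,.0f} per H100, at the system level")  # ~$50,000
```

That per-unit figure includes more than the GPU itself (networking, chassis, and the rest of the system), which is why it lands above typical standalone H100 card pricing.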
Tesla's future Dojo AI supercomputer is going to require a huge amount of AI GPU horsepower, with the company earmarking $500 million for the project at its New York Gigafactory. This is just the start: Tesla will keep spending on AI GPUs in the months ahead, with Musk saying the table stakes for competitive AI are "several billion dollars" per year.
NVIDIA's Hopper H100 AI GPU currently dominates the AI GPU space worldwide, with a beefed-up H200 arriving in the next few months, before its next-gen Blackwell B100 AI GPU is unleashed. Tesla has the money behind it and is already deep into AI through its Model 3, Model S, Model Y, and Model X electric vehicles and their driver-assistance systems.
AMD just launched its new Instinct MI300X AI accelerator, which packs a huge 192GB of HBM3 memory with a whopping 5.2TB/sec of memory bandwidth at its disposal. AMD has the memory-capacity upper hand for now, but NVIDIA remains far and away the champion of the AI GPU race... and that's just with the H100, let alone the H200 and then the B100 later this year.