Elon Musk's xAI startup has promised to expand its Colossus AI supercomputer to 10x its current computational power, with over 1 million GPUs in total... yes, 1,000,000+ AI GPUs.
xAI built Colossus in just 3 months earlier this year; the world's largest AI supercomputer features over 100,000 x NVIDIA AI GPUs operating at once, training Grok for X, and more. The expansion project has already started, with the first step being an increase in the size of the facility in Memphis, Tennessee.
NVIDIA, Dell, and Supermicro will establish operations in Memphis to support xAI's expansion of Colossus, while the Financial Times reports that the local chamber of commerce said it would establish an "xAI special operations team" to "provide round-the-clock concierge service to the company".
- Read more: Elon Musk has priority access to NVIDIA GB200 in January 2025, costs $1.08B
- Read more: Elon Musk's xAI to double Colossus AI supercomputer to 200K NVIDIA Hopper AI GPUs
The expansion to over 1 million AI GPUs isn't going to be cheap, not by a long shot, as the new B200 and GB200 chips are expensive, and Elon was just provided with priority access to NVIDIA GB200 AI chips in January 2025 at a cost of $1.08 billion. Scaling to 1 million+ AI GPUs would be orders of magnitude more expensive: tens of billions of dollars.
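For a rough sense of scale, the sketch below multiplies an assumed per-GPU price by a 1-million-GPU count; the $30,000 and $40,000 figures are illustrative assumptions, not prices reported by xAI or NVIDIA, but the result lands in the tens of billions of dollars for the chips alone.

```python
# Back-of-envelope estimate of raw GPU hardware cost for a 1M-GPU build-out.
# Per-unit prices are assumptions for illustration only; the article does
# not state what xAI pays per chip.

def total_gpu_cost(num_gpus: int, price_per_gpu_usd: float) -> float:
    """Return the raw GPU hardware cost in US dollars."""
    return num_gpus * price_per_gpu_usd

if __name__ == "__main__":
    num_gpus = 1_000_000
    # Assumed price range per B200/GB200-class accelerator (hypothetical).
    for price in (30_000, 40_000):
        cost = total_gpu_cost(num_gpus, price)
        print(f"{num_gpus:,} GPUs @ ${price:,} each = ~${cost / 1e9:.0f} billion")
    # Roughly $30-40 billion in GPU hardware alone, consistent with the
    # "tens of billions of dollars" figure above.
```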
It's not just the raw AI GPU costs, but also the associated costs of expanding xAI's new Colossus supercomputer: building out the facility, powering it, and cooling the vast array of AI servers.
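As a similarly rough sketch of why power and cooling matter, the estimate below assumes roughly 700 W per GPU and a 1.4x overhead factor for host systems, networking, and cooling; both numbers are illustrative assumptions, not reported figures from xAI.

```python
# Rough facility power estimate for scaling from 100K to 1M AI GPUs.
# TDP and overhead figures are assumptions for illustration; the article
# does not give power numbers.

def facility_power_mw(num_gpus: int, gpu_tdp_watts: float, overhead_factor: float) -> float:
    """Estimate total facility draw in megawatts, including host/cooling overhead."""
    return num_gpus * gpu_tdp_watts * overhead_factor / 1e6

if __name__ == "__main__":
    # Assumed ~700 W per H100/B200-class GPU and ~1.4x overhead (hypothetical).
    for count in (100_000, 1_000_000):
        mw = facility_power_mw(count, gpu_tdp_watts=700, overhead_factor=1.4)
        print(f"{count:,} GPUs -> ~{mw:,.0f} MW")
    # ~100 MW at today's scale, approaching ~1,000 MW (1 GW) at 1M GPUs,
    # which helps explain why the local grid alone hasn't been enough.
```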
- Read more: xAI Colossus AI supercomputer with 100K x NVIDIA H100 AI GPUs in-depth look
- Read more: Elon Musk asks if Tesla should invest another $5 billion into xAI startup
- Read more: Elon Musk turns on xAI's new AI supercomputer: 100K x NVIDIA H100 at 4:20am
- Read more: Elon Musk to buy $10B+ worth of NVIDIA B200 AI GPUs for xAI supercomputer
- Read more: Elon Musk's new Memphis Supercluster uses gigantic portable power generators, grid isn't enough