Elon Musk has just powered on xAI's new Memphis Supercluster supercomputer, built on 100,000 NVIDIA H100 AI GPUs worth up to $4 billion, which he calls the "most powerful AI training cluster in the world".
Elon Musk took to X, posting: "Nice work by xAI team, X team, NVIDIA and supporting companies getting Memphis Supercluster training started at ~4:20am local time. With 100K liquid-cooled H100s on a single RDMA fabric, it's the most powerful AI training cluster in the world!"
NVIDIA's current-gen Hopper H100 80GB AI GPUs cost between $30,000 and $40,000 each, putting the hardware bill at roughly $3 billion to $4 billion. Measured against the $6 billion xAI raised in May 2024 at a $24 billion valuation, that means Musk's AI startup has poured somewhere between 50% and 67% of its fundraising into NVIDIA's flagship H100 AI GPUs.
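For the curious, here's a quick back-of-the-envelope sketch of that math. The GPU count and raise are as reported above; the per-GPU prices are the reported market range, not official NVIDIA pricing:

```python
# Back-of-the-envelope math for the figures above (reported ranges, not official pricing).
GPU_COUNT = 100_000
PRICE_LOW, PRICE_HIGH = 30_000, 40_000  # reported per-H100 price range, USD
RAISED = 6_000_000_000                  # xAI's May 2024 raise, USD

cost_low = GPU_COUNT * PRICE_LOW        # $3.0 billion
cost_high = GPU_COUNT * PRICE_HIGH      # $4.0 billion
share_low = cost_low / RAISED           # 50% of the raise
share_high = cost_high / RAISED         # ~67% of the raise

print(f"Hardware bill: ${cost_low / 1e9:.1f}B to ${cost_high / 1e9:.1f}B")
print(f"Share of $6B raise: {share_low:.0%} to {share_high:.0%}")
```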
Earlier this month, Musk said his next-gen Grok 3 AI chatbot would be trained on 100,000 NVIDIA H100 AI GPUs, and now the cluster is online... at 4:20am of all times, good ol' Elon. You love to see it.