OpenAI CEO Sam Altman took to X to confirm that the AI firm will have over 1 million GPUs online by the end of the year, a figure that's hard to picture. However, even as you try to imagine what 1 million cutting-edge GPUs looks like, Altman added that he'd much rather see 100 million GPUs go online.
After saying that he's "very proud of the team" for reaching the 1 million GPU milestone, he joked that they "better get to work figuring out how to 100x that." To put the milestone into perspective, xAI's headline-grabbing Grok 4 model is powered by around 200,000 NVIDIA H100 GPUs, which suggests OpenAI is working with roughly five times the GPU count of xAI.
The 100 million GPU figure is not currently feasible; earlier this year, Sam Altman announced that OpenAI was delaying the release of its GPT-4.5 model because it was "out of GPUs." That's a good problem to have if you're NVIDIA, as companies like OpenAI, xAI, Microsoft, and others are buying up GPUs as quickly as they can be produced.
Although the 100 million GPU figure is widely regarded as a joke, as it would require an incredible amount of power and physical space to be realized, this hasn't stopped people from speculating about the cost of the hardware alone. According to The AI Investor on X, 100 million NVIDIA Blackwell GPUs at around $30,000 each would cost around $3 trillion - assuming NVIDIA doesn't give OpenAI a discount for buying so many at once.
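Taking those figures at face value, the back-of-envelope math is easy to check. A minimal sketch, assuming the $30,000-per-Blackwell price quoted above (not an official NVIDIA list price):

```python
# Back-of-envelope check of the speculative hardware cost.
# Assumptions from the article: ~100 million GPUs at ~$30,000 each,
# with no volume discount factored in.
gpu_count = 100_000_000
unit_price_usd = 30_000

total_usd = gpu_count * unit_price_usd
print(f"${total_usd / 1e12:.1f} trillion")  # $3.0 trillion
```

Note this covers only the GPUs themselves, leaving out the power, cooling, networking, and data center construction that the article points out would also be required.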
This insatiable demand for GPU hardware is one of the reasons companies like Oracle, Google, AMD, Microsoft, and others are creating their own AI chips. Although there hasn't been any announcement, Sam Altman has hinted that OpenAI could be on the path to developing its own AI chips. In a world where 1 million GPUs isn't enough and you're looking to scale that by a factor of 100, this would make a lot of sense.



