AMD has been asked about building an insane supercomputer that would house a staggering 1.2 million data center AI GPUs, with the company fielding inquiries from "unknown clients" for that huge number of AI accelerators.
In a recent interview with The Next Platform, AMD's EVP and GM of the Datacenter Solutions Group, Forrest Norrod, revealed that AMD has had inquiries from "unknown clients" requiring an insane number of AI accelerators, confirming talk of the huge AI supercomputer.
1.2 million AI GPUs is a gargantuan amount of AI processing power: the world's current largest supercomputer -- Frontier -- features around 38,000 GPUs. A machine with 1.2 million AI accelerators would pack a mind-boggling 30x more GPU horsepower (and that's from the GPUs alone, not counting the CPUs).
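The scale gap above is simple to sanity-check. A quick back-of-the-envelope calculation using the article's own figures (roughly 38,000 GPUs in Frontier versus the rumored 1.2 million):

```python
# Back-of-the-envelope comparison using the figures cited in the article.
frontier_gpus = 38_000          # approximate GPU count of Frontier
rumored_cluster_gpus = 1_200_000  # the 1.2 million GPUs reportedly inquired about

ratio = rumored_cluster_gpus / frontier_gpus
print(f"Roughly {ratio:.1f}x Frontier's GPU count")  # ~31.6x, i.e. about 30x
```

In other words, the "30x" figure is a slight rounding down; the exact ratio works out closer to 31.6x.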
- TPM: "What's the biggest AI training cluster that somebody is serious about - you don't have to name names. Has somebody come to you and said, with MI500, I need 1.2 million GPUs, or whatever?"
- Forrest Norrod: "Is it in that range? Yes."
- TPM: "You can't just say "it's in that range." What's the biggest actual number?"
- Forrest Norrod: "I am dead serious, it is in that range."
- TPM: "For one machine?"
- Forrest Norrod: "Yes, I'm talking about one machine."
- TPM: "It boggles the mind a little bit, you know?"
- Forrest Norrod: "I understand that. The scale of what's being contemplated is mind-blowing. Now, will all of that come to pass? I don't know. But there are public reports that very sober people are contemplating spending tens of billions of dollars, or even a hundred billion dollars, on training clusters."