TensorWave is a high-end cloud service provider (CSP) built on AMD AI hardware, and it has just announced plans to build the world's largest GPU clusters using AMD Instinct MI300X, MI325X, and MI350X AI accelerators.
In a statement, TensorWave CEO Darrick Horton said the company is working towards building the world's largest AMD AI GPU cluster. In full, he said: "with our 1 Gigawatt of capacity, we will massively scale deployments in 2025 and build the world's largest AMD GPU clusters, powered by MI300X, MI325X, and MI350, and beyond".
Horton continued: "these clusters will be the first to leverage Ultra Ethernet fabrics and will offer truly compelling performance with unmatched scale and efficiency".
TensorWave's new AMD GPU clusters will draw up to 1 gigawatt of power, so we should expect some serious compute performance from future projects on TensorWave's roadmap. One of the nuggets of info that we did get from TensorWave is that the clusters will use the new "Ultra Ethernet" interconnect standard, which is designed to scale better than traditional Ethernet inside huge AI clusters.
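For a rough sense of what 1 gigawatt could mean in accelerator terms, here is a back-of-envelope sketch. None of these figures come from TensorWave or AMD: the per-accelerator board power, host overhead, and facility efficiency numbers below are assumptions for illustration only.

```python
# Back-of-envelope estimate: how many Instinct-class accelerators could a
# 1-gigawatt facility feed? All figures are rough assumptions for
# illustration, not TensorWave or AMD specifications.

FACILITY_POWER_W = 1_000_000_000   # 1 gigawatt of total facility capacity
PUE = 1.3                          # assumed power usage effectiveness (cooling, losses)
GPU_BOARD_POWER_W = 1_000          # assumed ~1 kW per accelerator board
HOST_OVERHEAD_RATIO = 0.35         # assumed CPU, memory, networking, storage share per GPU

it_power_w = FACILITY_POWER_W / PUE                      # power left for IT equipment
power_per_gpu_w = GPU_BOARD_POWER_W * (1 + HOST_OVERHEAD_RATIO)
gpu_count = it_power_w / power_per_gpu_w

print(f"IT power available:      {it_power_w / 1e6:,.0f} MW")
print(f"All-in power per GPU:    {power_per_gpu_w:,.0f} W")
print(f"Rough accelerator count: {gpu_count:,.0f}")
```

Under those assumptions the math works out to several hundred thousand accelerators, which gives a feel for the scale Horton is describing, even if TensorWave's actual deployment mix will differ.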
- Read more: AMD introduces El Capitan: the world's fastest supercomputer pumping 1.742 exaflops of power
- Read more: AMD announces 10.10 'Advancing AI 2024' event: EPYC Turin, Instinct MI325X, Ryzen AI 300 PRO
- Read more: AMD details Instinct MI300X MCM GPU: 192GB of HBM3 out now, MI325X with 288GB HBM3E in October
- Read more: AMD teases Instinct MI325X refresh in Q4, MI350 'CDNA 4' in 2025, MI400 'CDNA Next' in 2026