NVIDIA has unleashed its next-gen Hopper GPU architecture, and with it come new GPUs, ultra-fast HBM3 memory, the fastest silicon on the planet, and now NVIDIA Eos: the world's fastest AI supercomputer.

The new NVIDIA "Eos" supercomputer will be the world's fastest AI supercomputer when it is switched on later this year, packing 576 x DGX H100 systems for a total of 4608 x H100 GPUs. NVIDIA expects Eos to deliver a huge 18.4 exaflops of AI computing performance: 4x faster than the AI processing of the Fugaku supercomputer in Japan.
Fugaku is currently the world's fastest system for AI processing, and NVIDIA expects its new Eos supercomputer to serve as a "blueprint for advanced AI infrastructure from NVIDIA, as well as its OEM and cloud partners".
- Read more: NVIDIA reveals next-gen Hopper GPU architecture, H100 GPU announced
- Read more: NVIDIA can sustain the world's internet traffic with 20 x H100 GPUs
- Read more: NVIDIA Hopper GPU is up to 40x faster with new DPX instructions
- Read more: NVIDIA announces new DGX H100 system: 8 x Hopper-based H100 GPUs
- Read more: NVIDIA is turning data centers into 'AI factories' with Hopper GPU
As for the NVIDIA H100 GPU itself, it's based on the new Hopper GPU architecture, built on TSMC's new 4N process node with 80 billion transistors, and paired with ultra-fast HBM3 memory offering up to 3TB/sec of memory bandwidth.
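As a rough sanity check on that 3TB/sec figure, here's a quick back-of-the-envelope sketch. The 5120-bit bus (five 1024-bit HBM3 stacks) and per-pin data rate used here are our assumptions for illustration, not NVIDIA-confirmed specs for every H100 variant:

```python
# Back-of-the-envelope HBM3 bandwidth estimate for the H100.
# ASSUMPTION: five active 1024-bit HBM3 stacks -> 5120-bit memory bus.
bus_width_bits = 5 * 1024
bus_width_bytes = bus_width_bits // 8          # 640 bytes per transfer

# ASSUMPTION: ~4.7 Gbps per pin (the HBM3 spec allows up to 6.4 Gbps).
pin_rate_gbps = 4.7

bandwidth_tb_s = bus_width_bytes * pin_rate_gbps * 1e9 / 1e12
print(f"~{bandwidth_tb_s:.1f} TB/sec")         # ~3.0 TB/sec
```

At those assumed numbers the math lands right on the quoted 3TB/sec, which is why a wider bus or faster pins would push a production part even higher.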

Inside each DGX H100 system you'll find 8 x NVIDIA H100 GPUs connected as a single GPU through NVIDIA NVLink, with each DGX H100 system pushing 32 petaflops of AI performance at the new FP8 precision, a 6x increase over the previous generation.
With 8 x H100 GPUs per DGX H100 system and 576 DGX H100 systems in total, the Eos supercomputer packs 4608 x H100 GPUs, as the quick math below shows.
An astonishingly powerful amount of AI-boosting GPU silicon.
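For anyone who wants to check NVIDIA's math, here's a minimal sketch reproducing the headline figures from the numbers above. The per-GPU FP8 rate is derived from the per-system figure, and the 5 petaflops DGX A100 baseline for the 6x comparison is our assumption about which previous-generation number NVIDIA is referencing:

```python
# Reproducing the Eos headline numbers from the per-system specs above.
systems = 576                     # DGX H100 systems in Eos
gpus_per_system = 8               # H100 GPUs per DGX H100 system
total_gpus = systems * gpus_per_system
print(total_gpus)                 # 4608 H100 GPUs

fp8_pflops_per_system = 32        # petaflops of FP8 AI performance per DGX H100
fp8_pflops_per_gpu = fp8_pflops_per_system / gpus_per_system
print(fp8_pflops_per_gpu)         # 4 petaflops of FP8 per H100 GPU

eos_exaflops = systems * fp8_pflops_per_system / 1000
print(f"{eos_exaflops:.1f} exaflops")   # 18.4 exaflops of AI performance

# ASSUMPTION: the previous-gen DGX A100 was rated at 5 petaflops of AI
# performance, which is where the roughly 6x generational uplift comes from.
print(f"{fp8_pflops_per_system / 5:.1f}x")   # ~6.4x over DGX A100
```

Run it and the 4608 GPU count and 18.4 exaflops figure fall straight out of NVIDIA's own per-system specs.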