NVIDIA announces H200 AI GPU: up to 141GB of HBM3e memory with 4.8TB/sec bandwidth

NVIDIA announces its new H200 Hopper GPU with the world's fastest HBM3e memory from Micron, while its Grace Hopper Superchips will power the new exaflop-class Jupiter supercomputer.


NVIDIA has just announced its new H200 Hopper AI GPU -- teasing its next-gen B100 Blackwell GPU at the same time -- with the H200 coming in 2024 with the world's fastest HBM3e memory from Micron. NVIDIA also announced that its Grace Hopper Superchips will power the new exaflop-class Jupiter supercomputer.

NVIDIA's new H200 AI GPU (source: NVIDIA)

The new NVIDIA H200 GPUs feature Micron's latest HBM3e memory, with capacities of up to 141GB per GPU and up to 4.8TB/sec of memory bandwidth. That is roughly 1.8x the capacity of the HBM3 memory on H100, and up to 1.4x more HBM memory bandwidth than H100. NVIDIA uses either 4 or 8 H200 GPUs in its new HGX H200 servers, so in the 8-GPU configuration you're looking at a huge 1.1TB of HBM3e memory, with each GPU offering up to 4.8TB/sec of memory bandwidth. Impressive numbers, NVIDIA.
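As a rough sanity check on those generational claims, here is a minimal sketch of the arithmetic. The H100 baseline figures (80GB of HBM3 at roughly 3.35TB/sec for the SXM part) are assumptions pulled from NVIDIA's public specs rather than from this announcement.

```python
# Rough sanity check of the H200 vs. H100 memory figures quoted above.
# H100 SXM baseline (80GB HBM3, ~3.35TB/sec) is an assumption from NVIDIA's
# published specs, not from this article.

h100_capacity_gb = 80
h100_bandwidth_tbs = 3.35

h200_capacity_gb = 141
h200_bandwidth_tbs = 4.8

print(f"Capacity gain:  {h200_capacity_gb / h100_capacity_gb:.2f}x")     # ~1.76x, i.e. the quoted "1.8x"
print(f"Bandwidth gain: {h200_bandwidth_tbs / h100_bandwidth_tbs:.2f}x")  # ~1.43x, i.e. the quoted "1.4x"

# An 8-GPU HGX H200 board aggregates the per-GPU memory:
gpus_per_hgx = 8
total_hbm_tb = gpus_per_hgx * h200_capacity_gb / 1000
print(f"HGX H200 (8 GPUs): {total_hbm_tb:.2f}TB of HBM3e")  # ~1.13TB, i.e. the quoted "1.1TB"
```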

NVIDIA also promises that its new H200 AI GPUs will be compatible with its existing HGX H100 systems, which makes it easy for customers to upgrade. Partners including ASUS, ASRock Rack, Dell, Eviden, GIGABYTE, Hewlett Packard Enterprise, Ingrasys, Lenovo, QCT, Wiwynn, Supermicro, and Wistron will have updated solutions when the H200 GPUs become available in Q2 2024.

NVIDIA's current H100 AI GPU is limited to 80GB of HBM3 memory, so the jump to 141GB of HBM3e at up to 4.8TB/sec of memory bandwidth is a significant step up. The H100 was the world's first GPU with HBM3 memory, and now HBM3e is powering the H200.

NVIDIA didn't just announce its new H200 AI GPU and tease its next-gen B100 Blackwell AI GPU; it also announced a major supercomputing contract win: NVIDIA's Grace Hopper Superchips (GH200) will power the Jupiter supercomputer, located at the Forschungszentrum Jülich facility in Germany as part of the EuroHPC Joint Undertaking, with the build contracted to Eviden and ParTec. The new Jupiter supercomputer will be used for material science, climate research, drug discovery, and more.


Jupiter defines a new class of supercomputers designed to propel AI for scientific discovery, with a huge 93 ExaFLOPS of AI performance from its 24,000 GH200 Grace Hopper Superchips, 1.0 ExaFLOP of delivered HPC performance, Quantum-2 InfiniBand networking with 1.2PB/sec of aggregate bandwidth, and 18.2MW of power consumption. It should handle Alan Wake 2 maxed out at 16K 360FPS, that's for sure.
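For a back-of-the-envelope feel for where that 93 ExaFLOPS headline comes from, here is a hedged sketch. It assumes the figure refers to low-precision FP8 AI throughput and that each GH200 Superchip contributes roughly the H100's published peak of ~3.96 PetaFLOPS of FP8 with sparsity; neither assumption is stated in the announcement.

```python
# Back-of-the-envelope check on Jupiter's headline AI number.
# The ~3.958 PFLOPS FP8-with-sparsity figure per GH200 is an assumption
# based on NVIDIA's published peak H100 specs, not stated in this article.

num_superchips = 24_000
fp8_pflops_per_superchip = 3.958  # assumed peak FP8 (sparse) per GH200

total_exaflops = num_superchips * fp8_pflops_per_superchip / 1000
print(f"Estimated AI performance: ~{total_exaflops:.0f} ExaFLOPS")  # ~95 EF, close to the quoted 93 EF

# Average power per Superchip implied by the quoted 18.2MW system draw:
system_power_mw = 18.2
watts_per_superchip = system_power_mw * 1_000_000 / num_superchips
print(f"Implied power budget: ~{watts_per_superchip:.0f}W per GH200 Superchip")  # ~758W
```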

NEWS SOURCE: wccftech.com

