NVIDIA upgrades Vera Rubin HBM4 bandwidth by 10% in order to stay ahead of AMD Instinct MI455X

NVIDIA has been quietly upgrading its next-gen Vera Rubin AI chips, bumping up the HBM4 specs to ensure it beats AMD's new Instinct MI455X AI chip.

TL;DR: NVIDIA updated its Vera Rubin NVL72 AI server at CES 2026, boosting HBM4 memory bandwidth by 10% to 22.2TB/sec, surpassing AMD's Instinct MI455X. This enhancement, driven by competitive pressure, leverages faster 8-Hi HBM4 stacks to deliver superior AI acceleration performance.

NVIDIA revised its Vera Rubin VR200 NVL72 AI server spec at CES 2026 a couple of weeks ago, increasing the HBM4 memory bandwidth by 10% to ensure it beats AMD's upcoming Instinct MI455X AI accelerator.

According to a new post on X from @SemiAnalysis, the NVIDIA Vera Rubin NVL72 AI server specifications now list HBM4 memory bandwidth at 22.2TB/sec, a 10% increase over the specs NVIDIA disclosed at GTC 2025 last year.

The original 13TB/sec of HBM4 memory bandwidth was already impressive when the Vera Rubin NVL144 specs were unveiled last year, with NVIDIA initially asking memory makers for 9Gbps per pin from HBM4, and later a faster 10-11Gbps. So why did NVIDIA upgrade the HBM4 bandwidth on Vera Rubin?

NVIDIA is using 8-Hi HBM4 stacks for Vera Rubin, which is why it has been pushing HBM makers to exceed JEDEC's rated HBM4 specifications, asking for pin speeds of up to 11Gbps. AMD, on the other hand, is using 12-Hi HBM4 stacks on the Instinct MI455X, resulting in 19.6TB/sec of memory bandwidth, so NVIDIA needed that 10% higher HBM4 bandwidth, now at 22.2TB/sec, for its upcoming Vera Rubin NVL72 AI servers to stay ahead.
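To see where the headline numbers plausibly come from, here is a back-of-the-envelope sketch of HBM bandwidth math: per-pin speed times interface width times stack count. The 2048-bit per-stack interface comes from the JEDEC HBM4 standard; the 8-stacks-per-GPU figure is an assumption for illustration, not a spec confirmed by NVIDIA.

```python
def hbm_bandwidth_tb_s(pin_speed_gbps: float, stacks: int,
                       bus_width_bits: int = 2048) -> float:
    """Aggregate memory bandwidth in TB/s.

    pin_speed_gbps: per-pin data rate (Gbps)
    stacks: number of HBM stacks on the package (assumed, illustrative)
    bus_width_bits: interface width per stack (2048-bit per JEDEC HBM4)
    """
    gb_per_s = pin_speed_gbps * bus_width_bits * stacks / 8  # bits -> bytes
    return gb_per_s / 1000  # GB/s -> TB/s

# Assuming 8 HBM4 stacks per GPU:
print(round(hbm_bandwidth_tb_s(11.0, 8), 1))  # ~22.5 TB/s at 11 Gbps per pin
print(round(hbm_bandwidth_tb_s(9.0, 8), 1))   # ~18.4 TB/s at the 9 Gbps baseline
```

Under these assumptions, pushing pin speed from the 9Gbps baseline toward 11Gbps is what moves an 8-Hi configuration past AMD's 12-Hi 19.6TB/sec figure, which is consistent with the 22.2TB/sec now quoted for Vera Rubin.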


For more CES 2026 news coverage, check out our hub for the latest stories.