
Rambus details HBM4 memory controller: up to 10Gb/s, 2.56TB/sec bandwidth, 64GB per stack

Rambus details its new HBM4 memory controller: up to 10Gb/s speeds, 2.56TB/sec of memory bandwidth, and 64GB capacities per stack for next-gen AI GPUs.


Rambus has provided more details on its upcoming HBM4 memory controller, which offers some huge upgrades over current HBM3 and HBM3E memory controllers.


JEDEC is still finalizing the HBM4 memory specification, but Rambus is already teasing its next-gen HBM4 memory controller, aimed at next-gen AI and data center markets and continuing to expand the capabilities of existing HBM DRAM designs.

Rambus' new HBM4 controller starts at 6.4Gb/s per pin, the same pin speed as first-gen HBM3, yet it delivers more bandwidth than even faster-clocked HBM3E memory thanks to its wider interface, while keeping the same 16-Hi stack and 64GB maximum capacity design. HBM4 starting bandwidth is 1638GB/sec (1.63TB/sec), which is 33% faster than HBM3E and 2x faster than HBM3.

HBM3E memory operates at 9.6Gb/s speeds with up to 1.229TB/sec of memory bandwidth per stack, while HBM4 memory will offer up to 10Gb/s speeds and a much bigger 2.56TB/sec of bandwidth per HBM interface. This is a 2x increase over the just-launched HBM3E, but the full capabilities of HBM4 memory won't be realized for a while yet (NVIDIA's next-gen Rubin R100 will use HBM4 in 2026).
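The bandwidth figures above all fall out of one simple formula: per-stack bandwidth is pin speed multiplied by interface width. As a quick sanity check (our own arithmetic, not from Rambus), here is a minimal sketch, assuming the 1024-bit interface of HBM3/HBM3E and the 2048-bit interface of HBM4 from the JEDEC designs:

```python
# Per-stack HBM bandwidth = pin speed (Gb/s) x interface width (bits) / 8.
# Interface widths: 1024-bit for HBM3/HBM3E, 2048-bit for HBM4.

def stack_bandwidth_gbs(pin_gbps: float, bus_bits: int) -> float:
    """Return per-stack bandwidth in GB/s (divide by 8 to convert bits to bytes)."""
    return pin_gbps * bus_bits / 8

configs = {
    "HBM3 (6.4Gb/s x 1024-bit)": (6.4, 1024),        # ~819 GB/s
    "HBM3E (9.6Gb/s x 1024-bit)": (9.6, 1024),       # ~1229 GB/s, the 1.229TB/s above
    "HBM4 base (6.4Gb/s x 2048-bit)": (6.4, 2048),   # ~1638 GB/s, the 1.63TB/s above
    "HBM4 max (10Gb/s x 2048-bit)": (10.0, 2048),    # 2560 GB/s, the 2.56TB/s above
}

for name, (speed, width) in configs.items():
    print(f"{name}: {stack_bandwidth_gbs(speed, width):.0f} GB/s")
```

Note how HBM4's doubled interface width, not its pin speed, does most of the work: even at HBM3-class 6.4Gb/s pins it already beats HBM3E's per-stack bandwidth.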


Rambus talked about some of the other features of HBM4, which include ECC, RMW (Read-Modify-Write), Error Scrubbing, and more.

South Korean memory giant SK hynix is the only company mass-producing new 12-layer HBM3E memory with up to 36GB capacities and 9.6Gbps speeds, but next-gen HBM4 memory from SK hynix is expected to tape out next month, while Samsung is gearing up for HBM4 mass production before the end of 2025, with its tape-out expected in Q4 2024.


We're expecting NVIDIA's next-gen Rubin R100 AI GPUs to use a 4x reticle design (compared to Blackwell's 3.3x reticle design), made with TSMC's bleeding-edge CoWoS-L packaging technology on the new N3 process node. TSMC recently discussed up to 5.5x reticle-size chips arriving in 2026, featuring a 100 x 100mm substrate that would accommodate 12 HBM sites, versus 8 HBM sites on current-gen 80 x 80mm packages.

TSMC will also be shifting to a new SoIC design that allows larger than 8x reticle sizes on a bigger 120 x 120mm package configuration, but as Wccftech points out, these are still in the planning stage, so Rubin R100 AI GPUs will likely land somewhere around the 4x reticle size.
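Those HBM site counts translate directly into aggregate package bandwidth. As a rough illustration (our own back-of-the-envelope math, not a figure from TSMC or NVIDIA), assuming every site holds one full-speed 2.56TB/s HBM4 stack:

```python
# Aggregate memory bandwidth across all HBM sites on a package,
# assuming each site carries a full-speed HBM4 stack (2.56 TB/s).

HBM4_STACK_TBS = 2.56  # TB/s per stack at 10Gb/s x 2048-bit

def package_bandwidth_tbs(hbm_sites: int, per_stack_tbs: float = HBM4_STACK_TBS) -> float:
    """Total bandwidth for a package with the given number of HBM sites."""
    return hbm_sites * per_stack_tbs

print(package_bandwidth_tbs(8))   # current-gen 80 x 80mm package, 8 sites
print(package_bandwidth_tbs(12))  # 100 x 100mm substrate, 12 sites
```

That works out to roughly 20.5TB/sec for an 8-site package versus roughly 30.7TB/sec for a 12-site one, which is why the larger substrates matter so much for next-gen AI GPUs.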

NEWS SOURCE: wccftech.com

Anthony joined the TweakTown team in 2010 and has since reviewed hundreds of graphics cards. Anthony is a long-time PC enthusiast with a passion of hate for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.
