The JEDEC Solid State Technology Association has published its highly anticipated High Bandwidth Memory (HBM) DRAM standard: HBM4.

HBM4 is designed as an "evolutionary" step beyond the previous HBM3 standard: the new "JESD270-4 HBM4" specification further enhances data processing rates while delivering higher bandwidth, better power efficiency, and increased capacity per die and/or stack.
The advancements coming with HBM4 are vital for applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing (HPC), high-end graphics cards, and servers. HBM4 includes multiple improvements over the HBM3 standard, including:
- Increased Bandwidth: With transfer speeds up to 8 Gb/s across a 2048-bit interface, HBM4 boosts total bandwidth up to 2 TB/s.
- Doubled Channels: HBM4 doubles the number of independent channels per stack, from 16 channels (HBM3) to 32 channels with 2 pseudo-channels per channel. This provides designers with more flexibility and independent ways to access the cube.
- Power Efficiency: JESD270-4 supports vendor-specific VDDQ (0.7V, 0.75V, 0.8V or 0.9V) and VDDC (1.0V or 1.05V) levels, resulting in lower power consumption and improved energy efficiency.
- Compatibility and Flexibility: The HBM4 interface definition ensures backwards compatibility with existing HBM3 controllers, allowing for seamless integration and flexibility in various applications and allowing a single controller to work with both HBM3 and HBM4 if needed.
- Directed Refresh Management (DRFM): HBM4 incorporates Directed Refresh Management (DRFM) for improved row-hammer mitigation and Reliability, Availability, and Serviceability (RAS).
- Capacity: HBM4 supports 4-high, 8-high, 12-high and 16-high DRAM stack configurations with 24 Gb or 32 Gb die densities, for a maximum cube density of 64 GB (a 16-high stack of 32 Gb dies).
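As a quick sanity check on the headline figures above, the peak bandwidth and maximum cube capacity follow directly from the quoted per-pin speed, interface width, die density, and stack height. The sketch below is illustrative arithmetic only, not part of the JEDEC specification:

```python
# Illustrative arithmetic behind the HBM4 figures quoted above.

BITS_PER_BYTE = 8

def peak_bandwidth_tbps(pin_speed_gbps: float, interface_width_bits: int) -> float:
    """Peak bandwidth in TB/s for a given per-pin speed and interface width."""
    return pin_speed_gbps * interface_width_bits / BITS_PER_BYTE / 1000

def cube_capacity_gb(die_density_gbit: int, stack_height: int) -> float:
    """Total cube capacity in GB for a given die density and stack height."""
    return die_density_gbit * stack_height / BITS_PER_BYTE

# 8 Gb/s per pin across a 2048-bit interface -> 2.048 TB/s (roughly "up to 2 TB/s")
print(peak_bandwidth_tbps(8, 2048))

# 32 Gb dies in a 16-high stack -> 64.0 GB per cube
print(cube_capacity_gb(32, 16))
```

This also shows where HBM4's gains come from relative to HBM3: the interface width doubles from 1024 to 2048 bits, so even at a similar per-pin speed the total bandwidth per stack doubles.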
Barry Wagner, Director of Technical Marketing at NVIDIA and JEDEC HBM Subcommittee Chair, said: "High performance computing platforms are evolving rapidly and require innovation in memory bandwidth and capacity. Developed in collaboration with technology industry leaders, HBM4 is designed to drive a leap forward in efficient, high performance computing for AI and other accelerated applications".
Mian Quddus, Chairman of the JEDEC Board of Directors, added: "JEDEC members are dedicated to developing the standards needed to support the technology of the future. The HBM Subcommittee's efforts to continuously improve the HBM standard hold the potential to drive significant advancements in a wide variety of applications".
Boyd Phelps, Senior Vice President and General Manager of the Silicon Solutions Group at Cadence, said: "The tremendous growth in AI model sizes demands higher memory bandwidth to improve the efficiency of AI hardware systems with heterogeneous compute architectures, ensuring rapid and seamless data movement at a large scale. The HBM4 standard addresses this need for higher bandwidth with significant enhancements. Through our collaboration with JEDEC and ecosystem partners, Cadence is facilitating this transition by delivering the industry's highest-performing HBM4 memory subsystem with the lowest power and area".
Nikhil Jayaram, VP, Google Cloud Silicon, said: "Memory bandwidth is one of the key pillars of performance for AI computing systems. JEDEC HBM4 represents the big step in bandwidth that Google needs for next generation training and inference systems. We look forward to the advances in AI that HBM4-based systems will enable and to collaborate with JEDEC to extend HBM into the future".