Samsung accelerates HBM4E process, aims for 3.25TB/sec bandwidth ready for NVIDIA Rubin AI GPUs

Samsung Electronics is targeting over 3TB/sec of bandwidth for its next-gen HBM4E memory, with a goal of up to 3.25TB/sec.

TL;DR: Samsung Electronics is advancing its next-gen HBM4E memory with 13Gbps per-pin speeds, delivering up to 3.25TB/sec of bandwidth (over 2.5x faster than HBM3E) and more than doubling power efficiency. Targeted for mass production in 2027, HBM4E answers NVIDIA's demand for higher-bandwidth memory in its AI and next-gen GPUs.

Samsung Electronics has accelerated development of its next-gen HBM4E memory, targeting up to 3.25TB/sec of memory bandwidth, over 2.5x the bandwidth of its current HBM3E chips.

NVIDIA recently asked HBM manufacturers SK hynix, Samsung, and Micron to increase the bandwidth of their next-gen HBM4 memory. At the OCP Global Summit 2025, Samsung revealed its development target for HBM4E: per-pin speeds of at least 13Gbps, with mass production set for 2027.

Samsung's next-gen HBM4E memory would have 2048 data I/O pins; at 13Gbps per pin, that works out to 3.25TB/sec of memory bandwidth (2048 pins x 13Gbps, divided by 8 bits per byte). On top of that, Samsung says HBM4E's power efficiency is over 2x better than current HBM3E memory.
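The arithmetic above can be sketched in a few lines (a back-of-the-envelope check using the article's figures, with binary units, i.e. 1TB = 1024GB):

```python
# Bandwidth of one HBM4E stack from the figures in the article.
pins = 2048          # data I/O pins per stack
gbps_per_pin = 13    # Samsung's HBM4E per-pin target

total_gbits = pins * gbps_per_pin    # 26,624 Gbit/s across all pins
total_gbytes = total_gbits / 8       # 3,328 GB/s (8 bits per byte)
total_tbytes = total_gbytes / 1024   # 3.25 TB/s in binary units

print(f"{total_tbytes:.2f} TB/s")    # → 3.25 TB/s
```

Note that the clean 3.25TB/sec figure only falls out with binary units; in decimal units (1TB = 1000GB) the same numbers give about 3.33TB/sec.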

Rewind to January this year, when Samsung raised its HBM4E per-pin bandwidth target from 8Gbps to 10Gbps. NVIDIA then stepped in and asked for even higher speeds, ready for its next-gen Vera Rubin AI chips, and Samsung's HBM4E target now sits at 13Gbps.

The JEDEC specification for HBM4 sits at 8Gbps per pin, but NVIDIA demanded speeds above 10Gbps from its HBM manufacturers, with SK hynix and Samsung both hitting 11Gbps. HBM4E will push well past that threshold at 13Gbps, and Samsung seems quite confident in the target, another good sign for its semiconductor business, Samsung Foundry.

Even before HBM4 ships on Rubin AI GPUs, early demonstrations of higher-than-expected bandwidth had industry observers predicting that HBM4E performance targets would be raised. Samsung is now the first to do so, teasing that 13Gbps per-pin HBM4E memory is coming.