SK hynix is expediting its HBM roadmap, which includes HBM4 and HBM4E memory, originally planned for mass production in 2026 and 2027, respectively.
Now, the timelines have been moved up, and HBM4 is set for mass production in 2025, while HBM4E will enter mass production in 2026, according to sources from Business Korea. The adjustments "align" with NVIDIA's accelerated AI accelerator release cycle, which has shortened from two years to one year.
NVIDIA's current-gen Hopper GPU architecture, with the H100 and beefed-up H200, already dominates the market, while the new Blackwell GPU architecture's B100 and B200 AI GPUs sport faster HBM3E memory. NVIDIA has also teased its next-gen Rubin R100 AI GPU, which will feature HBM4 memory and drop in Q4 2025.
An industry insider told Business Korea: "SK Hynix is speeding up the implementation of its next-generation HBM roadmap, including HBM4 and HBM4E. The company has moved up its plans to mass-produce HBM4 in 2025 and HBM4E in 2026 by one year each."
The next-generation HBM4 offers a huge 40% increase in bandwidth over HBM3E, the fastest memory in the world, along with a rather incredible 70% reduction in power consumption. HBM4 density will also be 1.3x higher; combined, these advancements deliver a leap in performance and efficiency that is a key driver of NVIDIA's continued AI GPU dominance.