Micron has just announced it has started volume production of its bleeding-edge HBM3e memory, with the company's HBM3e known good stack dies (KGSDs) shipping as part of NVIDIA's upcoming H200 AI GPU.
NVIDIA's new beefed-up H200 AI GPU will feature up to 141GB of ultra-fast HBM3e memory from Micron, built from the company's mass-produced 24GB 8-Hi HBM3e stacks. Each stack delivers data transfer rates of 9.2GT/s and peak memory bandwidth of over 1.2TB/sec. That is a 44% increase in memory bandwidth over HBM3, which provides the extra AI grunt the H200 has over the H100 AI GPU and its regular HBM3 memory.
The 141GB of HBM3e memory on the NVIDIA H200 AI GPU delivers up to 4.8TB/sec of total memory bandwidth, up from the 80GB of HBM3 memory and 3.35TB/sec of memory bandwidth on the H100 AI GPU.
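A quick back-of-the-envelope check shows where those uplift figures come from. Note that HBM3's 6.4GT/s top pin speed is taken from the JEDEC HBM3 standard, not from the announcement itself:

```python
# Sanity-check the quoted uplift figures.
# Assumption: HBM3 tops out at 6.4GT/s per pin (JEDEC HBM3 spec).
hbm3_pin_speed_gts = 6.4    # standard HBM3 per-pin data rate
hbm3e_pin_speed_gts = 9.2   # Micron HBM3e per-pin data rate quoted above

pin_speed_uplift = hbm3e_pin_speed_gts / hbm3_pin_speed_gts - 1
print(f"HBM3e per-pin uplift over HBM3: {pin_speed_uplift:.0%}")  # 44%

h100_bandwidth_tbs = 3.35   # H100 (HBM3) total memory bandwidth
h200_bandwidth_tbs = 4.8    # H200 (HBM3e) total memory bandwidth

gpu_uplift = h200_bandwidth_tbs / h100_bandwidth_tbs - 1
print(f"H200 GPU-level bandwidth uplift: {gpu_uplift:.0%}")  # 43%
```

The per-pin uplift (44%) and the GPU-level uplift (~43%) land close together because bandwidth scales directly with pin speed for a fixed interface width.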
- Read more: NVIDIA H200 AI GPU: up to 141GB of HBM3e memory with 4.8TB/sec bandwidth
- Read more: NVIDIA splashes huge money on securing HBM3e memory for H200 and B100 AI GPUs
Sumit Sadana, executive vice president and chief business officer at Micron Technology, said in the press release: "Micron is delivering a trifecta with this HBM3E milestone: time-to-market leadership, best-in-class industry performance, and a differentiated power efficiency profile".
Sadana continued: "AI workloads are heavily reliant on memory bandwidth and capacity, and Micron is very well-positioned to support the significant AI growth ahead through our industry-leading HBM3E and HBM4 roadmap, as well as our full portfolio of DRAM and NAND solutions for AI applications".
HBM3E: Fueling the AI Revolution
As the demand for AI continues to surge, the need for memory solutions to keep pace with expanded workloads is critical. Micron's HBM3E solution addresses this challenge head-on with:
- Superior Performance: With pin speed greater than 9.2 gigabits per second (Gb/s), Micron's HBM3E delivers more than 1.2 terabytes per second (TB/s) of memory bandwidth, enabling lightning-fast data access for AI accelerators, supercomputers, and data centers.
- Exceptional Efficiency: Micron's HBM3E leads the industry with ~30% lower power consumption than competing offerings. As AI demand and usage grow, HBM3E delivers maximum throughput at the lowest power consumption, improving key data center operational expense metrics.
- Seamless Scalability: With 24 GB of capacity today, Micron's HBM3E allows data centers to seamlessly scale their AI applications. Whether for training massive neural networks or accelerating inferencing tasks, Micron's solution provides the necessary memory bandwidth.
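The relationship between the quoted pin speed and the bandwidth figure is straightforward: per-stack bandwidth is pin speed times interface width. A minimal sketch, assuming the standard 1024-bit-per-stack HBM interface (part of the HBM3/HBM3E spec, not stated in the article):

```python
# Per-stack HBM bandwidth = per-pin data rate x interface width.
# Assumption: the standard 1024-bit HBM3/HBM3E interface per stack.
def hbm_stack_bandwidth_gbs(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Return per-stack bandwidth in GB/s for a given per-pin data rate."""
    return pin_speed_gbps * bus_width_bits / 8  # 8 bits per byte

bw = hbm_stack_bandwidth_gbs(9.2)
print(f"{bw:.1f} GB/s per stack")  # 1177.6 GB/s at exactly 9.2Gb/s
```

At exactly 9.2Gb/s this works out to roughly 1.18TB/s per stack; the "greater than 9.2Gb/s" pin speed Micron quotes is what pushes the figure past the 1.2TB/s mark.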