Supercomputing 2014: In the world of HPC (High-Performance Computing), the bleeding edge is always the preferred route to raw computational power. HMC (Hybrid Memory Cube) technology is the next big thing, offering substantial performance advantages over existing DRAM. The current generation of HMC sips power while providing more density and performance than existing memory technology. With 15 times the performance, 90 percent less space, and 70 percent less power consumption, it is easy to see why industry leaders are touting HMC's advantages. The key to HMC adoption, as with any new technology, lies in the committees that establish industry-standard interface specifications.
The HMCC (Hybrid Memory Cube Consortium) was founded by Micron, Altera, Open-Silicon, Samsung, and Xilinx in 2011 and has since grown to more than 150 members. At Supercomputing 2014, the HMCC announced the finalization and public availability of the HMC 2.0 specification.
The new specification doubles the per-lane signaling rate from 15 Gb/s to 30 Gb/s and migrates the associated channel model from short reach (SR) to very short reach (VSR). VSR is key to the eventual integration of HMC into the CPU: the path to faster data processing, as with storage, involves moving closer to the processor. That future integration will expand on HMC's already substantial performance advantages. Imagine an L2 cache with 100 times the capacity.
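To put the new 30 Gb/s lane rate in perspective, here is a rough back-of-the-envelope sketch of raw link bandwidth. The lane and link counts (16 lanes per full-width link, 4 links per cube) are assumptions drawn from the HMC specification rather than from this article, and the figures ignore protocol overhead.

```python
# Rough HMC link-bandwidth estimate for the 2.0 per-lane rate.
# Assumptions (from the HMC spec, not this article): a full-width
# link is 16 lanes per direction, and a cube exposes 4 links.

LANE_RATE_GBPS = 30   # HMC 2.0 per-lane signaling rate (1.x was 15)
LANES_PER_LINK = 16   # full-width link, per direction
LINKS_PER_CUBE = 4

def link_bandwidth_gbytes(lane_rate_gbps: float, lanes: int) -> float:
    """Raw one-direction bandwidth of a single link, in GB/s."""
    return lane_rate_gbps * lanes / 8  # bits -> bytes

per_link = link_bandwidth_gbytes(LANE_RATE_GBPS, LANES_PER_LINK)
aggregate = per_link * LINKS_PER_CUBE * 2  # all links, both directions

print(f"{per_link} GB/s per link, {aggregate} GB/s aggregate")
# -> 60.0 GB/s per link, 480.0 GB/s aggregate
```

Doubling the lane rate is what lifts the raw numbers here; at the 1.x rate of 15 Gb/s the same configuration would yield half the bandwidth.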
HMC combines a high-speed logic layer and stacked DRAM layers in a single 3D package connected by TSVs (Through-Silicon Vias). The technology is speeding its way into HPC applications: Intel recently announced integration of HMC technology into its Xeon Phi Knights Landing co-processors.