AMD looks set to shake up the CPU game with its Ryzen processors, offering some nice technology features that even Intel's flagship processors don't have - but Intel is preparing a new CPU architecture aimed at deep neural networks, known as Lake Crest.
Lake Crest has been designed with DNN (Deep Neural Network) workloads in mind, so it can better compete against the GPU-based offerings from AMD and NVIDIA.
Intel acquired deep learning startup Nervana for a reported $350 million last year, and the new Lake Crest architecture is the first result of that acquisition. Naveen Rao, Intel's VP of the Data Center Group and GM for AI Solutions, explains: "We have developed the Nervana hardware especially with regard to deep learning workloads. In this area, two operations are often used: matrix multiplication and convolution".
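To see why these two operations matter, here is a minimal, purely illustrative NumPy sketch (not Intel's code - shapes and values are arbitrary) of the arithmetic behind a fully connected layer (matrix multiplication) and a convolutional layer (convolution):

```python
import numpy as np

# 1) Matrix multiplication -- the core of a fully connected layer.
x = np.random.rand(64, 256)        # batch of 64 activation vectors
w = np.random.rand(256, 128)       # weight matrix
dense_out = x @ w                  # result shape: (64, 128)

# 2) 2D convolution -- the core of a convolutional layer, written
#    naively here to expose the multiply-accumulate arithmetic.
img = np.random.rand(8, 8)
kernel = np.random.rand(3, 3)
conv_out = np.zeros((6, 6))        # "valid" convolution output
for i in range(6):
    for j in range(6):
        conv_out[i, j] = np.sum(img[i:i+3, j:j+3] * kernel)

print(dense_out.shape, conv_out.shape)
```

Both operations reduce to huge numbers of independent multiply-accumulates, which is exactly the kind of work a dedicated accelerator can parallelize.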
The next-generation Lake Crest chip will operate as a Xeon co-processor, and is designed to accelerate AI workloads in a big way thanks to Intel's new "Flexpoint" format, which will be used inside the arithmetic nodes of the chip. Intel says Flexpoint can increase arithmetic throughput on Lake Crest by up to 10x, and the chip will use an MCM (Multi-Chip Module) design.
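Intel has not published full details of Flexpoint, but the general idea behind shared-exponent formats like it can be sketched: store a whole tensor as integer mantissas plus a single shared exponent, so the hardware can use cheap integer arithmetic instead of full floating point. The bit width and exponent selection below are assumptions for illustration only:

```python
import numpy as np

def to_shared_exponent(t, mantissa_bits=16):
    # Pick ONE exponent for the whole tensor so the largest value
    # fits in the mantissa range (a simplification, not Intel's spec).
    max_mag = np.max(np.abs(t))
    exp = int(np.ceil(np.log2(max_mag))) - (mantissa_bits - 1)
    mantissas = np.round(t / 2.0 ** exp).astype(np.int32)
    return mantissas, exp

def from_shared_exponent(mantissas, exp):
    # Reconstruct approximate floating-point values.
    return mantissas.astype(np.float64) * 2.0 ** exp

vals = np.array([0.5, -1.25, 3.0, 0.001])
m, e = to_shared_exponent(vals)
recovered = from_shared_exponent(m, e)
print(np.max(np.abs(recovered - vals)))   # quantization error stays small
```

The win is that all the multiply-accumulates inside a layer become integer operations, with the exponent handled once per tensor rather than once per value.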
Better yet, it will have 32GB of HBM2 on board, with a huge 8Tbps of memory bandwidth across the chip.
Intel will be using "proprietary inter-chip links" which the company says are "up to 20x faster than PCIe".
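A quick unit-conversion sanity check puts those figures in context. Note that the PCIe 3.0 x16 baseline (~16 GB/s) is my assumption; Intel has not said which PCIe generation the "20x" comparison refers to:

```python
hbm2_gb = 32                        # quoted HBM2 capacity
bandwidth_tbps = 8                  # quoted memory bandwidth, terabits/s
bandwidth_gb_per_s = bandwidth_tbps * 1000 / 8        # 8 Tbps = 1000 GB/s (1 TB/s)
sweep_time_ms = hbm2_gb / bandwidth_gb_per_s * 1000   # time to read all HBM2 once

pcie3_x16_gb_per_s = 16             # ASSUMED baseline (PCIe 3.0 x16)
interchip_gb_per_s = 20 * pcie3_x16_gb_per_s          # "up to 20x faster than PCIe"
print(bandwidth_gb_per_s, sweep_time_ms, interchip_gb_per_s)
```

In other words, the chip could sweep its entire 32GB of HBM2 in roughly 32ms, and under the PCIe 3.0 assumption the inter-chip links would land somewhere around 320GB/s.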
Diane Bryant, Executive Vice President and GM of the Data Center Group at Intel explains: "We expect the Intel Nervana platform to produce breakthrough performance and dramatic reductions in the time to train complex neural networks. Before the end of the decade, Intel will deliver a 100-fold increase in performance that will turbocharge the pace of innovation in the emerging deep learning space".