IBM has just unveiled its new Telum II processor and Spyre AI accelerator, which it plans to use inside its next-generation IBM Z mainframe systems to power AI workloads.
The company detailed the architecture of both chips, which are designed for AI workloads on the next-gen IBM Z mainframes. The new machines will accelerate traditional AI models as well as large language models (LLMs), using a new ensemble method that combines the outputs of multiple models.
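To make the ensemble idea concrete, here is a minimal sketch of combining a fast traditional model's fraud score with a larger language-model-based score. All function names, weights, and the scoring logic are illustrative assumptions, not IBM's implementation.

```python
# Hypothetical ensemble sketch: blend a compact traditional model with a
# larger LLM-style model. Everything here is an illustrative stand-in.

def traditional_score(features):
    # Stand-in for a small, fast fraud model (e.g. one that could run on
    # an on-chip AI accelerator). Returns a risk score in [0, 1].
    return min(1.0, sum(features) / len(features))

def llm_score(features):
    # Stand-in for a larger model that would be offloaded to a dedicated
    # accelerator card. Also returns a risk score in [0, 1].
    return 0.8 if max(features) > 0.9 else 0.2

def ensemble_score(features, w_traditional=0.6, w_llm=0.4):
    """Weighted ensemble of the two model outputs (weights are assumed)."""
    return (w_traditional * traditional_score(features)
            + w_llm * llm_score(features))

print(round(ensemble_score([0.2, 0.95, 0.4]), 2))  # blended risk score
```

The weighting lets the cheap model handle most of the decision while the larger model contributes when its signal differs.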
IBM's new Telum II processor features eight high-performance cores running at 5.5GHz, with 36MB of L2 cache per core and a 40% increase in on-chip cache capacity, for a total of 360MB. The virtual level-4 cache of 2.88GB per processor drawer is likewise a 40% increase over the previous generation. The integrated AI accelerator enables low-latency, high-throughput in-transaction AI inferencing, such as enhancing fraud detection while a financial transaction is in flight, and delivers a fourfold increase in compute capacity per chip over the previous generation.
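In-transaction inferencing means the fraud check runs synchronously in the transaction path, under a tight latency budget, rather than as an after-the-fact batch job. A minimal sketch of that pattern follows; the budget, threshold, and scoring rule are assumptions for illustration only.

```python
# Illustrative in-transaction scoring: the risk model is called inline,
# and its latency is measured against an assumed per-transaction budget.
import time

LATENCY_BUDGET_MS = 1.0   # assumed budget; real systems set their own
FRAUD_THRESHOLD = 0.5     # assumed decision threshold

def score_transaction(amount, avg_amount):
    # Stand-in for an accelerator-backed model call: flag amounts far
    # above the account's historical average.
    return min(1.0, amount / (10 * avg_amount))

def process_transaction(amount, avg_amount):
    start = time.perf_counter()
    risk = score_transaction(amount, avg_amount)
    elapsed_ms = (time.perf_counter() - start) * 1000
    decision = "review" if risk >= FRAUD_THRESHOLD else "approve"
    return decision, risk, elapsed_ms

decision, risk, ms = process_transaction(amount=5200.0, avg_amount=80.0)
print(decision, round(risk, 2))
```

The point of the on-chip accelerator is that the model call stays cheap enough to sit inside this synchronous path.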
The new I/O Acceleration Unit, a DPU integrated directly into the Telum II chip, is designed to improve data handling with 50% greater I/O density. This advancement enhances the overall efficiency and scalability of IBM Z, making it well suited to the large-scale AI workloads and data-intensive applications of today's businesses.
The Spyre Accelerator is a purpose-built, enterprise-grade accelerator offering scalable capability for complex AI models and generative AI use cases. A cluster of eight cards in a standard I/O drawer provides up to 1TB of memory to support AI model workloads across the mainframe, with each card designed to consume no more than 75W. Each chip has 32 compute cores supporting int4, int8, fp8, and fp16 datatypes for both low-latency and high-throughput AI applications.
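Low-precision datatypes like int8 work by mapping model weights and activations onto a small integer range via a scale factor, trading a little accuracy for much cheaper arithmetic and memory traffic. Below is a generic sketch of symmetric int8 quantization; it illustrates the technique in general, not Spyre's actual numeric pipeline.

```python
# Generic symmetric int8 quantization sketch (illustrative only).

def quantize_int8(values):
    """Map floats to int8 using a per-tensor symmetric scale."""
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid zero scale
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values from the quantized integers.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
print(q)        # integer codes in [-128, 127]
print(dequantize(q, scale))  # approximate reconstruction of the floats
```

Smaller datatypes such as int4 and fp8 follow the same principle with fewer bits per value, which is what lets an accelerator pack more model state into a fixed memory and power budget.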