Intel is ramping up to the release of its next-gen Xeon "Emerald Rapids" CPU family, with new leaks teasing the flagship 64-core, 128-thread Xeon 8592+ processor.

The new Intel Xeon 8592+ processor was spotted in engineering sample (ES) form with a 1.9GHz base clock and a 3.9GHz boost clock, but since this is an ES chip, we should expect higher clocks once the processor reaches retail. The base TDP is 350W, configurable up to 420W, with peak power reaching 500W. The pre-release sample reportedly spiked to 922W of peak power, but I hope we don't see that number in retail form.
Intel's new Emerald Rapids-based Xeon 8592+ processor will have 448MB of combined cache (128MB of L2 + 320MB of L3), a near doubling of the 232MB of cache on the previous-gen flagship Xeon 8490H "Sapphire Rapids" CPU.
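The cache math checks out against the leaked totals. A quick sketch, assuming 2MB of L2 per core for Emerald Rapids (that per-core figure is my assumption, not from the leak itself):

```python
# Sanity check on the leaked Xeon 8592+ cache figures.
L2_PER_CORE_MB = 2      # assumed per-core L2 for Emerald Rapids
CORES_8592P = 64        # leaked core count
L3_8592P_MB = 320       # leaked L3 total

l2_total = L2_PER_CORE_MB * CORES_8592P   # 128MB of L2 across 64 cores
combined = l2_total + L3_8592P_MB         # 448MB combined, matching the leak

SAPPHIRE_8490H_MB = 232  # previous-gen flagship's combined cache
print(f"Xeon 8592+ combined cache: {combined}MB")
print(f"Generational increase: {combined / SAPPHIRE_8490H_MB:.2f}x")  # ~1.93x
```

That ~1.93x ratio is the "near doubling" over the Xeon 8490H.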
The 64-core, 128-thread Xeon 8592+ leak was joined by the Xeon 8558U, a new 48-core, 96-thread Xeon "Emerald Rapids" processor from Intel that recently appeared on Geekbench. We can expect a base clock of 2.0GHz and 356MB of combined cache, over 150MB more cache than its Sapphire Rapids counterpart.

As for the benefits of the new 5th Generation Xeon Scalable platform, we can expect up to 3x the last-level cache (LLC), faster DDR5 memory support, CXL Type 3 support, and a double-digit gain in performance per watt. Intel will also push core counts higher with its new Emerald Rapids processors than Sapphire Rapids allowed.

Intel is promising energy-efficient compute performance, with an optimized performance mode that delivers up to 17% better general-purpose performance-per-watt by offloading work from the CPU cores to Intel Accelerator Engines.
The world of AI is, of course, front and center: Intel is pushing its "AI Everywhere" message, billing these as the best CPUs for AI. We can expect a "huge" performance boost across AI inference and training workloads, Intel Advanced Matrix Extensions (AMX) for built-in AI acceleration, and out-of-the-box deployment with optimized software stacks. Intel has AI covered with Emerald Rapids.
Sounds good, right?
Well, the elephant in the room is that AMD has its just-announced Ryzen Threadripper and Ryzen Threadripper PRO 7000 series CPUs, as well as its core-busting EPYC processors. AMD has more cores, more threads, and higher clocks, and it's gaining momentum in the server, data center, and AI worlds.
- Read more: AMD Ryzen Threadripper PRO 7000 WX series CPUs top out at 96 cores, 192 threads Zen 4 for $9999
- Read more: Intel's next-gen Emerald Rapids CPU: 64 cores, 80 PCIe 5.0 lanes, and more
- Read more: AMD EPYC 9684X 'Genoa-X' CPU: 96C/192T, Zen 4 on 5nm, 1.1GB of cache
- Read more: AMD EPYC Bergamo CPUs: Zen 4 + 128 cores, 256 threads on 5nm TSMC
AMD has its flagship Ryzen Threadripper PRO 7995WX processor with 96 cores and 192 threads of Zen 4-powered CPU performance available for HEDT platforms... on your desk. The server, data center, and AI side of things scales even higher, with AMD's EPYC family of processors led by the AMD EPYC "Bergamo" CPU and its 128 cores and 256 threads of CPU power. Intel is beaten before it has even launched its new Xeon "Emerald Rapids" processors.