Dell PowerEdge XE9712: NVIDIA GB200 NVL72-based AI GPU cluster for LLM training, inference

Dell's new PowerEdge XE9712 with NVIDIA GB200 NVL72 AI server: the future of high-performance dense acceleration for real-time AI inference.


Dell has just unleashed its new PowerEdge XE9712 with NVIDIA GB200 NVL72 AI servers, promising up to 30x faster real-time LLM inference than the H100 AI GPU.


Dell Technologies' new AI Factory with NVIDIA sees the GB200 NVL72 AI server cabinet delivering up to 30x faster real-time LLM performance and lightning-fast connectivity, with 72 x B200 AI GPUs connected and acting as one through NVLink technology. Dell points out that the liquid-cooled system maximizes your data center power utilization, while rapid deployment will get your AI cluster running at scale with what Dell calls a "white glove experience".

Dell says the system is up to 25x more efficient than the Hopper H100, delivers its highest LLM training performance delta at 8K+ GPU clusters, and offers 30x faster real-time trillion-parameter LLM inference compared to the H100 AI GPU.

Arthur Lewis, president of the Infrastructure Solutions Group at Dell Technologies, said: "Today's data centers can't keep up with the demands of AI, requiring high density compute and liquid cooling innovations with modular, flexible and efficient designs. These new systems deliver the performance needed for organizations to remain competitive in the fast-evolving AI landscape."

Dell explains that, as part of the Dell AI Factory with NVIDIA, its new PowerEdge XE9712 offers "high-performance, dense acceleration for LLM training and real-time inferencing of large-scale AI deployments," and is "designed for industry-leading GPU density with NVIDIA GB200 NVL72."

Dell continues: "This platform connects up to 36 NVIDIA Grace CPUs with 72 NVIDIA Blackwell GPUs in a rack-scale design. The 72 GPU NVLink domain acts as a single GPU for up to 30x faster real-time trillion-parameter LLM inferencing. The liquid-cooled NVIDIA GB200 NVL72 is up to 25x more efficient than the air-cooled NVIDIA H100-powered systems".
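
To picture what "72 GPUs acting as a single GPU" means from a software point of view, here is a minimal sketch (not Dell or NVIDIA reference code) of initializing one NVLink-connected process group and running a collective across it with PyTorch and NCCL. The file name and the assumption of one process per GPU launched via torchrun are illustrative; the code simply uses whatever world size it is launched with, which would be 72 for a fully populated NVL72 rack.

```python
# Minimal sketch: treat every GPU in the NVLink domain as one
# tensor-parallel group. Assumes PyTorch + NCCL, one process per GPU,
# launched with torchrun (which sets RANK, WORLD_SIZE, LOCAL_RANK).
import os
import torch
import torch.distributed as dist

def init_nvlink_domain():
    # NCCL rides on top of NVLink/NVSwitch where available, so collectives
    # across the GPU domain can stay inside the rack's NVLink fabric.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return dist.get_rank(), dist.get_world_size()

if __name__ == "__main__":
    rank, world_size = init_nvlink_domain()
    # Each rank would hold a shard of the model's weights; an all-reduce
    # over the group combines partial results, which is what lets many
    # GPUs behave like one large accelerator during inference.
    shard_output = torch.ones(1, device="cuda") * rank
    dist.all_reduce(shard_output, op=dist.ReduceOp.SUM)
    if rank == 0:
        print(f"all-reduce across {world_size} GPUs:", shard_output.item())
    dist.destroy_process_group()
```

On a single node you would launch it with something like `torchrun --nproc_per_node=<number of GPUs> nvl72_sketch.py`; the actual sharding strategy for a trillion-parameter model is, of course, far more involved than this placeholder all-reduce.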

The company adds that "building on the success of the XE9680 with 8-way HGX GPUs, the XE9712 offers faster LLM performance with 72 GPUs acting as one in a single rack. It will be deployed as super PODs at scale, complete with full networking between racks, supported by Dell's turnkey rack-scale deployment services, supply chain, and logistics".

  • 25x more efficient than the H100
  • 30x faster real-time trillion-parameter LLM inference vs. the H100
  • Liquid-cooled architecture ensures efficient heat management, enabling higher performance and faster data processing
NEWS SOURCE: dell.com


Anthony joined the TweakTown team in 2010 and has since reviewed hundreds of graphics cards. Anthony is a long-time PC enthusiast with a passionate hatred for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.
