The ASUS AI POD, which features the NVIDIA GB200 NVL72 platform, has reached its latest milestone: production is ramping up for a scheduled March 2025 ship date. This thing is an absolute beast for AI performance and cutting-edge deep learning, as it's built around the NVIDIA GB200 Grace Blackwell Superchip and fifth-generation NVIDIA NVLink interconnect technology.
Broken down, that's 36 NVIDIA Grace CPUs, 72 NVIDIA Blackwell Tensor Core GPUs, and fifth-generation NVIDIA NVLink Switches delivering low-latency communication with a capacity of 14.4 TB/s. Each of the 72 Blackwell GPUs contains two dies connected by a 10 terabytes per second (TB/s) chip-to-chip interconnect, offering 30x the LLM inference performance and 4x the LLM training performance of the NVIDIA H100 Tensor Core GPU.
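For anyone who wants to sanity-check those headline figures, here's a minimal back-of-envelope sketch in Python. The superchip layout (one Grace CPU paired with two Blackwell GPUs) and the 1.8 TB/s of fifth-generation NVLink bandwidth per GPU are assumptions taken from NVIDIA's published GB200 specifications rather than from ASUS's announcement.

```python
# Back-of-envelope math for the GB200 NVL72 numbers quoted above.
# Figures below are assumptions from NVIDIA's published GB200 specs:
# one Grace CPU + two Blackwell GPUs per GB200 superchip, and
# 1.8 TB/s of fifth-generation NVLink bandwidth per GPU.

SUPERCHIPS = 36            # GB200 superchips in one NVL72 rack (assumed)
GPUS_PER_SUPERCHIP = 2     # Blackwell GPUs paired with each Grace CPU (assumed)
NVLINK_PER_GPU_TBS = 1.8   # fifth-gen NVLink bandwidth per GPU, in TB/s (assumed)

total_gpus = SUPERCHIPS * GPUS_PER_SUPERCHIP             # 72 GPUs
aggregate_nvlink_tbs = total_gpus * NVLINK_PER_GPU_TBS   # ~129.6 TB/s rack-wide

print(f"GPUs: {total_gpus}, aggregate NVLink: {aggregate_nvlink_tbs:.1f} TB/s")
```

That roughly 130 TB/s result matches the aggregate NVLink bandwidth NVIDIA quotes for the NVL72 rack as a whole, while the 14.4 TB/s figure above appears to correspond to the capacity of an individual NVLink Switch chip.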
The ASUS AI POD also supports liquid-to-air and liquid-to-liquid cooling, and it's powerful enough to train a trillion-parameter language model and run AI inference in real time. The cooling setup includes CPU/GPU cold plates and coolant distribution hardware to improve power efficiency.
The cabinet-sized AI powerhouse is aimed at data centers and anyone investing seriously in AI training and inference at scale. ASUS is treating it as a proof of concept (POC) and is inviting "innovators who are eager to harness the full potential of AI computing" to get their own ASUS AI POD free of charge. If you're interested in participating in the ASUS AI POD Online Test Program, you can fill out this form.
ASUS offers a full-service solution spanning design, build, implementation, testing, and on-site deployment, and it promises seamless integration with cloud applications along with end-to-end support. For more information on the ASUS AI POD, visit the official product page.