OpenAI has just received one of the first engineering builds of the NVIDIA DGX B200 AI server, posting a picture of the new delivery on X.
Inside, the NVIDIA DGX B200 is a unified AI platform for training, fine-tuning, and inference built around NVIDIA's new Blackwell B200 AI GPUs. Each DGX B200 system packs 8 x B200 AI GPUs with up to 1.4TB of HBM3e memory and up to 64TB/sec of memory bandwidth. NVIDIA's new DGX B200 AI server delivers 72 petaFLOPS of training performance and 144 petaFLOPS of inference performance.
OpenAI CEO Sam Altman is well aware of the advancements of NVIDIA's new Blackwell GPU architecture, recently saying: "Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We're excited to continue working with NVIDIA to enhance AI compute."