NVIDIA has confirmed its beefed-up Blackwell Ultra and next-gen Vera Rubin AI architectures are on track, lining up with recent reports that we'll get a huge info dump on the company's new AI GPUs at GTC 2025 in a few weeks' time.

During the company's recent earnings call, NVIDIA CEO Jensen Huang confirmed to analysts that the recent GB200 AI server yield issues won't affect the company's annual release cadence, and analysts also asked Jensen how NVIDIA would manage Blackwell Ultra and Rubin launching in such close succession.
Jensen said: "Yes. Blackwell Ultra is second half. As you know, the first Blackwell was we had a hiccup that probably cost us a couple of months. We're fully recovered, of course. The team did an amazing job recovering and all of our supply chain partners and just so many people helped us recover at the speed of light".
- Read more: SK hynix hits 70% yield on HBM4 12-Hi for NVIDIA's next-gen Rubin AI GPUs
- Read more: NVIDIA's next-gen Rubin AI GPU rumored for 2H 2025: more AI domination
- Read more: NVIDIA's next-gen Rubin AI GPU pushed up 6 months ahead of schedule with HBM4
He continued: "And so, now we've successfully ramped up production of Blackwell. But that doesn't stop the next train. The next train is on an annual rhythm and Blackwell Ultra with new networking, new memories, and of course, new processors, and all of that is coming online".
Jensen talked about the company's next-gen Vera Rubin AI architecture: "And the click after that is called Vera Rubin and all of our partners are getting up to speed on the transition of that and so preparing for that transition. And again, we're going to provide a big, huge step-up".
- Read more: AI servers in future: 'rack density' of 1000kW+ w/ NVIDIA Rubin Ultra AI GPUs
- Read more: SK hynix says NVIDIA CEO wants HBM4 chips 6 months early
- Read more: NVIDIA's next-gen GB300 AI platform in mid-2025: more perf than GB200, fully liquid-cooled
NVIDIA's next-gen Rubin AI GPUs will use the new bleeding-edge HBM4 memory standard, which SK hynix, Samsung, and Micron are all hard at work on right now. HBM3E is used on B200 and GB200, while B300 and GB300 will also use HBM3E but bump up the memory capacity per GPU.