NVIDIA's next-generation Rubin AI GPU architecture is rumored to be pulled forward by 6 months: TSMC 3nm process expected, with ultra-fast next-gen HBM4 memory.
The new Rubin AI GPU architecture is the successor to the Blackwell GPU architecture, which powers the current fleet of B200 and GB200 chips as well as the upcoming GB300 series AI GPUs that we're hearing more and more about lately. In a new report from UDN, we're hearing that NVIDIA is already working with supply chain partners in Taiwan on the Rubin AI GPU architecture and its new R100-powered AI servers.
Rubin was originally scheduled for 2026, but UDN's sources say the company has kicked off Rubin development early so that the AI boom can carry over from one AI GPU generation to the next (Blackwell to Rubin, and so on).
- Read more: AI servers in future: 'rack density' of 1000kW+ w/ NVIDIA Rubin Ultra AI GPUs
- Read more: SK hynix says NVIDIA CEO wants HBM4 chips 6 months early
- Read more: NVIDIA's next-gen GB300 AI platform in mid-2025: more perf than GB200, fully liquid-cooled
TSMC is a key part of the triangular alliance between NVIDIA, SK hynix, and TSMC, and is expected to expand its CoWoS advanced packaging capacity in 2026 to handle the heavy Rubin chip demand. TSMC plans to increase CoWoS production capacity to 80,000 wafers per month by Q4 2025 in preparation for Rubin.
- Read more: TSMC's 3nm project list grows: AMD MI350 series, NVIDIA Rubin AI GPU
- Read more: SK hynix HBM4 tape out in October: ready for NVIDIA's next-gen Rubin AI GPU
- Read more: NVIDIA's next-gen Rubin, Rubin Ultra, Blackwell Ultra AI GPUs: also Vera CPUs
- Read more: NVIDIA's next-gen Vera Rubin AI GPU rumored for mid-2025, compete with AMD Instinct MI400X