AMD Instinct MI300X AI accelerator teased: 153 billion transistors, 192GB of HBM3 memory

AMD's new Instinct MI300X accelerator will be unveiled soon, taking the AI accelerator fight to NVIDIA and its H100 and H200 Hopper AI GPUs.


AMD will be unveiling its new Instinct MI300X and MI300A accelerators at its upcoming "Advancing AI" event penciled in for December 6, with some truly awesome specs, including up to 192GB of HBM3 memory.

AMD's new Instinct MI300X AI accelerator


NVIDIA's just-announced H200 Hopper AI GPU features 141GB of HBM3e memory from Micron, up from the 80GB of HBM3 used on its industry-leading AI performance monster, the H100 AI GPU. AMD will trump that with a whopping 192GB of HBM3 memory, which should make for some interesting performance battles between NVIDIA and AMD, given that AI workloads demand lots of fast on-board memory.

AMD will be using a chiplet design for its MI300X and MI300A accelerators, compared to NVIDIA's monolithic H100 and H200 Hopper AI GPUs, using advanced packaging technologies at TSMC (Taiwan Semiconductor Manufacturing Company). AMD's new Instinct MI300X features a blend of 5nm and 6nm IP with a huge 153 billion transistors.


Inside, the new AMD Instinct MI300X accelerator is built on a main interposer with a passive die that houses the interconnect layer, using a next-gen Infinity Fabric solution. The interposer carries a total of 28 dies: 8 x HBM3 packages, 16 dummy dies that sit between the HBM packages, and 4 active dies, with each active die featuring 2 compute dies.

Each of the GCDs (Graphics Compute Dies) is based on the CDNA 3 GPU architecture, with each GCD featuring 40 Compute Units, totaling 2560 GPU cores. AMD will have 8 x GCDs which, combined, add up to 320 Compute Units and a total of 20,480 GPU cores.
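The shader math above is easy to sanity-check. A minimal sketch, assuming CDNA 3 keeps 64 stream processors ("GPU cores") per Compute Unit as on CDNA 2:

```python
# Back-of-the-envelope check of the MI300X shader counts.
CUS_PER_GCD = 40      # Compute Units per Graphics Compute Die
CORES_PER_CU = 64     # stream processors per CU (assumed, as on CDNA 2)
NUM_GCDS = 8          # GCDs on the full MI300X package

cores_per_gcd = CUS_PER_GCD * CORES_PER_CU   # 2560 cores per GCD
total_cus = NUM_GCDS * CUS_PER_GCD           # 320 Compute Units
total_cores = NUM_GCDS * cores_per_gcd       # 20,480 GPU cores

print(cores_per_gcd, total_cus, total_cores)  # 2560 320 20480
```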

AMD will have 50% more memory capacity than its current Instinct MI250X accelerator, which tops out at 128GB of HBM2e memory; the new Instinct MI300X bumps that up to a huge 192GB of HBM3. AMD is using 8 x HBM3 stacks, with each stack being 12-Hi and built from 16Gb ICs with 2GB of capacity per IC, or 24GB of HBM3 per stack. 8 x HBM3 stacks of 24GB = 192GB of HBM3 memory in total.
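The capacity figures above follow directly from the stack configuration. A quick sketch of that arithmetic:

```python
# How 8 x 12-Hi stacks of 16Gb HBM3 ICs add up to 192GB.
GBITS_PER_IC = 16    # 16Gb DRAM die
STACK_HEIGHT = 12    # 12-Hi stack (12 DRAM dies per stack)
NUM_STACKS = 8       # HBM3 packages on the interposer

gb_per_ic = GBITS_PER_IC // 8            # 2GB per IC
gb_per_stack = gb_per_ic * STACK_HEIGHT  # 24GB per stack
total_gb = gb_per_stack * NUM_STACKS     # 192GB total

print(gb_per_stack, total_gb)  # 24 192
```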


We have a gigantic 5.2TB/sec of memory bandwidth from the Instinct MI300X, up from the 4.8TB/sec that NVIDIA's new H200 AI GPU is capable of with its 141GB of HBM3e memory.

We are looking at a 750W TDP on the Instinct MI300X accelerator, a 50% power bump over the Instinct MI250X and 50W more than NVIDIA's new H200 GPU.

AMD Instinct MI300 AI accelerator highlights (MI300X + MI300A):

  • First Integrated CPU+GPU Package
  • Aiming Exascale Supercomputer Market
  • AMD MI300A (Integrated CPU + GPU)
  • AMD MI300X (GPU Only)
  • 153 Billion Transistors
  • Up To 24 Zen 4 Cores
  • CDNA 3 GPU Architecture
  • Up To 192 GB HBM3 Memory
  • Up To 8 Chiplets + 8 Memory Stacks (5nm + 6nm process)
AMD's new Instinct MI300X AI accelerator with CDNA 3 dies


The second of the new Instinct MI300 accelerators is the Instinct MI300A, where one of the active dies has its 2 x CDNA 3 GCDs cut out and replaced with 3 x Zen 4 CCDs, each with its own separate pool of cache and core IP. There are 8 cores and 16 threads per CCD, for a total of 24 cores and 48 threads on the active die. There's also 24MB of L2 cache (1MB per core) and a separate pool of cache (32MB per CCD).
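The CPU-side tallies can be sketched the same way. Note the 96MB total for the separate per-CCD cache pool is our own multiplication from the per-CCD figure, not a number stated by AMD:

```python
# MI300A CPU-side core and cache tallies from the figures above.
CCDS = 3              # Zen 4 CCDs replacing the two GCDs
CORES_PER_CCD = 8     # 8 cores / 16 threads per CCD
L2_PER_CORE_MB = 1    # 1MB of L2 per core
POOL_PER_CCD_MB = 32  # separate per-CCD cache pool

cores = CCDS * CORES_PER_CCD          # 24 cores
threads = cores * 2                   # 48 threads with SMT
l2_total_mb = cores * L2_PER_CORE_MB  # 24MB of L2
pool_total_mb = CCDS * POOL_PER_CCD_MB  # 96MB across the three CCDs

print(cores, threads, l2_total_mb, pool_total_mb)  # 24 48 24 96
```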


AMD will be working with ecosystem enablers and partners on its new MI300 AI accelerators in exciting 8-way configurations, with the mezzanine-style modules connecting directly to the motherboard. It all looks absolutely delicious in that form, but there'll be PCIe form factors, too.

NEWS SOURCE: wccftech.com

Anthony joined the TweakTown team in 2010 and has since reviewed hundreds of graphics cards. Anthony is a long-time PC enthusiast with a passion of hate for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.
