This is what 2.5D aka CoWoS advanced packaging looks like: GPU logic die, HBM, interposer

This is what 2.5D advanced packaging (CoWoS) looks like, with ASE showing off an incredible model in Taiwan of how the components are bound together.



One of the most amazing things that human civilization has created is the silicon chip, and semiconductor technology isn't just about the 'CPU' or the 'GPU' anymore... advanced packaging technology is now absolutely bleeding-edge, and there's an awesome new way to visualize just how amazing it is. Check this out:

ASE showed off a rather awesome model in Taiwan recently, which demonstrates the various components of advanced packaging and how they are bound together through CoWoS (Chip on Wafer on Substrate). The centerpiece is the XPU or GPU logic die (this does the calculations), while surrounding that chip are multiple stacks of HBM (High Bandwidth Memory), which is made by SK hynix, Samsung, and Micron.

All of this delicious semiconductor tech is packaged together with microbumps onto the copper-colored RDL, while underneath, the silver-colored component is the silicon interposer. After that, everything is placed onto the substrate itself, with the likes of TSMC and Samsung working towards "radically new" semiconductor packaging technology called panel-level packaging. You can read more about that in the links below:
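The stack described above can be sketched, top to bottom, as a simple ordered list. This is purely a mnemonic built from the layer names in the article, not a manufacturing specification:

```python
# Illustrative sketch of the 2.5D (CoWoS) package described above,
# ordered from the dies at the top down to the substrate at the bottom.
# Layer names follow the article; this is a mnemonic, not a process spec.
COWOS_STACK = [
    "GPU/XPU logic die + surrounding HBM stacks",  # compute die plus memory dies
    "microbumps",                                  # tiny die-to-package connections
    "RDL (redistribution layer)",                  # the copper-colored routing layer
    "silicon interposer",                          # the silver-colored component underneath
    "package substrate",                           # everything is placed onto this
]

def describe(stack):
    """Return the stack as a single top-to-bottom string."""
    return " -> ".join(stack)

print(describe(COWOS_STACK))
```

Reading the list top to bottom mirrors ASE's model: compute and memory dies first, then the interconnect layers, then the substrate that carries the whole assembly.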

In a breakdown of the 2.5D aka CoWoS advanced packaging technique, Clark Tang explains that this complex process is done to create a combined chip that can access different capabilities at a very fast rate. Most people think it's just the GPU doing the work, but the HBM memory is just as important, if not more so, than an ultra-fast AI chip, just as super-fast GDDR6, GDDR6X, and the upcoming GDDR7 are to consumer graphics cards. The faster the memory and the wider the bus, the better for high-res, high-FPS gaming.

HBM3 is the leading memory for AI GPUs right now, with HBM3E debuting inside of NVIDIA's beefed-up Hopper H200 AI GPU, and its new Blackwell AI GPUs. HBM4 and HBM4E will debut in 2025 and 2026, with NVIDIA's next-gen Rubin R100 AI GPU.

AI workloads froth over high memory bandwidth, with the system becoming constrained by how fast the XPU can read and write its AI calculations to memory.