Panmnesia's new 'CXL Protocol' will have AI GPUs using memory from DRAM, SSDs with low latency

Panmnesia, a KAIST startup, has unveiled cutting-edge IP that adds external memory to AI GPUs using the CXL protocol over PCIe -- a potential game-changer.


Panmnesia is a company you probably haven't heard of until today, but the KAIST startup has unveiled cutting-edge IP that adds external memory to AI GPUs using the CXL protocol over PCIe, unlocking new levels of memory capacity for AI workloads.


The current fleets of AI GPUs and AI accelerators use their on-board memory -- usually super-fast HBM -- but it comes in limited quantities, like the 80GB on the current NVIDIA Hopper H100 AI GPU. AMD and NVIDIA's next-gen AI chips will push that further: up to 141GB of HBM3E on NVIDIA's H200 AI GPU, up to 192GB of HBM3E on NVIDIA's B200 AI GPU, and up to 192GB of HBM3 on AMD's Instinct MI300X.

But now, Panmnesia's new CXL IP lets GPUs access memory from DRAM and SSDs, expanding capacity beyond their built-in HBM... very nifty. The KAIST startup bridges the connection with CXL over standard PCIe links, which should make mass adoption of the new tech straightforward. Regular AI accelerators lack the subsystems needed to connect to and use CXL for memory expansion directly, so they rely on solutions like UVM (Unified Virtual Memory), which is slower and largely defeats the purpose... which is where Panmnesia's new IP comes into play.
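To see why fault-driven UVM undercuts the idea while direct CXL load/store does not, here's a toy latency model. Every number and name in it is an illustrative assumption, not a measured or vendor figure:

```python
# Toy model: cost of the first access to a 4 KiB page of external memory.
# All latencies below are illustrative assumptions, not vendor figures.

PAGE_FAULT_OVERHEAD_NS = 20_000   # assumed cost of a UVM page fault + driver round trip
PAGE_MIGRATION_NS = 5_000         # assumed cost of migrating one 4 KiB page over PCIe
CACHELINES_PER_PAGE = 4096 // 64  # 64 cache lines per 4 KiB page

def uvm_first_touch_ns():
    # UVM: the first GPU access faults, the driver migrates the whole page,
    # and only then can the data be read locally from HBM.
    return PAGE_FAULT_OVERHEAD_NS + PAGE_MIGRATION_NS

def cxl_page_read_ns(round_trip_ns=99):
    # CXL load/store: each cache line is fetched directly from the expander,
    # with no page fault and no driver involvement.
    return CACHELINES_PER_PAGE * round_trip_ns

print(uvm_first_touch_ns(), cxl_page_read_ns())
```

The model is crude -- UVM amortizes its cost if the page is reused many times, while direct CXL access tends to win on sparse, irregular access patterns -- but it shows where the fault-and-migrate overhead comes from.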

Panmnesia benchmarked its own "CXL-Opt" solution against prototypes from Samsung and Meta, which it labels "CXL-Proto". CXL-Opt has much lower round-trip latency -- the time taken for data to travel from the GPU to the memory and back again. Panmnesia's CXL-Opt achieved two-digit nanosecond latency, versus the 250ns of its competitors. CXL-Opt's execution time is also far lower than UVM's, with IPC performance improvements of 3.22x over UVM. Impressive.
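Putting those figures side by side (taking 99ns as the upper bound of "two-digit nanosecond" latency, which is an assumption on our part):

```python
# Figures from the benchmark description above.
cxl_opt_ns = 99        # "two-digit nanosecond" round trip; 99 ns taken as the worst case
cxl_proto_ns = 250     # round-trip latency of the Samsung/Meta "CXL-Proto" prototypes
ipc_gain_over_uvm = 3.22

latency_reduction = 1 - cxl_opt_ns / cxl_proto_ns
print(f"Round-trip latency cut by at least {latency_reduction:.0%} vs CXL-Proto")
print(f"IPC: {ipc_gain_over_uvm}x that of UVM")
```

In other words, even in the worst case the claimed round trip is roughly 60% shorter than the competing prototypes'.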


The new CXL-Opt solution from Panmnesia could make big waves in the AI GPU and AI accelerator market, offering a middle ground between simply stacking ever more HBM and moving to a far more efficient memory architecture. Panmnesia is one of the first with this kind of CXL IP, so it'll be interesting to see where things go from here.


Anthony joined the TweakTown team in 2010 and has since reviewed hundreds of graphics cards. Anthony is a long-time PC enthusiast with a passionate dislike for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted for using a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.
