Meta's next-gen in-house AI chip is made on TSMC's 5nm process, with LPDDR5 RAM, not HBM

Meta's next-gen MTIA AI processor is made on TSMC 5nm, up to 1.35GHz frequency, PCIe Gen5 x8 interface, to fight NVIDIA in the cloud business.


Meta has just teased its next-gen AI chip -- MTIA -- an upgrade over its current MTIA v1 chip. The new MTIA chip is made on TSMC's newer 5nm process node, while the original MTIA chip was made on 7nm.

The new Meta Training and Inference Accelerator (MTIA) chip is "fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity" for Meta's unique requirements. We've seen the best AI GPUs on the planet using HBM memory -- with HBM3 used on NVIDIA's Hopper H100 and AMD's Instinct MI300 series AI chips -- while Meta is using low-power LPDDR5 DRAM instead of HBM.

The social networking giant's original MTIA chip was the company's first-generation AI inference accelerator, designed in-house with Meta's AI workloads in mind. The company says its deep learning recommendation models are "improving a variety of experiences across our products".

Meta's long-term goal for its AI inference silicon is to provide the most efficient architecture for the company's unique workloads. Meta adds that as AI workloads become increasingly important to its products and services, the efficiency of its MTIA chips will improve its ability to provide the best experiences for users across the planet.

Meta explains on its website for MTIA: "This chip's architecture is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity for serving ranking and recommendation models. In inference we need to be able to provide relatively high utilization, even when our batch sizes are relatively low. By focusing on providing outsized SRAM capacity, relative to typical GPUs, we can provide high utilization in cases where batch sizes are limited and provide enough compute when we experience larger amounts of potential concurrent work".

"This accelerator consists of an 8x8 grid of processing elements (PEs). These PEs provide significantly increased dense compute performance (3.5x over MTIA v1) and sparse compute performance (7x improvement). This comes partly from improvements in the architecture associated with pipelining of sparse compute. It also comes from how we feed the PE grid: We have tripled the size of the local PE storage, doubled the on-chip SRAM and increased its bandwidth by 3.5X, and doubled the capacity of LPDDR5".
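The generation-over-generation multipliers Meta quotes above can be collected in one place for a quick comparison. This is a minimal sketch that simply tabulates the figures from Meta's published description -- the metric names are informal labels chosen here for illustration, not official spec-sheet terms:

```python
# Per-metric uplifts for Meta's next-gen MTIA versus MTIA v1, as quoted
# in Meta's own description. Labels are informal names used here for
# illustration, not official specification terms.
uplifts_vs_mtia_v1 = {
    "dense_compute": 3.5,           # 3.5x dense compute performance
    "sparse_compute": 7.0,          # 7x sparse compute performance
    "local_pe_storage": 3.0,        # local PE storage tripled
    "on_chip_sram_capacity": 2.0,   # on-chip SRAM doubled
    "on_chip_sram_bandwidth": 3.5,  # SRAM bandwidth increased 3.5x
    "lpddr5_capacity": 2.0,         # off-chip LPDDR5 capacity doubled
}

for metric, factor in uplifts_vs_mtia_v1.items():
    print(f"{metric}: {factor:.1f}x over MTIA v1")
```

Note the pattern in these numbers: the compute uplifts (3.5x dense, 7x sparse) are matched by similar jumps in storage and bandwidth, which fits Meta's stated aim of keeping the 8x8 PE grid fed rather than maximizing any single headline figure.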

News Source: ai.meta.com

Anthony joined TweakTown in 2010 and has since reviewed hundreds of tech products. Anthony is a long-time PC enthusiast with a passionate hatred for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.