Ex-Google chip designers launch MatX startup: will develop AI chips specifically for LLMs

MatX is a new startup founded by ex-Google chip designers that will build a next-generation AI processor designed specifically for gigantic LLMs.


A couple of ex-Google chip designers have left the US search giant to form MatX, a new startup building AI processors designed specifically for LLMs (Large Language Models).


Mike Gunter and Reiner Pope used to work at Google and have now formed MatX, which has one objective: design next-generation silicon built specifically for processing the data needed to fuel large language models (LLMs). LLMs are the foundation on which the generative AI world sits, powering the likes of ChatGPT from OpenAI, Gemini from Google, and other LLM-based generative AI platforms.

Gunter used to focus on designing hardware, like chips, to run AI software, while Pope wrote the AI software itself for Google. Google has been working hard at building its own in-house AI processors, Tensor Processing Units (TPUs), which were first designed before LLMs became a thing and were too general-purpose for those workloads.

Pope said: "We were trying to make large language models run faster at Google and made some progress, but it was kind of hard. Inside of Google, there were lots of people who wanted changes to the chips for all sorts of things, and it was difficult to focus just on LLMs. We chose to leave for that reason."

NVIDIA has been absolutely dominating the AI silicon market for a while now, with its current-gen Hopper H100 and upcoming H200 AI GPUs doing extremely well, while the next-generation Blackwell B200 AI GPU has been announced and will be unleashed later this year.

NVIDIA's fleet of AI GPUs is fantastic at handling oodles and oodles of small tasks, and the company bet big on the future of CUDA and AI software over 10 years ago. The company breaks up the real estate on its GPUs to handle various computing jobs and to shuttle vast amounts of data around the chip, including to and from HBM memory. Some of NVIDIA's design choices are also holdovers from past eras of computing rather than the AI boom, and they come with performance tradeoffs.

The new MatX founders think that extra real estate on the GPU adds unneeded cost and massive complexity in the new era of AI. MatX is doing things differently: it will design silicon around one single large processing core aimed at multiplying numbers as quickly as possible -- which is what LLMs require -- and the company is betting big on this future.
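To see why a chip built almost entirely around multiplication makes sense for LLMs, here is a back-of-the-envelope sketch (our illustration, not MatX's numbers): for a transformer layer with hypothetical GPT-3-like dimensions, nearly all per-token compute comes from a handful of large matrix multiplications.

```python
# Illustrative sketch: per-token matmul FLOPs in one transformer layer.
# Dimensions below are assumed GPT-3-like values, not MatX specifics.

def matmul_flops_per_token(d_model: int, d_ff: int) -> int:
    """Approximate matmul FLOPs to push one token through one layer
    (weight matmuls only; ignores attention over context, norms, etc.)."""
    qkv_proj = 3 * 2 * d_model * d_model   # Q, K, V projections
    out_proj = 2 * d_model * d_model       # attention output projection
    mlp = 2 * 2 * d_model * d_ff           # two MLP matmuls (up + down)
    return qkv_proj + out_proj + mlp

# Hypothetical dimensions: d_model=12288, d_ff=4*d_model
flops = matmul_flops_per_token(12288, 4 * 12288)
print(f"~{flops / 1e9:.1f} GFLOPs of matmul per token per layer")
```

Everything else in the layer (softmax, normalization, activations) is tiny by comparison, which is the bet MatX is making: a die spent almost entirely on multiply units, rather than on general-purpose flexibility, should win on cost and throughput for this one workload.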

Pope said: "NVIDIA is a really strong product and clearly the right product for most companies, but we think we can do a lot better."

NEWS SOURCE: bloomberg.com

Anthony joined the TweakTown team in 2010 and has since reviewed hundreds of graphics cards. Anthony is a long-time PC enthusiast with a passion of hate for games built around consoles. An FPS gamer since the pre-Quake days, when you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering, and he has recently taken a keen interest in artificial intelligence (AI) hardware.
