Huawei begins deliveries of CloudMatrix 384 AI clusters in China: 10 companies now using them

Huawei has started delivering its new CloudMatrix 384 AI cluster to clients in China. Built on Ascend 910C AI chips, it competes with NVIDIA's GB200 NVL72 AI server.

TL;DR: Huawei has launched its CloudMatrix 384 AI clusters in China, powered by 384 Ascend 910C AI chips, delivering exceptional bandwidth of 1.2 petabytes per second. Despite consuming 3.9 times more power than NVIDIA's GB200 NVL72 servers, CloudMatrix offers scalable AI performance tailored for China's data centers.

Huawei has started delivering its new CloudMatrix 384 AI clusters to customers in China, powered by its Ascend 910C AI chips.

In a new report from the Financial Times, we're learning that 10 different clients have now adopted Huawei's new CloudMatrix 384 AI servers into their data center portfolios. The Chinese companies using Huawei's new AI servers haven't been named, but they are reportedly key customers of Huawei's existing product offerings.

Huawei's new CloudMatrix 384 "CM384" AI cluster is powered by 384 Huawei Ascend 910C AI chips connected in an "all-to-all topology" configuration. Huawei offsets the architectural shortcomings of its AI chips through sheer scale, using more than 5x as many accelerators as NVIDIA packs into a GB200 NVL72 server (384 versus 72). That scale-over-efficiency approach means the company is willing to absorb higher costs, lower per-chip performance, and worse scalability ratios.

There's also far more aggregate bandwidth on CloudMatrix, with up to 1229TB/sec (roughly 1.2 petabytes per second, which is insane) compared to "just" 576TB/sec from NVIDIA's GB200 NVL72. It also brings a far bigger pool of super-fast HBM, but performance per watt is another story: across multiple AI workloads, the Ascend 910C-powered CloudMatrix AI server uses around 3.9x MORE power than NVIDIA's bleeding-edge GB200 NVL72 AI server... but China doesn't need to worry about power costs or infrastructure, it can just GO.
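The trade-off above is easy to sanity-check with back-of-envelope arithmetic using the figures quoted in this article (the ratios are illustrative only; real comparisons depend heavily on workload):

```python
# Back-of-envelope ratios from the figures quoted in this article.
# All numbers come from the article; the "per watt" view is a rough
# illustration, not a benchmark.

CM384_CHIPS = 384          # Ascend 910C chips in CloudMatrix 384
NVL72_CHIPS = 72           # GPUs in an NVIDIA GB200 NVL72 rack

CM384_HBM_BW_TBS = 1229    # CloudMatrix aggregate HBM bandwidth, TB/s
NVL72_HBM_BW_TBS = 576     # GB200 NVL72 aggregate HBM bandwidth, TB/s

POWER_RATIO = 3.9          # CloudMatrix draws ~3.9x the power, per the report

chip_ratio = CM384_CHIPS / NVL72_CHIPS            # ~5.33x the accelerators
bw_ratio = CM384_HBM_BW_TBS / NVL72_HBM_BW_TBS    # ~2.13x the bandwidth

print(f"chips: {chip_ratio:.2f}x, bandwidth: {bw_ratio:.2f}x")

# Treating power as the only cost, CloudMatrix delivers less bandwidth
# per watt than NVL72:
print(f"bandwidth per watt vs NVL72: {bw_ratio / POWER_RATIO:.2f}x")
```

In other words, roughly 5x the chips buys about 2x the bandwidth at nearly 4x the power, which is exactly the "scale over efficiency" picture the article describes.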

The power consumption numbers are scary: with CloudMatrix using 3.9x more power, I can't imagine the cost of running it in a country where electricity is 3x, 5x, or even 10x more expensive, like here in Australia, where high power prices are part of why we aren't seeing AI server infrastructure roll-outs.

News Source:wccftech.com

Gaming Editor

Anthony joined TweakTown in 2010 and has since reviewed 100s of tech products. Anthony is a long time PC enthusiast with a passion of hate for games built around consoles. FPS gaming since the pre-Quake days, where you were insulted if you used a mouse to aim, he has been addicted to gaming and hardware ever since. Working in IT retail for 10 years gave him great experience with custom-built PCs. His addiction to GPU tech is unwavering and has recently taken a keen interest in artificial intelligence (AI) hardware.
