
Artificial Intelligence - Page 3

Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos. - Page 3


NVIDIA could release RTX 5090 SUPER with 64GB GDDR7, after RTX PRO 6000 Blackwell PCB teased

Anthony Garreffa | May 2, 2025 1:01 AM CDT

It was only yesterday that we got our first look at the PCB of NVIDIA's new RTX PRO 6000 Max-Q Blackwell GPU (limited to 300W of power, but still rocking 96GB of GDDR7), and now we have the PCB of the RTX PRO 6000 Blackwell GPU with its maximum 600W TDP.


The PCB design of NVIDIA's new RTX PRO 6000 Blackwell GPU was posted on Chiphell, and since the new workstation cards haven't launched yet, these are some really early shots of the PCB. The RTX PRO 6000 Blackwell features a compact PCB that is split into three parts: the main board carrying the GPU and GDDR7 memory, the PCIe interface board, and a display connector board, with the display connector board being the only piece missing from these shots.

NVIDIA has placed the GDDR7 memory on both sides of the PCB, with each side featuring 48GB GDDR7 for a full 96GB using 3GB GDDR7 memory modules.
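
As a quick sanity check on those numbers, here's a minimal sketch of the memory math, assuming a 512-bit bus with one 32-bit GDDR7 channel per module (an assumption on our part, not something the Chiphell shots confirm):

```python
# Back-of-the-envelope check of the RTX PRO 6000 Blackwell memory layout described above.
# Assumption (not from the leak): a 512-bit bus, i.e. 16 x 32-bit GDDR7 channels per side.

MODULE_GB = 3           # 3GB (24Gbit) GDDR7 memory modules
MODULES_PER_SIDE = 16   # one module per 32-bit channel on an assumed 512-bit bus
SIDES = 2               # clamshell layout: memory on both the front and back of the PCB

per_side_gb = MODULE_GB * MODULES_PER_SIDE  # 48GB per side, matching the PCB shots
total_gb = per_side_gb * SIDES              # 96GB total

print(f"{per_side_gb}GB per side, {total_gb}GB total")  # -> 48GB per side, 96GB total
```

The rumored 64GB RTX 5090 SUPER would follow the same kind of math with a different module density or count, though nothing in this leak spells that configuration out.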

Continue reading: NVIDIA could release RTX 5090 SUPER with 64GB GDDR7, after RTX PRO 6000 Blackwell PCB teased (full post)

NVIDIA denies plan to spin off future operations in China, US export controls causing headaches

Anthony Garreffa | May 1, 2025 10:10 PM CDT

NVIDIA is denying reports that it plans to spin off its Chinese operations into a standalone business, a move that would sidestep tightening US export controls.


There have been rumors and reports of this for the last few days, but NVIDIA has officially said: "There is no basis whatsoever for any of these claims. It is irresponsible to publish baseless claims and speculation as fact".

The original claim was posted on X by leaker @Jukanlosreve and later reported by the likes of DigiTimes: that NVIDIA CEO Jensen Huang was considering establishing a joint venture in mainland China to maintain the operations of the company's leading CUDA computing platform, as well as the company's other business interests there.

Continue reading: NVIDIA denies plan to spin off future operations in China, US export controls causing headaches (full post)

Microsoft CEO: 20-30% of our code is written by AI, explains a lot of things in Windows lately

Anthony Garreffa | May 1, 2025 7:49 PM CDT

We haven't stopped hearing about AI for a couple of years now, but recently Microsoft CEO Satya Nadella admitted that as much as 30% of the company's new code is written by AI.


Microsoft CEO Satya Nadella revealed the news during the recent LlamaCon, Meta's conference focusing on generative AI tools, while sitting across from Facebook founder and CEO Mark Zuckerberg, saying: "code reviews are very high. In fact the agents we have for reviewing code, that usage has increased, and so I would say maybe 20, 30 percent of the code that is inside of our repos today and in some of our projects are probably all written by software".

Nadella also asked Zuckerberg how much of Meta's code was being written by AI, to which Zuckerberg said he didn't know the exact figure off the top of his head, but noted that Meta is building an AI model that can in turn build future versions of Meta's in-house Llama family of AI models.

Continue reading: Microsoft CEO: 20-30% of our code is written by AI, explains a lot of things in Windows lately (full post)

Huawei begins deliveries of CloudMatrix 384 AI clusters in China: 10 companies now using them

Anthony Garreffa | Apr 30, 2025 8:14 AM CDT

Huawei has started delivering its new CloudMatrix 384 AI clusters to customers in China, powered by its Ascend 910C AI chips.


In a new report from the Financial Times, we're learning that 10 different clients have now adopted Huawei's new CloudMatrix 384 AI servers into their data center portfolios. We don't know which Chinese companies are using Huawei's new AI servers, but they are reportedly among the primary customers of Huawei's product offerings.

Huawei's new CloudMatrix 384 "CM384" AI cluster is powered by 384 Huawei Ascend 910C AI chips connected in an "all-to-all topology" configuration. Huawei is offsetting the architectural shortcomings of its AI chips by using over 5x more of them than NVIDIA uses GB200 GPUs inside its NVL72 servers, which is why the company is less concerned about cost, performance inefficiency, scalability ratios, and more.
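
For context on that "5x more" figure, here's a rough chip-count comparison, using the publicly known 72 Blackwell GPUs per GB200 NVL72 rack; it compares accelerator counts only, not per-chip performance:

```python
# Rough chip-count comparison behind the "5x more" claim above.
# This only compares accelerator counts, not per-chip performance, power, or interconnect.

cloudmatrix_chips = 384  # Huawei Ascend 910C AI chips per CloudMatrix 384 cluster
nvl72_gpus = 72          # NVIDIA Blackwell GPUs per GB200 NVL72 rack

ratio = cloudmatrix_chips / nvl72_gpus
print(f"CloudMatrix 384 uses {ratio:.1f}x as many accelerators as a GB200 NVL72 rack")
# -> CloudMatrix 384 uses 5.3x as many accelerators as a GB200 NVL72 rack
```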

Continue reading: Huawei begins deliveries of CloudMatrix 384 AI clusters in China: 10 companies now using them (full post)

Microsoft's controversial Recall AI feature is finally available on all Copilot+ PCs

Kosta Andreadis | Apr 29, 2025 1:01 AM CDT

When Microsoft unveiled its new AI-powered range of Copilot+ PCs, one feature that drew immediate criticism was a new AI tool for Windows called Recall. Of course, Windows and criticism go hand in hand, but when it came to Recall - a tool that takes screenshots of your PC that can then be used to navigate through your usage history - it was a little different.


Early versions of the technology captured sensitive information like banking details and passwords and then put everything in an indexed database that could be searched, raising immediate security and privacy concerns. That's the old Recall; the new and improved version has been rebuilt with security in mind and is now available for all Copilot+ PC users.

"Recall is an opt-in experience with a rich set of privacy controls to filter content and customize what gets saved for you to find later," Navjot Virk, Microsoft Corporate Vice President, Windows Experiences, writes. "We've implemented extensive security considerations, such as Windows Hello sign-in, data encryption, and isolation in Recall to help keep your data safe and secure."

Continue reading: Microsoft's controversial Recall AI feature is finally available on all Copilot+ PCs (full post)

NVIDIA's beefed-up B300 AI chip production pulled forward: TSMC N4P, CoWoS-L advanced packaging

Anthony Garreffa | Apr 27, 2025 10:10 PM CDT

NVIDIA's beefed-up B300 AI chip production has reportedly been pulled forward to May, with the chip fabbed on TSMC's 5nm-family N4P process node and using CoWoS-L advanced semiconductor packaging.


In a new post by Ctee, we're hearing that NVIDIA's new B300 AI chips will use Bianca compute boards with 1 x CPU and 2 x GPUs, with B300 to enter mass production before the end of the year. Analysts estimate that NVIDIA's new B300 AI GPU will boost related supply chain players including TSMC, Machtech, Inventec, Chipset, and assembly plants Quanta, Wistron, and Foxconn.

NVIDIA's new B300, built on TSMC's 5nm-family (N4P) process node, is reportedly being used to fill the production capacity gap left wide open by the now-banned H20 AI GPU in China. Ctee reports that the supply chain noted a shipment of AP8 from Nanya Advanced Packaging in early April, in preparation for the CoWoS-L advanced packaging to be used for B300.

Continue reading: NVIDIA's beefed-up B300 AI chip production pulled forward: TSMC N4P, CoWoS-L advanced packaging (full post)

Huawei's next-gen Ascend 910D AI GPU teased: rivals NVIDIA's previous-gen Hopper H100 in China

Anthony Garreffa | Apr 27, 2025 9:35 PM CDT

Huawei is working on its next-gen Ascend 910D AI GPU, which is said to offer AI performance matching NVIDIA's previous-gen Hopper H100 AI GPU for the Chinese market.


In a new report from Bloomberg, we're learning that Huawei's new Ascend 910D AI chip will have its first batch of samples out in late May, with development "still at an early stage". Huawei hopes that its new Ascend 910D will be more powerful than NVIDIA's previous-gen Hopper H100, which was released in 2022, and has been succeeded by Blackwell B200, and soon B300 and B300 Ultra AI GPUs.

NVIDIA's custom made-for-China H20 AI GPU was recently blocked from sale in China under new US export restrictions, with the company taking a $5.5 billion hit on its Q1 2025 revenue because of it. China now can't get its hands on even these lower-end AI GPUs, so it is having to rely on homegrown solutions like Huawei and its new Ascend 910D.

Continue reading: Huawei's next-gen Ascend 910D AI GPU teased: rivals NVIDIA's previous-gen Hopper H100 in China (full post)

SK hynix showcases world's first HBM4: 16-Hi stacks, 2TB/sec memory bandwidth, TSMC logic die

Anthony Garreffa | Apr 27, 2025 6:21 PM CDT

SK hynix showed off its next-gen HBM4 memory at TSMC's recent North American Technology Symposium, with up to 16-Hi stacks and 2TB/sec memory bandwidth per stack, ready for NVIDIA's next-gen Vera Rubin AI hardware.


SK hynix showed off both 12-Hi and 16-Hi stacks of HBM4 memory, featuring a capacity of up to 48GB per stack, up to 2TB/sec of memory bandwidth, and I/O speeds rated at 8Gbps. The South Korean memory leader announced mass production for 2H 2025, with the memory heading into AI GPUs by the end of this year and flooding the market with HBM4-powered AI GPUs like NVIDIA's next-gen Vera Rubin in 2026.
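
Those headline figures are self-consistent if you assume HBM4's expected 2,048-bit per-stack interface and 24Gbit (3GB) DRAM dies - both assumptions on our part, not numbers SK hynix quoted here - as this minimal sketch shows:

```python
# Quick check that the quoted HBM4 figures add up.
# Assumptions (not from the article): a 2,048-bit per-stack interface and 24Gbit DRAM dies.

PIN_SPEED_GBPS = 8       # per-pin I/O speed quoted above
BUS_WIDTH_BITS = 2048    # assumed HBM4 per-stack interface width
DIES_PER_STACK = 16      # 16-Hi stack
DIE_CAPACITY_GBIT = 24   # assumed 24Gbit (3GB) DRAM dies

bandwidth_gb_s = PIN_SPEED_GBPS * BUS_WIDTH_BITS / 8  # bits per second -> bytes per second
capacity_gb = DIES_PER_STACK * DIE_CAPACITY_GBIT / 8  # Gbit -> GB

print(f"~{bandwidth_gb_s:.0f} GB/s per stack, {capacity_gb:.0f}GB per 16-Hi stack")
# -> ~2048 GB/s per stack, 48GB per 16-Hi stack
```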

We will see SK hynix's world-leading HBM4 memory chips inside of NVIDIA's upcoming GB300 "Blackwell Ultra" AI GPUs, with the company planning to shift fully to HBM4 memory starting with Vera Rubin. SK hynix also pointed out that it has managed the high layer count by using Advanced MR-MUF and TSV technologies.

Continue reading: SK hynix showcases world's first HBM4: 16-Hi stacks, 2TB/sec memory bandwidth, TSMC logic die (full post)

DeepSeek's next-gen R2 AI model rumors: 97% lower costs than GPT-4, trained on Huawei AI chips

Anthony Garreffa | Apr 27, 2025 10:08 AM CDT

Chinese AI firm DeepSeek is cooking up its next-gen R2 AI model, which is said to be 97% cheaper to train than GPT-4, and reportedly has been fully trained on Huawei AI GPUs.


A new post on X by @deedydas has the hype train for DeepSeek R2 rocking and rolling, claiming that the new R2 model will adopt a hybrid MoE (Mixture of Experts) architecture, an advanced version of the existing MoE implementation that should provide more advanced gating mechanisms, or a combination of MoE and dense layers, to optimize high-end AI workloads.

DeepSeek R2 is set to double the parameters of R1, with 1.2 trillion parameters at the ready, and its unit cost per token is reportedly a whopping 97.3% lower than GPT-4o's, at $0.07 per million input tokens and $0.27 per million output tokens. This means DeepSeek R2 is going to be uber-cheap for enterprise use, as it would be the most cost-efficient AI model on the market.
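
Taking the rumored figures at face value, here's a rough sketch of what that 97.3% saving implies about the comparison pricing, and what a sample workload would cost; all numbers come from the rumor above, none of this is confirmed pricing:

```python
# Rough arithmetic on the rumored DeepSeek R2 pricing above.
# All figures come from the rumor; nothing here is confirmed by DeepSeek or OpenAI.

r2_input_per_m = 0.07    # rumored $ per million input tokens
r2_output_per_m = 0.27   # rumored $ per million output tokens
claimed_saving = 0.973   # "97.3% cheaper per token"

# Implied comparison pricing if R2 really is 97.3% cheaper per token
implied_input = r2_input_per_m / (1 - claimed_saving)
implied_output = r2_output_per_m / (1 - claimed_saving)
print(f"Implied comparison price: ~${implied_input:.2f}/M input, ~${implied_output:.2f}/M output")
# -> ~$2.59/M input, ~$10.00/M output

# Example workload: 100M input tokens and 20M output tokens at the rumored R2 pricing
example_cost = 100 * r2_input_per_m + 20 * r2_output_per_m
print(f"100M input + 20M output tokens on R2: ~${example_cost:.2f}")  # -> ~$12.40
```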

Continue reading: DeepSeek's next-gen R2 AI model rumors: 97% lower costs than GPT-4, trained on Huawei AI chips (full post)

Intel surprised with weak AI PC demand, results in increased Raptor Lake CPU demand

Anthony Garreffa | Apr 27, 2025 7:58 AM CDT

In what comes as absolutely no surprise, Intel has said that its AI PC processors aren't selling anywhere near the numbers it was expecting, creating a shortage of production capacity for older CPUs.


Intel says that customers are buying less expensive, previous-generation Raptor Lake CPUs instead of the new AI PC-ready Meteor Lake and Lunar Lake processors inside of new laptops. During its recent earnings call, Intel said that it is currently facing production capacity issues for its in-house Intel 7 process node, and that it expects this shortage to "persist for the foreseeable future".

Intel's current-generation processors are fabbed on newer process nodes from TSMC rather than the older Intel 7 process node, so the unexpected surge in demand for the lower-end, non-AI PC processors (Raptor Lake) is a strange problem for Intel to have, considering the insane marketing push for AI PCs over the last 12+ months.

Continue reading: Intel surprised with weak AI PC demand, results in increased Raptor Lake CPU demand (full post)
