Artificial Intelligence - Page 5
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
European AI cloud company secured tax breaks on NVIDIA AI GPUs, used them for crypto mining
A criminal investigation that saw raids on a European AI cloud company is focusing on whether the company used its $586 million worth of NVIDIA AI GPUs for crypto mining after receiving massive tax breaks on the purchase of those AI chips.
According to a new report from Bloomberg, European prosecutors are looking into Northern Data and its purchase of GPUs for its site in northern Sweden. Authorities are investigating whether Northern Data, which is backed by stablecoin issuer Tether Holdings SA, obtained a tax break by claiming that the AI chips were being used for AI when they were instead being used for cryptocurrency mining.
A few years ago, Sweden embraced the idea of cryptocurrency mining, but reversed course in 2023 and removed tax incentives for companies establishing mining operations, leaving them in place for data centers. Since then, Swedish tax authorities have been investigating several crypto miners for allegedly providing misleading information to benefit from tax incentives, according to Bloomberg's sources.
NVIDIA CEO Jensen Huang says China is 'nanoseconds behind' the US in chip development
China is just "nanoseconds behind" the United States in chip development according to NVIDIA CEO Jensen Huang.
US companies like NVIDIA have been working to compete in China for a while now, even as Chinese companies work around the clock to become "NVIDIA-free". Huang argues the US government should allow its technology industry to compete around the world -- including in China -- to "proliferate the technology around the world" so that it can "maximize America's economic success and geopolitical influence".
The NVIDIA founder said that China is "nanoseconds behind" the US, adding, "so we've got to compete". Speaking on a podcast hosted by tech investors Brad Gerstner and Bill Gurley, Huang described China's chip sector as "a vibrant, entrepreneurial, hi-tech, modern industry".
AMD's next-gen Instinct MI450X 'forced' NVIDIA to increase TGP, memory bandwidth on Rubin GPUs
AMD's next-gen Instinct MI450X AI accelerator has "forced" NVIDIA to make changes to its Rubin VR200 AI GPU, according to the latest reports.
In a new post on X, SemiAnalysis reports that in order for NVIDIA's new Rubin AI GPUs to maintain a lead over AMD's upcoming Instinct MI450X series AI chips, the VR200's HBM4 memory bandwidth was increased to 20TB/sec per GPU (from 13TB/sec per GPU). Rubin went from 5TB/sec per GPU behind the MI450X in memory bandwidth to just ahead, with 0.4TB/sec per GPU more bandwidth.
Not only that, VR200 Rubin was previously an 1800W TGP design, but two months ago it was bumped up to 2300W TGP, closer to Instinct MI450X which has a higher 2500W TGP. These new AI GPUs are thirsty... very, very thirsty.
NVIDIA CEO is the AI GPU Godfather: Amazon and Google tell Jensen when they're making AI chips
Amazon and Google will give NVIDIA CEO Jensen Huang a call before they announce any new in-house AI chip efforts, as Jensen doesn't like being surprised by his competitors.
In a new article from The Information, it's reported that when companies like Amazon and Google have new AI chip announcements, they'll give Huang a heads-up ahead of time, as he doesn't like being blindsided. These companies can't survive without access to NVIDIA GPUs, so they play ball with Huang; with no real alternative, he's almost like the AI GPU Godfather.
The Information's report explains: "at the center of it all is Huang, to whom other leaders in the industry show unusual forms of deference. For example, when Amazon and Google have news to announce about their in-house AI chip efforts -- which they're developing to lessen their dependence on NVIDIA -- they've learned it's best to first give a heads up to Huang, say several people involved in these communications".
NVIDIA CEO on sovereign AI for countries: 'no one needs atomic bombs, everyone needs AI'
NVIDIA CEO Jensen Huang says that developing AI infrastructure is "absolutely necessary" for nations to win the AI race, adding that "nobody needs atomic bombs, everyone needs AI".
Throughout the year, we've seen multiple nations either double down or go all-in on building AI supercomputers and datacenters, with NVIDIA's dominant AI GPUs filling them all, including in Middle Eastern countries like Saudi Arabia and the UAE, as well as across Europe.
In a new interview with BG2, NVIDIA CEO Jensen Huang said that building AI infrastructure will become a necessity for nations, and that AI's importance is arguably even greater than that of nuclear weapons once you factor in its long-term potential and impact.
Get ready for more AI in Windows 11 apps as Microsoft pushes out Windows ML for developers
Microsoft is pushing forward with plans to help those developing software for Windows 11 incorporate AI into their products with the release of Windows ML.
As The Verge noted, Microsoft just published a blog post announcing the general availability of Windows ML to app developers.
Microsoft explains: "Windows ML is the built-in AI inferencing runtime optimized for on-device model inference and streamlined model dependency management across CPUs, GPUs and NPUs."
Micron begins shipping industry's fastest HBM4 at 11Gbps, to partner with TSMC for future HBM4E
Micron has confirmed it has started shipping the industry's fastest 11Gbps HBM4 DRAM to its customers, while teasing it will partner with TSMC for its next-gen HBM4E memory.
In its recent earnings call for Q4 and FY2025, the US-based company teased some key developments in its DRAM and NAND flash businesses. Firstly, Micron posted $11.32 billion in revenue compared to $9.3 billion in the previous quarter, while full-year revenues grew to $37.38 billion up from $25.11 billion.
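The revenue figures above are Micron's; the growth percentages below are our own quick arithmetic, not numbers from the earnings call:

```python
# Sanity check on Micron's reported growth. Dollar figures are from the
# earnings call; the percentage calculations are ours, not Micron's.
def growth_pct(new: float, old: float) -> float:
    """Percentage growth from old to new."""
    return (new - old) / old * 100

q_over_q = growth_pct(11.32, 9.30)    # quarterly revenue, $B
y_over_y = growth_pct(37.38, 25.11)   # full-year revenue, $B

print(f"Quarter-over-quarter: +{q_over_q:.1f}%")  # roughly +21.7%
print(f"Year-over-year:       +{y_over_y:.1f}%")  # roughly +48.9%
```

That works out to roughly 22% sequential growth and nearly 49% year-over-year, which gives a sense of how hard the AI memory boom is pulling Micron's top line.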
Micron announced it had produced and shipped its first samples of its bleeding-edge HBM4 memory, with over 11Gbps pin speed and up to 2.8TB/sec of bandwidth. The company says its new HBM4 memory should outperform all of the competition -- SK hynix and Samsung, really -- in terms of performance and efficiency.
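The 11Gbps pin speed and 2.8TB/sec bandwidth figures line up if you assume the standard HBM4 interface width of 2048 bits per stack (our assumption; the article doesn't state the width):

```python
# Rough check: does 11Gbps per pin yield ~2.8TB/sec per stack?
# Assumes the JEDEC HBM4 I/O width of 2048 bits per stack, which is
# not stated in the article.
PIN_SPEED_GBPS = 11       # Gbit/s per pin
INTERFACE_BITS = 2048     # assumed HBM4 I/O width per stack

bandwidth_gbs = PIN_SPEED_GBPS * INTERFACE_BITS / 8   # GB/s per stack
print(f"~{bandwidth_gbs / 1000:.2f} TB/sec per stack")  # ~2.82 TB/sec
```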
OpenAI and NVIDIA's new AI project requires 4-5 million GPUs in one project alone
OpenAI and NVIDIA have a new $100B+ deal that requires up to 10 gigawatts of NVIDIA AI systems and up to 5 million GPUs for a single project, alongside OpenAI's Stargate AI supercomputer project.
In a recent CNBC interview with NVIDIA and OpenAI, we get some greater insight into the $100 billion deal and its 4-5 million AI GPUs. NVIDIA CEO Jensen Huang explained: "This new project we're talking about, 10-gigawatts, or roughly, 4 million or 5 million GPUs, that's approximately, in one project, what we shipped all year this year, and twice as much as last year, twice as much as the year before that... This is a giant project".
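Huang's two headline numbers can be cross-checked with simple division; note that dividing total facility power by GPU count folds cooling, networking, and other overhead into the per-GPU figure, which is our framing rather than anything stated in the interview:

```python
# Back-of-the-envelope: does "10 gigawatts" square with "4-5 million GPUs"?
# Total facility power divided by GPU count, so the per-GPU number includes
# cooling and networking overhead -- our assumption, not a quoted figure.
TOTAL_WATTS = 10e9   # 10 gigawatts

for gpus in (4e6, 5e6):
    print(f"{gpus / 1e6:.0f}M GPUs -> {TOTAL_WATTS / gpus / 1000:.1f} kW each")
```

That lands at 2.0-2.5kW per GPU, which is at least in the same ballpark as the 2300W-class TGP figures circulating for NVIDIA's next-gen Rubin parts.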
According to OpenAI CEO Sam Altman, there has been a DeepSeek moment that the world hasn't felt yet: he said today's models are "actually quite capable for things far beyond what most people use them for in ChatGPT" and that the world is "just catching up with that".
NVIDIA announces plans to 'co-optimize' hardware roadmap with OpenAI's software
NVIDIA and OpenAI have announced a new strategic partnership that involves NVIDIA investing $100 billion into the ChatGPT-creator to power the next generation of AI models it will be building.
In a new press release on the OpenAI website, the companies announced a letter of intent to build "at least 10 gigawatts of NVIDIA systems for OpenAI's next-generation AI infrastructure". The new partnership will assist in the deployment of new datacenters and power capacity, with the first phase of the plan to come online in the second half of 2026. Notably, these new systems will use NVIDIA's Vera Rubin platform, the company's upcoming next-generation GPU architecture.
In a nutshell, NVIDIA and OpenAI have partnered to facilitate the development of superintelligent AI and, ultimately, artificial general intelligence, which will be achieved through AI factory growth. According to the press release, NVIDIA and OpenAI will "co-optimize" their roadmaps: OpenAI's model and infrastructure software with NVIDIA's upcoming hardware and software.
Microsoft announces world's biggest AI datacenter with hundreds of thousands of NVIDIA GPUs
Microsoft has announced that it has built the "world's most powerful AI datacenter" and the largest and "most sophisticated" AI factory that it's built to date. Called Fairwater, the facility is located in Wisconsin, US, and Microsoft has plans to construct identical Fairwater data centers across the country.
"Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times," Microsoft CEO Satya Nadella writes on social media. "It will deliver 10x the performance of the world's fastest supercomputer today, enabling AI training and inference workloads at a level never before seen."
To give you a sense of scale, the Fairwater data center spans a massive 315 acres, comprising three large buildings that offer 1.2 million square feet of data center space. Fairwater is distinct from most data centers in that it's designed to function as a single, massive AI supercomputer, utilizing interconnected NVIDIA GB200 servers and the latest NVLink and NVSwitch technologies, which offer bandwidth measured in terabytes per second.
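Nadella's "circle the Earth 4.5 times" fiber claim translates to a concrete distance if you assume Earth's equatorial circumference of roughly 40,075 km (our figure, not Microsoft's):

```python
# What "enough fiber to circle the Earth 4.5 times" works out to,
# assuming an equatorial circumference of ~40,075 km (our assumption).
EARTH_CIRCUMFERENCE_KM = 40_075
LAPS = 4.5

fiber_km = EARTH_CIRCUMFERENCE_KM * LAPS
print(f"~{fiber_km:,.0f} km of fiber")   # ~180,338 km
```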
Samsung finally passes NVIDIA's strict HBM3E 12-Hi qualification tests: 10,000 units on the way
Samsung Electronics has finally passed NVIDIA's strict HBM3E 12-Hi memory qualification tests for use on its AI GPUs, with the South Korean memory giant ready to supply 10,000 units.
In a new report from AlphaEconomy picked up by insider @Jukanrosleve on X, Samsung recently signed a contract to supply its HBM3E 12-Hi memory to NVIDIA, under which Samsung will deliver around 10,000 units of its qualified HBM3E 12-Hi product. Samsung commented, saying that everything is "progressing as scheduled".
Previous rumors pointed to Samsung supplying its new HBM3E 12-Hi memory, but this seems more solid now that a contract is in place, after fellow South Korean memory rival SK hynix had been exclusively supplying NVIDIA with all of the high-end HBM3 and HBM3E memory it needed.
Tesla rumored to make an AI chip for its EVs on the Intel 18A process node
Intel could be partnering with Tesla in the near future to make a chip for the carmaker on its new Intel 18A process node, according to whispers and rumors online.
In a new post on X, Wccftech reporter @MuhammadZuhair notes that Intel has already disclosed it is moving into the custom silicon business, and since Elon Musk loves having partners right alongside his "area of business", all signs point to Intel.
After Tesla's large $16.5 billion deal for Samsung to fab its new AI6 chips at Samsung's new semiconductor plant in Texas, this news could be even bigger for both companies, but more so for Intel. Zuhair said: "I won't speculate much and can't disclose many details, but an $INTC and $TSLA partnership on the 18A node isn't a far-fetched guess. Intel has already disclosed that they are tapping into the custom silicon business, and since Elon loves having partners right alongside his 'area of business', Intel could be a key partner".
NVIDIA rumored to be the first customer for TSMC's most advanced A16 process node in 2026
NVIDIA could very well be the first customer for TSMC's most advanced, next-generation A16 process node in 2026, as it "feels heat from AMD", which is using TSMC's newest nodes for the CPUs it's bringing to market.
In a new report from Taiwanese media outlet Ctee picked up by @DanNystedt on X, we're hearing that NVIDIA could be the first customer for TSMC's next-gen A16 process node in 2H 2026. Most of the new 2nm chips coming off the production lines at TSMC will be for smartphones -- mostly Apple and MediaTek -- but AMD will have the first 2nm AI chip, which has prompted NVIDIA to consider the A16 node with backside power delivery (BSPD).
If NVIDIA does indeed use TSMC's new A16 node, it would be the first time an AI chip has been first to adopt TSMC's very latest process technology, a distinction that has so far belonged to smartphone chips.
NVIDIA GB300 AI server orders are so big they're 'unimaginable' says Quanta Computer AI boss
NVIDIA's orders for its beefed-up Blackwell Ultra GB300 AI servers are so big that they're "unimaginable" according to the head of Quanta Computer's AI server business.
In a new tweet from @DanNystedt on X on a story from UDN, we're learning that Quanta Computer's AI server business boss, Mike Yang, said GB300 AI server orders are mammoth, and that AI server shipments will peak in Q4 2025, while Q3 remains a transition period between old (GB200) and new (GB300) AI products.
Yang said: "the current orders are unimaginable" in a UDN report. In response to the strong AI server demand, Quanta is looking to the United States, Thailand, Mexico, and other places to expand production capacity, as well as recruit a "large number of talents to alleviate the long-term shortage of talents".
SK hynix finishes HBM4 development, ready for mass production: 10Gbps per pin, above 8Gbps spec
SK hynix has confirmed it has successfully completed development of its next-gen HBM4 memory for ultra-high-performance AI, and is ready to become the first in the world to enter mass production.
Based on this technological achievement, the South Korean firm says it has prepared HBM4 mass production to "lead the AI era", with the new memory running at over 10Gbps per pin, above the official 8Gbps spec. We will see SK hynix HBM4 memory inside next-gen AI chips like NVIDIA's upcoming Rubin AI GPUs in 2026.
Joohwan Cho, Head of HBM Development at SK hynix, who has led the development, said: "Completion of HBM4 development will be a new milestone for the industry. By supplying the product that meets customer needs in performance, power efficiency and reliability in a timely manner, the company will fulfill time to market and maintain competitive position".
OpenAI backs AI animated film Critterz to show Hollywood how its tech makes moviemaking cheaper
Critterz is a new feature-length animated film currently in production (via The Wall Street Journal), featuring cute forest creatures who embark on an adventure after their simple village life is disrupted. At first glance, it appears to be another Pixar-inspired computer-generated animated film, albeit one aiming to make its debut at the prestigious Cannes Film Festival next year. It's also an animated film created using OpenAI's generative AI tools.
And with OpenAI backing the development of Critterz, its reported budget of less than $30 million aims to demonstrate that generative AI can be used to create feature-length films faster and more cost-effectively than traditional Hollywood productions.
Critterz creator Chad Nelson is teaming up with OpenAI, alongside London and Los Angeles-based production companies, to fully animate Critterz in nine months, instead of the typical three years or so it takes to develop, animate, and release a traditional animated film. This nine-month schedule aligns with the next Cannes Film Festival, scheduled to take place in May 2026.
Broadcom secures $10 billion ASIC contract, with Apple and xAI next in line for new AI chips
Broadcom has secured a huge $10 billion custom ASIC contract from a major new customer that is outside of the core hyperscale cloud service provider (CSP) segment, with Apple and xAI also next in line behind them.
In a new report from Digitimes picked up by @Jukanrosleve on X, we're hearing that further orders from TikTok parent company ByteDance, Apple, and Elon Musk's xAI are already in the pipeline, a development analysts see as reinforcing Broadcom's position as a "credible challenger" to NVIDIA.
According to industry sources, OpenAI's new custom ASIC will enter mass production in 2026, positioning the AI startup as Broadcom's fourth confirmed large-scale ASIC customer. Neither Broadcom nor OpenAI has commented on the deal yet, but insiders familiar with Broadcom's roadmap have confirmed the order is indeed genuine.
NVIDIA CFO talks $5B AI GPU revenue in China, GB300 ramp up, next-gen Vera Rubin AI GPU demand
NVIDIA CFO Colette Kress has discussed the ongoing geopolitical issues between the US and China, its H20 AI GPU, new Blackwell Ultra GB300 system, and next-gen Vera Rubin AI platform at the recent Goldman Sachs Communacopia + Technology Conference.
Kress said that the geopolitical issues between the United States and China are affecting NVIDIA's ability to recognize revenue from H20 AI GPU sales to China. The NVIDIA CFO said the company has received H20 licenses from the Trump administration, and that H20 sales could account for around $5 billion in revenue in Q3 2025.
Kress opened her talk with NVIDIA's data center revenues and their growth: even with H20 AI GPU sales removed from the mix, the NVIDIA CFO said, revenues across data center and networking grew 12% sequentially (quarter-over-quarter) in Q2 2025.
RTX 5090 and RTX PRO 6000 GPU have a new bug: need a full system reboot after virtualization
NVIDIA's higher-end GeForce RTX 5090 and RTX PRO 6000 cards have hit a new bug: after running virtualization workloads for a few days, they require a full system reboot to come back online.
CloudRift, a GPU cloud for developers, is reporting crashing issues with both the RTX 5090 and RTX PRO 6000, saying that after a "few days" of VM usage the cards become completely unresponsive. The GPUs can no longer be accessed unless the host node is rebooted; thankfully, it's only happening to the RTX 5090 and RTX PRO 6000, as the RTX 4090, Hopper H100, and Blackwell B200 aren't affected, for now.
What's happening exactly? The GPU gets assigned to a VM using the VFIO device passthrough framework, and after a Function Level Reset (FLR), the GPU becomes completely unresponsive. This results in a kernel soft lockup that puts the host and guest environments in a deadlock, and the only way out is to reboot the machine -- not an easy thing for CloudRift to do, given its large volume of guest machines.
GIGABYTE's new AI TOP CXL card: add up to 512GB memory to TRX50 and W790 AI TOP motherboards
GIGABYTE has just revealed its new AI TOP CXL R5X4 memory expansion card, letting AI TOP motherboard owners add another 512GB of RAM to their system.
The new GIGABYTE AI TOP CXL R5X4 won't work on every motherboard; it only supports two: the TRX50 AI TOP and W790 AI TOP, with GIGABYTE reminding users to contact the company before purchasing to make sure.
The AI TOP CXL R5X4 uses the regular PCIe 5.0 x16 interface, supporting CXL 2.0/1.1 operation, with four DDR5 DIMM slots for registered ECC memory modules, for a total of up to 512GB (128GB x 4 sticks). GIGABYTE's new AI TOP CXL R5X4 measures 12.0 x 25.4cm and features a 16-layer HDI PCB.
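The card's headline numbers can be sanity-checked quickly; the capacity figure follows directly from the spec above, while the PCIe throughput ceiling is our own estimate from the PCIe 5.0 signaling rate, not a GIGABYTE spec:

```python
# Sanity check on the AI TOP CXL R5X4's headline numbers: total capacity
# from four DIMMs, plus the raw PCIe 5.0 x16 ceiling the CXL link rides on.
# The PCIe throughput math is our estimate, not a GIGABYTE spec.
DIMMS = 4
GB_PER_DIMM = 128
total_gb = DIMMS * GB_PER_DIMM                # 512 GB, as advertised

# PCIe 5.0: 32 GT/s per lane, 128b/130b encoding, 16 lanes
pcie5_x16_gbs = 32 * (128 / 130) * 16 / 8     # ~63 GB/s each direction
print(f"{total_gb} GB capacity, ~{pcie5_x16_gbs:.0f} GB/s link ceiling")
```

That ~63GB/s ceiling is well below what directly-attached DDR5 channels deliver, which is the usual CXL memory-expansion trade-off: capacity over bandwidth.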