Artificial Intelligence - Page 6

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

SPARKLE intros new server with 16 GPUs, up to 768GB of VRAM, and a monster 10,800W PSU

Anthony Garreffa | Oct 10, 2025 7:07 PM CDT

SPARKLE has just introduced its new C741-6U-Dual 16P system, which packs up to 16 x Intel Arc Pro B60 Dual 48GB graphics cards for a total of 768GB of VRAM, all running from a monster 10,800W PSU.

The new SPARKLE C741-6U-Dual 16P multi-GPU server supports both the single-GPU and dual-GPU variants of the Arc Pro B60 graphics card: the single-GPU version packs 24GB of VRAM, while the dual-GPU model doubles that to 48GB. Configured with 16 of the Arc Pro B60 Dual 48GB cards, the system totals 81,920 GPU cores and an incredible 768GB of VRAM.
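The headline figures check out with some quick arithmetic; note the per-card core breakdown below is derived from the article's totals, not taken from a spec sheet:

```python
# Quick sanity check of SPARKLE's headline numbers (totals from the
# article; the per-card breakdown is derived, not from a spec sheet).
cards = 16
vram_per_card_gb = 48            # Arc Pro B60 Dual: 2 GPUs x 24GB each
total_cores = 81_920

total_vram_gb = cards * vram_per_card_gb
cores_per_card = total_cores // cards

print(total_vram_gb)    # 768 GB, matching the article
print(cores_per_card)   # 5120 per dual card, i.e. 2560 per GPU
```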

SPARKLE uses a dedicated circuit that extends PCIe connectivity to 16 slots, giving each GPU its own PCIe 5.0 x8 interface. Both models also run Intel Xeon Scalable processors, either 4th Gen or 5th Gen.

Continue reading: SPARKLE intros new server with 16 GPUs, up to 768GB of VRAM, and a monster 10,800W PSU (full post)

Microsoft Azure upgraded to NVIDIA GB300 'Blackwell Ultra' with 4600 GPUs connected together

Anthony Garreffa | Oct 10, 2025 5:22 PM CDT

Microsoft has just announced that its first at-scale production cluster of NVIDIA's new GB300 "Blackwell Ultra" GPUs has been installed. Check it out:

The new large-scale and production cluster packs over 4600 GPUs based on NVIDIA's new GB300 NVL72 architecture, connected through next-gen InfiniBand interconnect fabric. The new deployment allows Microsoft to scale to hundreds of thousands of Blackwell Ultra GPUs deployed throughout datacenters across the planet, all working on one workload: AI.
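Given that each GB300 NVL72 rack links 72 Blackwell Ultra GPUs, the cluster size implies roughly 64 racks; a rough sketch (the rack count is our inference from the NVL72 design, not a Microsoft figure):

```python
# Rough rack math for Azure's GB300 cluster. The 72-GPU-per-rack figure
# comes from the NVL72 rack design; the rack count is inferred, not official.
gpus_total = 4600          # "over 4600 GPUs" per the article
gpus_per_rack = 72         # GB300 NVL72: 72 Blackwell Ultra GPUs per rack

racks = -(-gpus_total // gpus_per_rack)   # ceiling division
print(racks)   # 64 racks (64 x 72 = 4608 GPUs)
```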

Microsoft says its new Azure cluster powered by NVIDIA GB300 NVL72 "Blackwell Ultra" GPUs can reduce training times from months down to weeks, paving the way for training models with hundreds of trillions of parameters. The new Microsoft Azure ND GB300 v6 VMs are optimized for reasoning models, agentic AI systems, and multimodal generative AI workloads.

Continue reading: Microsoft Azure upgraded to NVIDIA GB300 'Blackwell Ultra' with 4600 GPUs connected together (full post)

Microsoft rolls out Copilot update that can read your Gmail and Outlook

Jak Connor | Oct 10, 2025 7:33 AM CDT

Microsoft's Copilot has received an update that enables Windows users to create Word documents, PowerPoint presentations, Excel spreadsheets, and more directly from the chat session.

The new feature is coming to Windows 11 Insiders and will soon be rolled out publicly to all Windows 11 users. Microsoft's Copilot team explained in a blog post that Copilot users will be able to convert conversational ideas, notes, and data into shareable and editable documents with "no extra steps or tools".

Additionally, when Copilot responds to a query with 600 words or more, Microsoft now adds an export button that lets the user send that response directly to Word, Excel, or PowerPoint, or convert it into a PDF file.

Continue reading: Microsoft rolls out Copilot update that can read your Gmail and Outlook (full post)

AI helps turn a gaming mouse's high-performance optical sensor into a microphone

Kosta Andreadis | Oct 8, 2025 7:28 PM CDT

Although going for featherweight and ultra-lightweight builds is the latest trend for gaming mice (check out our various mouse reviews here), high-speed optical sensors with impressive sensitivity have been a thing for years. Corsair's SABRE v2 PRO features a 33K or 33,000 DPI optical sensor. The premium Razer DeathAdder V4 Pro Wireless ups the ante to an astounding 45K, while the more affordable PowerColor ALPHYN AM10 Wireless Gaming Mouse still boasts an impressive 26K optical sensor.

Thanks to a fascinating new AI-powered tool called Mic-E-Mouse, any mouse with an optical sensor of at least 20K (20,000) DPI sensitivity can be used as a makeshift microphone to eavesdrop on people and record their speech. The team of researchers from the University of California that developed Mic-E-Mouse describes it as a "critical vulnerability".

Using a mouse's high-performance optical sensor to not only detect speech but decipher what's being said with 80% accuracy sounds like the sort of thing you'd see on TV and roll your eyes at, thinking, "no way that's possible."

Continue reading: AI helps turn a gaming mouse's high-performance optical sensor into a microphone (full post)

ChatGPT gets app store, OpenAI takes on Apple and Google in bid to create new platform

Derek Strickland | Oct 8, 2025 12:29 PM CDT

ChatGPT users will soon be able to launch apps without leaving the prompt window, effectively turning the AI model into a budding ecosystem.

OpenAI is bringing native app integration to ChatGPT. The new feature was demoed at OpenAI's DevDay 2025, showing how the apps will work within ChatGPT in real time.

Users query the app directly -- in this case, Coursera -- and the app responds, even going so far as to automatically pin video content to the top of the screen. It's all made possible by OpenAI's new apps software development kit (SDK), which allows ChatGPT to communicate directly with the apps. Essentially, ChatGPT acts as an interpreter and fetcher of information provided directly by the app, all within the context of user queries.

Continue reading: ChatGPT gets app store, OpenAI takes on Apple and Google in bid to create new platform (full post)

NVIDIA directly challenged after AMD and OpenAI sign multibillion GPU partnership

Jak Connor | Oct 6, 2025 6:30 AM CDT

OpenAI and AMD have announced a multibillion-dollar partnership that involves AMD powering the next generation of OpenAI's AI infrastructure with AMD Instinct MI450 GPUs.

The partnership, announced by both companies via press releases, will see AMD supply OpenAI with 6 gigawatts' worth of AMD Instinct GPUs, with the first gigawatt to be deployed in the second half of 2026. In addition to signing on for multi-generational hardware upgrades from AMD, OpenAI will acquire up to 160 million shares of AMD common stock, structured to vest as specific milestones are achieved.

The first tranche of the stock is set to vest after the initial gigawatt is successfully deployed, and further tranches are scheduled to vest as more AMD GPUs are purchased by OpenAI, eventually reaching the point of 6 gigawatts. Notably, vesting is also tied to AMD reaching specific share price targets and OpenAI achieving the technical and commercial milestones required to enable AMD deployments at scale.

Continue reading: NVIDIA directly challenged after AMD and OpenAI sign multibillion GPU partnership (full post)

NVIDIA could change cooling solution for Rubin Ultra AI GPUs for huge 2300W thermal concerns

Anthony Garreffa | Oct 5, 2025 6:03 PM CDT

NVIDIA's next-gen Rubin Ultra AI GPUs will reportedly consume an insane 2300W or so of power, with rumors swirling that NVIDIA is switching to a totally new cooling solution for its bleeding-edge AI chips.

In a new post from @QQ_Timmy on X, we're hearing that NVIDIA is contacting cooling solution partners to integrate "direct-to-chip" cooling through microchannel cold plates for its beefed-up Rubin Ultra AI GPUs. This would be quite the move away from conventional liquid cooling solutions, keeping up to 2300W of thermal output in check to hit the performance numbers NVIDIA wants.

The market previously rumored that cooling for Rubin AI GPUs with a 2300W TDP would use a microchannel cover plate (MCL), but @QQ_Timmy's post suggests these cover plates have proven difficult to mass-produce. Additionally, the leaker has learned that NVIDIA has reportedly asked Asia Vital Components to design a microchannel cold plate (MCCP) for Rubin Ultra (launching in 2027).

Continue reading: NVIDIA could change cooling solution for Rubin Ultra AI GPUs for huge 2300W thermal concerns (full post)

HP intros ZGX Nano G1n AI workstation: powered by NVIDIA's new GB10 Superchip

Anthony Garreffa | Oct 5, 2025 4:13 AM CDT

HP has just unveiled its new compact AI workstation with the introduction of the new ZGX Nano G1n system, powered by NVIDIA's powerful new GB10 Superchip.

The new HP ZGX Nano G1n compact AI workstation measures just 15 cm x 15 cm x 5 cm, making for a tiny system with huge computing power courtesy of the powerful NVIDIA GB10 Superchip, delivering both powerful Grace CPU cores and a Blackwell-based GPU.

HP does something a little special with the ZGX Nano G1n: its ZGX Toolkit, a software stack running on NVIDIA's AI stack (DGX OS) that provides a smoother development experience. HP's new mini AI workstation packs 1000 TOPS of AI compute thanks to the NVIDIA Blackwell GPU, connected to the 20-core NVIDIA Grace CPU through the NVLink-C2C chip-to-chip interconnect.
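Combining the quoted chassis dimensions and compute figure gives a striking compute-per-liter number (our derivation from the article's figures, not an HP spec):

```python
# Compute density of the ZGX Nano G1n, derived from the article's figures
# (15cm x 15cm x 5cm chassis, 1000 TOPS). Not an official HP spec.
w_cm, d_cm, h_cm = 15, 15, 5
tops = 1000

volume_l = (w_cm * d_cm * h_cm) / 1000.0   # cm^3 -> liters
print(volume_l)                  # 1.125 liters
print(round(tops / volume_l))    # ~889 TOPS per liter
```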

Continue reading: HP intros ZGX Nano G1n AI workstation: powered by NVIDIA's new GB10 Superchip (full post)

Microsoft exec says the future of Windows is AI agents 'taking on even more complex tasks'

Darren Allan | Oct 3, 2025 1:00 PM CDT

A new blog post from Microsoft talks about the NPU and how it's vital to the future of Windows, driving one of the key elements in the OS, namely AI agents.

The post from Athima Chansanchai (spotted by Ghacks), who's a corporate news reporter on the Microsoft Stories team, is mostly about recapping what we already know about NPUs and accelerating local AI workloads (and a vehicle for plugging Copilot+ PCs, naturally).

However, it contains some interesting quotes giving us another strong hint of how important AI will be for next-gen Windows, from corporate VP Steven Bathiche, one of the co-founders of the Applied Sciences team at Microsoft.

Continue reading: Microsoft exec says the future of Windows is AI agents 'taking on even more complex tasks' (full post)

European AI cloud company secured tax breaks on NVIDIA AI GPUs, used them for crypto mining

Anthony Garreffa | Oct 3, 2025 7:17 AM CDT

A criminal investigation that saw raids of a European AI cloud company is focusing on whether the company used its $586 million worth of NVIDIA AI GPUs for crypto mining, after getting massive tax breaks on the funds to buy those AI chips.

A new report from Bloomberg says that European prosecutors are looking into Northern Data and its purchase of GPUs for its site in northern Sweden. Authorities are investigating whether Northern Data, which is backed by stablecoin issuer Tether Holdings SA, obtained a tax break by claiming that the AI chips were being used for AI, when instead they were being used for cryptocurrency mining.

A few years ago, Sweden embraced cryptocurrency mining, but reversed course in 2023, removing tax incentives for companies establishing mining operations while leaving them in place for data centers. Since then, Swedish tax authorities have been investigating several crypto miners for allegedly providing misleading information to benefit from the incentives, according to Bloomberg's sources.

Continue reading: European AI cloud company secured tax breaks on NVIDIA AI GPUs, used them for crypto mining (full post)

NVIDIA CEO Jensen Huang says China is 'nanoseconds behind' the US in chip development

Anthony Garreffa | Sep 29, 2025 7:46 AM CDT

China is just "nanoseconds behind" the United States in chip development according to NVIDIA CEO Jensen Huang.

US companies like NVIDIA have been working to stay competitive in China for a while now, even as Chinese companies work around the clock to become "NVIDIA-free". The US government should allow its technology industry to compete around the world -- including in China -- to "proliferate the technology around the world" so that it can "maximize America's economic success and geopolitical influence", says Jensen.

The NVIDIA founder said that China is "nanoseconds behind" the US, adding "so we've got to compete". Jensen said during a podcast hosted by tech investors Brad Gerstner and Bill Gurley: "This is a vibrant, entrepreneurial, hi-tech, modern industry".

Continue reading: NVIDIA CEO Jensen Huang says China is 'nanoseconds behind' the US in chip development (full post)

AMD's next-gen Instinct MI450X 'forced' NVIDIA to increase TGP, memory bandwidth on Rubin GPUs

Anthony Garreffa | Sep 28, 2025 6:13 PM CDT

AMD's next-gen Instinct MI450X AI accelerator has reportedly "forced" NVIDIA to make changes to its Rubin VR200 AI GPU, according to the latest reports.

In a new post on X from SemiAnalysis, we're hearing that in order for NVIDIA's new Rubin AI GPUs to maintain a lead over AMD's upcoming Instinct MI450X series AI chips, the VR200 Rubin had its HBM4 memory bandwidth increased to 20TB/sec per GPU (from 13TB/sec per GPU). Rubin went from 5TB/sec per GPU behind the MI450X in memory bandwidth to just ahead, with 0.4TB/sec per GPU more bandwidth.

Not only that, but the VR200 Rubin was previously an 1800W TGP design; two months ago it was reportedly bumped up to 2300W, closer to the Instinct MI450X's higher 2500W TGP. These new AI GPUs are thirsty... very, very thirsty.

Continue reading: AMD's next-gen Instinct MI450X 'forced' NVIDIA to increase TGP, memory bandwidth on Rubin GPUs (full post)

NVIDIA CEO is the AI GPU Godfather: Amazon and Google tell Jensen when they're making AI chips

Anthony Garreffa | Sep 28, 2025 3:37 AM CDT

Amazon and Google will give NVIDIA CEO Jensen Huang a call before they announce any new in-house AI chip efforts, as Jensen doesn't like being surprised by his competitors.

In a new article from The Information, it's reported that when companies like Amazon and Google have new AI chip announcements, they'll give Jensen a heads-up ahead of time, as he doesn't like being blindsided. These companies can't survive without access to NVIDIA GPUs, so they play ball with Jensen; with no real alternative, he's almost like the AI GPU Godfather.

The Information's report explains: "at the center of it all is Huang, to whom other leaders in the industry show unusual forms of deference. For example, when Amazon and Google have news to announce about their in-house AI chip efforts -- which they're developing to lessen their dependence on NVIDIA -- they've learned it's best to first give a heads up to Huang, say several people involved in these communications".

Continue reading: NVIDIA CEO is the AI GPU Godfather: Amazon and Google tell Jensen when they're making AI chips (full post)

NVIDIA CEO on sovereign AI for countries: 'no one needs atomic bombs, everyone needs AI'

Anthony Garreffa | Sep 26, 2025 7:46 PM CDT

NVIDIA CEO Jensen Huang says that developing AI infrastructure is "absolutely necessary" for nations to win the AI race, adding that "nobody needs atomic bombs, everyone needs AI".

Throughout the year, we've seen multiple nations either double down or go all-in on creating AI supercomputers and datacenters, with NVIDIA's dominant AI GPUs filling them all, including in Middle Eastern countries like Saudi Arabia and the UAE, as well as across Europe.

In a new interview with BG2, NVIDIA CEO Jensen Huang said that building AI infrastructure would become a necessity for nations, and that AI is arguably even more important than nuclear weapons when you factor in its long-term potential and impact.

Continue reading: NVIDIA CEO on sovereign AI for countries: 'no one needs atomic bombs, everyone needs AI' (full post)

Get ready for more AI in Windows 11 apps as Microsoft pushes out Windows ML for developers

Darren Allan | Sep 26, 2025 2:01 PM CDT

Microsoft is pushing forward with plans to help those developing software for Windows 11 incorporate AI into their products with the release of Windows ML.

As The Verge noticed, Microsoft just published a blog post announcing the general availability of Windows ML to app developers.

Microsoft explains: "Windows ML is the built-in AI inferencing runtime optimized for on-device model inference and streamlined model dependency management across CPUs, GPUs and NPUs."

Continue reading: Get ready for more AI in Windows 11 apps as Microsoft pushes out Windows ML for developers (full post)

Micron begins shipping industry's fastest HBM4 at 11Gbps, to partner with TSMC for future HBM4E

Anthony Garreffa | Sep 24, 2025 8:06 AM CDT

Micron has confirmed it has started shipping the industry's fastest 11Gbps HBM4 DRAM to its customers, while teasing it will partner with TSMC for its next-gen HBM4E memory.

In its recent earnings call for Q4 and FY2025, the US-based company teased key developments in its DRAM and NAND flash businesses. Micron posted $11.32 billion in revenue, up from $9.3 billion in the previous quarter, while full-year revenue grew to $37.38 billion from $25.11 billion.

Micron announced it had produced and shipped its first samples of its bleeding-edge HBM4 memory, with over 11Gbps pin speed and up to 2.8TB/sec of bandwidth. The company says its new HBM4 memory should outperform all of the competition -- SK hynix and Samsung, really -- in terms of performance and efficiency.
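The 2.8TB/sec figure lines up with a back-of-the-envelope check, assuming HBM4's 2048-bit per-stack interface (the interface width is our assumption from the JEDEC HBM4 standard, not stated in the article):

```python
# Back-of-the-envelope check of Micron's HBM4 bandwidth claim.
# Assumes a 2048-bit data interface per stack (per the JEDEC HBM4 standard).
pins = 2048                 # data pins per HBM4 stack (assumption)
gbps_per_pin = 11           # Micron's quoted pin speed

bandwidth_gb_s = pins * gbps_per_pin / 8    # bits -> bytes
print(bandwidth_gb_s)       # 2816.0 GB/s, i.e. ~2.8 TB/s per stack
```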

Continue reading: Micron begins shipping industry's fastest HBM4 at 11Gbps, to partner with TSMC for future HBM4E (full post)

OpenAI and NVIDIA's new AI project requires 4-5 million GPUs in one project alone

Anthony Garreffa | Sep 24, 2025 12:05 AM CDT

OpenAI and NVIDIA have a new $100B+ deal that requires up to 10 gigawatts of NVIDIA AI systems and up to 5 million GPUs for a single project, alongside its Stargate AI supercomputer project.

During a recent CNBC interview with NVIDIA and OpenAI, we got some greater insight into the $100 billion deal and its upwards of 4-5 million AI GPUs. NVIDIA CEO Jensen Huang explained: "This new project we're talking about, 10-gigawatts, or roughly, 4 million or 5 million GPUs, that's approximately, in one project, what we shipped all year this year, and twice as much as last year, twice as much as the year before that... This is a giant project".
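Jensen's numbers imply a useful rule of thumb of roughly 2-2.5kW of facility power per GPU (our derivation from the quoted figures; actual per-GPU power depends on cooling and infrastructure overhead):

```python
# Implied facility power per GPU, derived from the quoted project figures
# (10 gigawatts, 4-5 million GPUs). Not an official NVIDIA spec.
project_gw = 10
gpus_low, gpus_high = 4_000_000, 5_000_000

total_watts = project_gw * 1e9
w_per_gpu_5m = total_watts / gpus_high   # if 5 million GPUs
w_per_gpu_4m = total_watts / gpus_low    # if 4 million GPUs
print(w_per_gpu_5m, w_per_gpu_4m)        # 2000.0 2500.0 watts per GPU
```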

There was a DeepSeek moment that the world hasn't felt yet, according to OpenAI CEO Sam Altman, who said that the models at this point are "actually quite capable for things far beyond what most people use them for in ChatGPT" and that the world is "just catching up with that".

Continue reading: OpenAI and NVIDIA's new AI project requires 4-5 million GPUs in one project alone (full post)

NVIDIA announces plans to 'co-optimize' hardware roadmap with OpenAI's software

Jak Connor | Sep 22, 2025 12:10 PM CDT

NVIDIA and OpenAI have announced a new strategic partnership that involves NVIDIA investing $100 billion into the ChatGPT-creator to power the next generation of AI models it will be building.

A new press release on the OpenAI website states that the two companies have signed a letter of intent to build "at least 10 gigawatts of NVIDIA systems for OpenAI's next-generation AI infrastructure". The new partnership will assist in the deployment of new datacenters and power capacity, with the first phase of the plan to come online in the second half of 2026. Notably, these new systems will use NVIDIA's Vera Rubin platform, the company's upcoming next-generation GPU architecture.

In a nutshell, NVIDIA and OpenAI have partnered to facilitate the development of superintelligent AI and, ultimately, artificial general intelligence, which will be achieved through AI factory growth. According to the press release, NVIDIA and OpenAI will "co-optimize" their roadmaps for OpenAI's model and infrastructure software, along with NVIDIA's upcoming hardware and software.

Continue reading: NVIDIA announces plans to 'co-optimize' hardware roadmap with OpenAI's software (full post)

Microsoft announces world's biggest AI datacenter with hundreds of thousands of NVIDIA GPUs

Kosta Andreadis | Sep 19, 2025 3:02 AM CDT

Microsoft has announced the "world's most powerful AI datacenter", its largest and "most sophisticated" AI factory to date. Called Fairwater, the facility is located in Wisconsin, US, and Microsoft plans to construct identical Fairwater data centers across the country.

"Fairwater is a seamless cluster of hundreds of thousands of NVIDIA GB200s, connected by enough fiber to circle the Earth 4.5 times," Microsoft CEO Satya Nadella writes on social media. "It will deliver 10x the performance of the world's fastest supercomputer today, enabling AI training and inference workloads at a level never before seen."

To give you a sense of scale, the Fairwater data center spans a massive 315 acres, comprising three large buildings that offer 1.2 million square feet of data center space. Fairwater is distinct from most data centers in that it's designed to function as a single, massive AI supercomputer, utilizing interconnected NVIDIA GB200 servers and the latest NVLink and NVSwitch technologies, which offer bandwidth measured in terabytes per second.
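Nadella's fiber comparison translates to a concrete length, using Earth's equatorial circumference of roughly 40,075 km (the total fiber length is our derivation, not a Microsoft figure):

```python
# Rough length of Fairwater's fiber, from the "circle the Earth 4.5x" claim.
# Uses Earth's equatorial circumference (~40,075 km); the total is derived.
earth_circumference_km = 40_075
laps = 4.5

fiber_km = earth_circumference_km * laps
print(round(fiber_km))   # ~180,338 km of fiber
```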

Continue reading: Microsoft announces world's biggest AI datacenter with hundreds of thousands of NVIDIA GPUs (full post)

Samsung finally passes NVIDIA's strict HBM3E 12-Hi qualification tests: 10,000 units on the way

Anthony Garreffa | Sep 18, 2025 11:45 PM CDT

Samsung Electronics has finally passed NVIDIA's strict HBM3E 12-Hi memory qualification tests for use on its AI GPUs, with the South Korean memory giant ready to supply 10,000 units.

In a new report from AlphaEconomy picked up by insider @Jukanrosleve on X, Samsung recently signed a supply contract with NVIDIA for its HBM3E 12-Hi memory, which will see Samsung supply around 10,000 units of its qualified HBM3E 12-Hi product. Samsung commented, saying that everything is "progressing as scheduled".

Previous rumors had pointed to Samsung supplying its new HBM3E 12-Hi memory, but this seems more solid now that a contract is in place, after fellow South Korean memory rival SK hynix had been exclusively supplying NVIDIA with all of the high-end HBM3 and HBM3E memory it needed.

Continue reading: Samsung finally passes NVIDIA's strict HBM3E 12-Hi qualification tests: 10,000 units on the way (full post)
