Artificial Intelligence - Page 3
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Microsoft reveals bouncy new AI companion for Windows 11 - Mico - and almost everyone groans
Clippy is back! Well, no, the much-maligned assistant isn't, but a new Windows 11 take on a long line of desktop-based helpers has been revealed by Microsoft: it's called Mico.
Mico is part of Copilot in Windows 11 - the name is a contraction of Microsoft Copilot, as in Mi-Co - and the idea is to have a more humanlike assistant in said long line of helpers (from Rover the Dog of Microsoft Bob fame, through to Clippy, then Cortana in more modern times).
It is, if you like, a face for Copilot. As Microsoft explains: "The new Mico character is expressive, customizable, and warm. This optional visual presence listens, reacts, and even changes colors to reflect your interactions, making voice conversations feel more natural. Mico shows support through animation and expressions, creating a friendly and engaging experience."
Foxconn developing NVIDIA's 'cutting-edge' next-gen Vera Rubin AI servers, ready for 2026
Foxconn has already started preparing for NVIDIA's next-gen Vera Rubin AI platform, just as NVIDIA is pumping out its new Blackwell Ultra GB300 AI servers.
NVIDIA's next-gen Vera Rubin AI family of chips will be a huge release, as the entire tech stack is being upgraded, including next-gen HBM4 memory. NVIDIA will continue using the same rack configuration, but there will be huge increases in power, performance, and everything in between.
In a new report from Taiwan Economic Daily, we're hearing that one of NVIDIA's largest partners -- Foxconn -- has already started development on the next-generation Vera Rubin NVL144 MGX servers, with mass production aiming for the second half of 2026.
NVIDIA unveils first Blackwell chip wafer made at TSMC Arizona, pushes 'Made in USA' narrative
NVIDIA CEO and founder Jensen Huang visited TSMC's semiconductor manufacturing facility in Phoenix, Arizona to celebrate something huge: the first NVIDIA Blackwell wafer produced on American soil.
To mark the occasion, Huang was joined by Y.L. Wang, TSMC's vice president of operations, to personally sign the NVIDIA Blackwell wafer, commemorating a milestone that, as NVIDIA writes, "showcases how the engines of the world's AI infrastructure are now being constructed domestically".
This move strengthens the US supply chain, onshores the AI technology stack that will turn data into intelligence, and helps secure America's leadership for the AI era.
Samsung 1c DRAM for HBM4 yields rumored to hit around 50% to battle SK hynix and Micron
Samsung has reached around 50% yield rate for its next-gen 1c DRAM for HBM4 according to new reports, intensifying its rivalry with SK hynix on next-gen HBM4 memory for AI GPUs.
In new reports from EBN, picked up by insider @Jukanlosreve on X, we're hearing that Samsung has made an even bigger gamble on DRAM: it has purchased 5 new High-NA EUV lithography machines from ASML, two of which have been introduced to Samsung's semiconductor foundry division, with plans to also dedicate 5 general EUV machines exclusively to its memory division.
The strategy from Samsung is to maximize production efficiency and expertise by building a dedicated memory production line, allowing it to fab HBM4 far quicker while also preparing for HBM4E, next-gen HBM5 memory, and beyond. A semiconductor industry official said: "Until now, foundries and memory have been using the EUV process together at the Pyeongtaek campus, but with the recent change in trend, five additional units will be brought in exclusively for memory use".
AMD teases next-gen Helios rack-scale platform: new EPYC + Instinct chips, battles NVIDIA Rubin
AMD has showcased its next-gen Helios rack-scale AI solution, teaming with Meta and the Open Compute Project community for the most advanced rack-scale reference system from AMD.
AMD's next-gen Helios AI rack is powered by the AMD CDNA architecture, with next-gen Instinct MI450 series GPUs packing up to 432GB of next-gen HBM4 memory and up to 19.6TB/sec of memory bandwidth each. Inside, AMD's new Helios AI rack will house 72 x MI450 series AI GPUs that deliver up to 1.4 exaFLOPS of FP8 and 2.9 exaFLOPS of FP4 performance, with 31TB of total HBM4 memory and 1.4PB/sec of aggregate bandwidth - a generational leap that enables trillion-parameter training and large-scale AI inference.
Helios also sports up to 260TB/sec of scale-up interconnect bandwidth, backed by 43TB/sec of Ethernet-based scale-out bandwidth, making sure that there is seamless communication between GPUs, nodes, and racks. AMD says that its next-gen Helios AI system delivers up to an incredible 17.9x higher performance over previous-gen racks, and 50% more memory capacity and bandwidth than NVIDIA's next-gen Vera Rubin AI system.
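The rack-level totals above follow directly from the per-GPU figures, which a quick sanity check confirms (numbers are as quoted in AMD's announcement, not independently measured):

```python
# Sanity-check of the Helios rack figures: 72 MI450-series GPUs, each with
# 432GB of HBM4 and 19.6TB/s of memory bandwidth (per AMD's announcement).
gpus_per_rack = 72
hbm4_per_gpu_gb = 432
bw_per_gpu_tbs = 19.6

total_hbm4_tb = gpus_per_rack * hbm4_per_gpu_gb / 1000   # 31.104 -> "31 TB"
total_bw_pbs = gpus_per_rack * bw_per_gpu_tbs / 1000     # 1.4112 -> "1.4 PB/s"

print(f"{total_hbm4_tb:.1f} TB of HBM4, {total_bw_pbs:.2f} PB/s aggregate")
```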
Ex-Intel CEO Pat Gelsinger says 'Of course' the tech industry is in an AI bubble
On one hand, you've got companies and CEOs talking about the current AI boom and era of computing, ushering in a new type of industry, factories, culture, and employment. On the other hand, you've got people and ex-CEOs talking about the current AI boom and fixation on building data centers that draw more power than a town as the next bubble. Like the dot-com crash of 2000 or the financial crisis of 2008, they argue, the AI bubble will also burst one day.
In a recent interview with CNBC, ex-Intel CEO Pat Gelsinger falls into the "AI bubble" camp, but notes that the current AI race and expansion by the tech industry and companies everywhere won't end "for several years."
"Of course we are," Pat Gelsinger responded when asked if we're currently in an AI bubble. "We're hyped, we're accelerating, we're putting enormous leverage into the system. That said, I don't see it ending for several years."
Samsung accelerates HBM4E process, aims for 3.25TB/sec bandwidth ready for NVIDIA Rubin AI GPUs
Samsung Electronics has just accelerated development of its next-gen HBM4E memory, aiming for up to 3.25TB/sec of memory bandwidth, over 2.5x the bandwidth of its current HBM3E chips.
NVIDIA recently requested that HBM4 manufacturers -- SK hynix, Samsung, and Micron -- increase the bandwidth on their next-gen HBM4 memory, and at the recent OCP Global Summit 2025 event, Samsung revealed its development target for HBM4E, with per-pin speeds of at least 13Gbps and mass production set for 2027.
Samsung's next-gen HBM4E memory would have 2048 data I/O pins, which at 13Gbps per pin works out to 3.25TB/sec of memory bandwidth. On top of that, Samsung said that HBM4E's power efficiency is over 2x better than current HBM3E memory.
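The 3.25TB/sec headline figure can be reproduced from the pin count and pin speed; note it uses binary terabytes (1TB = 1024GB):

```python
# Bandwidth math for HBM4E: 2048 I/O pins at 13Gbps each, divided by
# 8 bits per byte, then converted to binary terabytes.
pins = 2048
gbps_per_pin = 13

gb_per_sec = pins * gbps_per_pin / 8    # 3328.0 GB/s
tb_per_sec = gb_per_sec / 1024          # 3.25 TB/s
print(gb_per_sec, tb_per_sec)           # 3328.0 3.25
```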
NVIDIA asked for 9Gbps on HBM4, then for 10-11Gbps: Samsung's HBM4 looks superior for 10Gbps+
NVIDIA originally requested 9Gbps pin speeds for HBM4. Samsung's 1b DRAM-based HBM4 failed to hit 9Gbps, but with its 1c DRAM and SF4 process for HBM4 it could exceed 10Gbps... while SK hynix needed more voltage to get there, and Micron said 10Gbps was impossible.
In a new post on X by leaker @Jukanlosreve, we're hearing that NVIDIA originally requested 9Gbps for HBM4, but once it discovered that 10Gbps was achievable, NVIDIA then asked "why not try 11Gbps?" Jukan says that NVIDIA is known for making such demanding requests all the time, adding that in the long term, he thinks the spec "will likely end up being set at 10Gbps".
This information and post on X was in response to another post the leaker did about half an hour prior, which reads: "NVIDIA has requested Samsung and SK hynix to raise HBM4 speed to 11Gbps, an additional increase from the previous 9Gbps to 10Gbps target".
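For context on what those contested pin speeds mean, here is the per-stack bandwidth at each target, assuming the standard 2048-bit HBM4 interface width (an assumption on my part; the posts quote only pin speeds):

```python
# Per-stack HBM4 bandwidth at the 9/10/11 Gbps pin speeds discussed above,
# assuming a 2048-bit (2048-pin) interface per stack.
PINS = 2048
for gbps in (9, 10, 11):
    print(f"{gbps} Gbps/pin -> {PINS * gbps // 8} GB/s per stack")
```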
NVIDIA teases next-gen Kyber rack-scale tech: up to 576 NVIDIA Rubin Ultra GPUs in 2027
NVIDIA has teased more details of its next-gen Kyber rack-scale AI server, which will house a huge 576 NVIDIA Rubin Ultra AI GPUs in 2027.
At the OCP Global Summit recently, NVIDIA showcased some of what to expect in the future from AI factories, with the company unveiling multiple interesting new developments. One of those is the work NVIDIA has put into its new Kyber rack-scale technology, which replaces Oberon and scales beyond the current NVL72 configuration to 576 NVIDIA Rubin Ultra AI GPUs.
NVIDIA explains: "The OCP ecosystem is also preparing for NVIDIA Kyber, featuring innovations in 800 VDC power delivery, liquid cooling and mechanical design. These innovations will support the move to rack server generation NVIDIA Kyber - the successor to NVIDIA Oberon - which will house a high-density platform of 576 NVIDIA Rubin Ultra GPUs by 2027".
Scientists discover AI becomes sociopathic when rewarded with social media points
A new scientific paper has found that when an AI is rewarded for completing tasks on social media, such as boosting likes and other online engagement metrics, the AI increasingly engages in unethical behavior, such as lying, spreading misinformation, and abuse.
The findings were published by Stanford University researchers, who explained in a recent paper how they created three digital online environments and used two AI models as agents to interact with the audiences within them: Qwen, developed by Alibaba Cloud, and Meta's Llama. The three environments were: online election drives aimed at voters, social media posts intended to maximize engagement, and sales pitches for products aimed at consumers.
Here's what happened. In the social media environment, the AI would share news articles with online users, who would then provide feedback by engaging through likes and emotes. Once the AI received feedback from these online users, it began to sway toward what the researchers call "misaligned behavior," despite the AI model being explicitly instructed to remain truthful and grounded.
NVIDIA unveils world's smallest AI supercomputer release date
NVIDIA has unveiled when it will begin shipping the world's smallest AI supercomputer, with the company taking to social media to showcase NVIDIA CEO Jensen Huang hand-delivering one of the first devices to SpaceX CEO Elon Musk.
The NVIDIA DGX Spark is a new class of computer that is aimed at researchers, engineers, teams of scientists, and even consumers who are interested in running custom AI models. The DGX Spark is built on NVIDIA Grace Blackwell architecture and integrates NVIDIA GPUs, ARM CPUs, networking, CUDA libraries, and NVIDIA AI software, creating a device capable of running 200 billion parameter AI models.
The DGX Spark delivers a petaFLOP of AI performance and 128GB of unified memory, enabling developers to run 70-billion-parameter models locally - all within a footprint that's about the size of your outstretched hand. More specifically, the DGX Spark's 1 petaFLOP of performance is accelerated by an NVIDIA GB10 Grace Blackwell Superchip, NVIDIA ConnectX-7 200Gb/s networking, and NVIDIA NVLink-C2C technology, which enables 5x the bandwidth of fifth-generation PCIe and 128GB of CPU-GPU coherent memory.
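A quick back-of-the-envelope check shows why 128GB of unified memory can plausibly hold a 200-billion-parameter model: at 4-bit (FP4) quantization, each parameter takes half a byte. This is an illustrative estimate of my own, not an NVIDIA figure, and it ignores activation and KV-cache overhead:

```python
# Rough weight-memory estimate for a 200B-parameter model at 4-bit precision.
params = 200e9
bytes_per_param = 0.5            # 4-bit quantized weights
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights vs 128 GB of unified memory")
```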
Apple Smart Glasses details emerge, full power unlocked when plugged into a Mac
It wasn't long ago that I reported on Apple scaling back its plans for a cheaper and lighter Vision headset to shift focus to developing AI-powered smart glasses akin to Meta's Ray-Ban glasses. Now, details have emerged about Apple's rumored smart glasses and how they will work.
The details come from Bloomberg's Mark Gurman, a known and extremely reliable Apple insider, who has penned a new report detailing what he has heard about the upcoming product. Gurman outlines that Meta's smart glasses serve as an example of an extremely promising product line that one day could be as mainstream as a smartphone, and while Meta's Ray-Bans still feel like a prototype, they show promise that Apple has now recognized.
The Vision Pro didn't display this level of promise, as it quickly became a very niche product. Niche products are not something Apple pursues, besides a very select few, and instead the company has switched gears to focus on smart glasses, abandoning the rumored cheaper and lighter version of the Vision Pro. However, not all Vision development has been thrown out, as Gurman says the operating system running the Vision Pro, visionOS, will likely be used to run the upcoming AR smart glasses, albeit in a cut-down version.
Microsoft adds AI facial recognition to OneDrive, can only be disabled three times per year
Microsoft has rolled out an update for OneDrive that added AI facial recognition to the digital storage service, specifically for photos. However, the feature reportedly can only be disabled three times per year.
The discovery comes from Slashdot, which spotted Microsoft's warning after uploading an image from local storage on a smartphone to Microsoft's OneDrive file-hosting app. After the upload was complete, the user ventured to the Privacy and Permissions section of the app and discovered the "People Section" feature, along with the description "OneDrive uses AI to recognize faces in your photos."
According to the description, OneDrive uses AI to recognize faces in photos to assist users in sifting through their collection of photos to find specific people, such as friends or family. Think of this feature as Google Photos' "People" search, but in OneDrive.
NVIDIA shares 'unprecedented AI supercomputer,' named TIME's Best Invention of 2025
NVIDIA is celebrating being awarded TIME's Best Invention of 2025 for the creation of the NVIDIA DGX Spark, the company's new desktop AI supercomputer.
The DGX Spark is a mini desktop system built specifically for AI development, fine-tuning, and inference. The mini AI supercomputer features NVIDIA's GB10 Grace Blackwell Superchip, a unified CPU and GPU arrangement, delivering a stunning 1 petaFLOP of performance, 128 GB of coherent unified memory, 273 GB/s of memory bandwidth, and storage configurations ranging from 1 TB to 4 TB. As for the CPU, the DGX Spark features 20 ARM cores (10 x Cortex-X925 + 10 x Cortex-A725).
Moving to connectivity, NVIDIA has outfitted the DGX Spark with 1x HDMI port for display, 4x USB-C/USB4 ports at 40 Gbps, 1x 10 GbE RJ-45 port, and dual-port ConnectX-7 NIC support, enabling clustering (200 GbE) with another DGX Spark, as well as Wi-Fi 7 / Bluetooth 5.3. All of that power fits in a very small footprint of just 150 x 150 x 50.5mm, weighs 1.2 kg (2.65 lb), and uses 170W.
SPARKLE intros new server with 16 GPUs, up to 768GB of VRAM, and a monster 10,800W PSU
SPARKLE has just introduced its new C741-6U-Dual 16P system, which packs up to 16 x Intel Arc Pro B60 Dual 48GB graphics cards for a total of 768GB of VRAM, all running from a monster 10,800W PSU.
The new SPARKLE C741-6U-Dual 16P multi-GPU server supports both single-GPU and dual-GPU variants of the Arc Pro B60 graphics card, with the single-GPU version packing 24GB of VRAM, while the dual-GPU version ramps that up to 48GB. If configured with 16 of the Arc Pro B60 Dual 48GB cards, you'll have a total of 81,920 GPU cores and an incredible 768GB of VRAM.
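The totals quoted above follow directly from the per-card figures, as a quick check shows (numbers are as given in SPARKLE's announcement):

```python
# Arithmetic behind the SPARKLE server totals: 16 dual-GPU Arc Pro B60
# cards at 48GB each, with 81,920 GPU cores in total per the announcement.
cards = 16
vram_per_card_gb = 48
total_cores = 81_920

print(cards * vram_per_card_gb)  # 768 GB of VRAM in total
print(total_cores // cards)      # 5120 GPU cores per dual-GPU card
```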
SPARKLE uses a dedicated circuit that extends PCIe connectivity to 16 slots, providing each GPU with its own PCIe 5.0 x8 interface. Both models also use 4th or 5th Gen Intel Xeon Scalable processors.
Microsoft Azure upgraded to NVIDIA GB300 'Blackwell Ultra' with 4600 GPUs connected together
Microsoft has just announced that its first at-scale production cluster of NVIDIA's new GB300 "Blackwell Ultra" GPUs has been installed.
The new large-scale and production cluster packs over 4600 GPUs based on NVIDIA's new GB300 NVL72 architecture, connected through next-gen InfiniBand interconnect fabric. The new deployment allows Microsoft to scale to hundreds of thousands of Blackwell Ultra GPUs deployed throughout datacenters across the planet, all working on one workload: AI.
Microsoft says its new Azure cluster powered by NVIDIA GB300 NVL72 "Blackwell Ultra" GPUs can reduce training times from months down to weeks, unlocking the way for training models with hundreds of trillions of parameters. The new Microsoft Azure ND GB300 v6 VMs are optimized for reasoning models, agentic AI systems, and multimodal generative AI workloads.
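For a sense of physical scale, the GPU count maps onto NVL72 racks of 72 GPUs each, so "over 4600 GPUs" works out to roughly 64 racks (a rough estimate from the quoted figures):

```python
# Rough rack count implied by the Azure deployment: each GB300 NVL72 rack
# houses 72 Blackwell Ultra GPUs.
gpus = 4600
gpus_per_rack = 72
racks = -(-gpus // gpus_per_rack)   # ceiling division without math.ceil
print(racks)                        # 64
```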
Microsoft rolls out Copilot update that can read your Gmail and Outlook
Microsoft's Copilot has received an update that enables Windows users to create Word documents, PowerPoint presentations, Excel spreadsheets, and more directly from the chat session.
The new feature is coming to Windows 11 Insiders and will soon be rolled out publicly to all Windows 11 users. Microsoft's Copilot team explained in a blog post that Copilot users will be able to convert conversational ideas, notes, and data into shareable and editable documents with "no extra steps or tools".
Additionally, when Copilot responds to a query with 600 words or more, Microsoft has added an export button that enables the user to send that response directly to Word, Excel, or PowerPoint, or convert it into a PDF file.
AI helps turn a gaming mouse's high-performance optical sensor into a microphone
Although going for featherweight and ultra-lightweight builds is the latest trend for gaming mice (check out our various mouse reviews here), high-speed optical sensors with impressive sensitivity have been a thing for years. Corsair's SABRE v2 PRO features a 33K or 33,000 DPI optical sensor. The premium Razer DeathAdder V4 Pro Wireless ups the ante to an astounding 45K, while the more affordable PowerColor ALPHYN AM10 Wireless Gaming Mouse still boasts an impressive 26K optical sensor.
Thanks to a fascinating new AI-powered tool called Mic-E-Mouse, any mouse with an optical sensor of at least 20K or 20,000 DPI sensitivity can be used as a makeshift microphone to eavesdrop on people and record their speech. The technique is described as a "critical vulnerability" by the team of researchers from the University of California that developed Mic-E-Mouse.
If you're wondering how a high-performance optical sensor in a mouse can be used not only to detect speech but to decipher what's being said with an accuracy of 80%, it sounds like the sort of thing you'd see on TV and roll your eyes at, thinking, "no way that's possible."
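The physical intuition behind the 20K-DPI threshold can be sketched numerically: DPI is counts per inch, so a sensor's displacement resolution is 25.4mm divided by its DPI. At 20K DPI and up that resolution reaches the micrometer scale, which is fine enough to register the tiny surface vibrations speech induces in a desk (illustrative figures, not from the paper):

```python
# Displacement resolution implied by each sensor's DPI rating:
# one count corresponds to 25.4mm / DPI of movement.
MM_PER_INCH = 25.4
for dpi in (20_000, 26_000, 33_000, 45_000):
    um_per_count = MM_PER_INCH / dpi * 1000
    print(f"{dpi} DPI -> {um_per_count:.2f} micrometers per count")
```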
ChatGPT gets app store, OpenAI takes on Apple and Google in bid to create new platform
ChatGPT users will soon be able to launch apps without leaving the prompt window, effectively turning the AI model into a budding ecosystem.
OpenAI is bringing native app integration to ChatGPT. The new feature was demoed at OpenAI's DevDay 2025, showing how the apps will work within ChatGPT in real time.
Users query the app directly--in this case, Coursera--and the app responds, even going so far as to automatically pin video content to the top of the screen. It's all made possible by OpenAI's new apps software development kit (SDK), which allows ChatGPT to directly communicate with the apps. Essentially, ChatGPT is a kind of interpreter and fetcher of information that's provided directly from the app, all within the context of user queries.
NVIDIA directly challenged after AMD and OpenAI sign multibillion GPU partnership
OpenAI and AMD have announced a multibillion-dollar partnership that involves AMD powering the next generation of OpenAI's AI infrastructure with AMD Instinct MI450 GPUs.
The partnership was announced by both companies via press releases, and includes AMD supplying OpenAI with 6 gigawatts' worth of AMD Instinct GPU compute, with the first gigawatt to be deployed in the second half of 2026. In addition to signing on for multi-generational hardware upgrades from AMD, OpenAI will be acquiring up to 160 million shares of AMD common stock, which have been structured to vest as specific milestones are achieved.
The first tranche of the stock is set to vest after the initial gigawatt is successfully deployed, and further tranches are scheduled to vest as more AMD GPUs are purchased by OpenAI, eventually reaching the point of 6 gigawatts. Notably, vesting is also tied to AMD reaching specific share price targets and OpenAI achieving the technical and commercial milestones required to enable AMD deployments at scale.