Artificial Intelligence
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
NVIDIA's Jensen Huang wants the AI doom and gloom to stop as it's 'extremely hurtful'
Before unveiling the Vera Rubin AI computing platform at CES 2026, NVIDIA CEO Jensen Huang sat down for an interview in which he discussed the doom-speak surrounding AI and its potential impact on humanity.
In an interview with No Priors, Huang discussed a wide range of AI topics: the biggest surprises of 2025, how AI will influence jobs, solving labor shortages with robotics, the AI "doomer" narrative, and regulation. Since ChatGPT's explosion in popularity and the billions of dollars thrown into developing new and more sophisticated AI models, some researchers and industry experts have warned about the potential impact on humanity once all-encompassing AI models emerge.
Some experts have issued warnings about how AI has the potential to destroy people's lives, while others raise privacy concerns over an increasingly encroaching surveillance state. But according to Huang, these concerns are unwarranted, and have actually done real harm to society's acceptance of AI. "[It's] extremely hurtful, frankly, and I think we've done a lot of damage with very well-respected people who have painted a doomer narrative," said Huang.
Oh no: NVIDIA's next-gen Vera Rubin AI systems to eat up MILLIONS of terabytes of SSDs
The only other word I heard more than "AI" at CES 2026 was "DRAM" and its ongoing crisis, but now things could get worse, with reports that NVIDIA's next-gen Vera Rubin AI systems will eat up MILLIONS of terabytes of SSD capacity in the years to come.
That's just Vera Rubin -- never mind Rubin Ultra, or NVIDIA's next-gen Feynman GPU architecture after that -- but in a new post on X, @Jukan has shared a Citi analysis of the subject.
Citi explained: "We estimate that approximately 1,152TB of additional SSD NAND will be required per Vera Rubin server system to support NVIDIA's ICMS operations. Accordingly, assuming Vera Rubin server shipments of 30,000 units in 2026 and 100,000 units in 2027, NAND demand driven by ICMS is projected to reach 34.6 million TB in 2026 and 115.2 million TB in 2027".
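Citi's projection is straightforward multiplication, and a quick back-of-the-envelope sketch (using only the figures quoted above) reproduces the headline numbers:

```python
# Back-of-the-envelope check of Citi's ICMS NAND estimate; all inputs are
# the figures quoted in the Citi note above.
TB_PER_SYSTEM = 1_152                        # additional SSD NAND per Vera Rubin system (TB)
SHIPMENTS = {2026: 30_000, 2027: 100_000}    # projected Vera Rubin server shipments

for year, units in SHIPMENTS.items():
    demand_tb = TB_PER_SYSTEM * units
    print(f"{year}: {demand_tb / 1e6:.1f} million TB of NAND")
    # 2026: 34.6 million TB, 2027: 115.2 million TB -- matching Citi's numbers
```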
HP forced to turn to Chinese memory makers over DRAM supply shortage
AI companies are gobbling up memory supply across the world, resulting in skyrocketing prices for memory modules for consumers. As supply constraints continue to grow, OEMs are being forced to turn to alternative suppliers, with Barron's analyst Tae Kim reporting that HP is struggling to obtain memory and is now looking to add Chinese memory makers to its list of component suppliers.
HP is reportedly looking to ship "limited" products into Asia and Europe. Kim also wrote that with supply drying up from memory suppliers such as Micron, Samsung, and others, OEMs such as HP will begin to turn to Chinese memory manufacturers like CXMT, whose DRAM wafer output is estimated to reach up to 300,000 units per month in 2026. While that figure is low compared to some of the other players in the market, CXMT is known for its DDR5 module supply, in part because it has no HBM business competing for that output.
CXMT is looking to raise $4.2 billion USD to expand production. One of the hurdles HP will need to overcome if it goes with a Chinese supplier such as CXMT is US regulation on sourcing semiconductors from China. Given the current state of memory supply and the insatiable demand for more of it, it's likely new rules will be put into place around seeking supply from Chinese memory makers.
Continue reading: HP forced to turn to Chinese memory makers over DRAM supply shortage (full post)
Razer has created an animated holographic AI companion that sits on your desk
Razer's Project AVA has evolved since its debut at CES 2025, when it was presented as an AI companion in the form of an esports coach. At CES 2026, it has grown into a flexible on-desk AI companion that pairs with a PC or laptop.
With digital AI avatars now very much a real thing, Project AVA's hook is that it takes the avatar off your display and puts it into a small transparent cylinder that sits on your desk. With multiple character avatars to choose from, Project AVA comes to life (so to speak) as a 5-inch animated character with eye tracking, facial expressions, lip-syncing, and full-body animation.
The Project AVA unit includes a full HD camera so it can see you and respond to its surroundings. At CES 2026, we got to see a demo of Project AVA that responded to what someone was wearing and provided real-time weather information when asked. However, what makes it interesting and impressive is the addition of PC Vision Mode.
Razer's Project Motoko turns AI smart glasses into a wireless headset
Announced at CES 2026, Razer's Project Motoko offers an interesting take on the AI smart glasses phenomenon, as it's a wireless gaming headset with built-in cameras. Powered by an undisclosed Snapdragon processor, the dual first-person cameras allow for real-time object and text recognition, which then feeds the on-board AI.
During our demonstration of Project Motoko, we had the AI translate a restaurant menu from Japanese to English and scan a table with a handful of ingredients to provide a quick, easy recipe for a meal. As a headset with a microphone, this is handled via speech and natural language, with AI responses fed directly through the headset.
The functionality is similar to Meta's smart glasses; however, as Project Motoko is a headset, the AI responses are kept private, so those nearby won't be able to listen in. And to capture audio from multiple sources, there are dual far- and near-field microphones that pick up voice commands and all other nearby sounds, including dialogue.
Continue reading: Razer's Project Motoko turns AI smart glasses into a wireless headset (full post)
AMD shows off next-gen Zen 6-based EPYC 'Venice' CPU, Instinct MI455X GPU for Helios AI racks
AMD has just shown off its next-gen world-first 2nm EPYC "Venice" CPU with Zen 6 cores, and its Instinct MI455X AI accelerator, ready for its next-gen Helios AI racks.
The company unveiled its new Helios AI rack at its recent Financial Analyst Day 2025, promising class-leading performance and efficiency for the AI workloads of the future. The new AMD Helios AI rack features a fully liquid-cooled design with 4 x Instinct MI455X AI GPUs and a single Zen 6-based EPYC "Venice" CPU.
Helios AI racks use AMD's new Pensando "Salina" 400 DPU and Pensando "Vulcano" 800 AI NIC for networking and interconnection. AMD's next-gen EPYC "Venice" CPUs come with up to 256 cores based on the Zen 6c architecture, and each Instinct MI455X AI GPU packs a ton of GPU cores and next-gen HBM4 memory.
AMD confirms next-gen Instinct MI500 AI accelerator uses CDNA 6, TSMC 2nm, HBM4E
AMD confirmed at CES 2026 that its next-generation Instinct MI500 AI accelerator will be fabbed on TSMC's new 2nm process node, and be powered by the next-gen CDNA 6 architecture and next-gen HBM4E memory.
We will see AMD launching its next-gen Instinct MI500 series AI accelerators in 2027, as the company moves to a faster annual release cadence in order to catch up with NVIDIA, mirroring how NVIDIA alternates its standard and "Ultra" offerings: Blackwell and then Blackwell Ultra, Rubin and then Rubin Ultra.
AMD provided some more concrete details about the MI500 at CES 2026 this week, confirming that it will be fabricated on an advanced 2nm process node at TSMC, use the new CDNA 6 architecture (the MI400 uses CDNA 5), and pack next-gen HBM4E memory (the next standard after HBM4).
SK hynix showcases next-gen 48GB HBM4 at 11.7Gbps, SOCAMM2, LPDDR6 for AI platforms
SK hynix showcased its next-gen memory solutions for AI at CES 2026, showing off its new 48GB HBM4, LPDDR6, SOCAMM2, and more for AI platforms of the future.
SK hynix showed off its next-gen 16-Hi HBM4 with 48GB capacity, newer HBM4 that will succeed the upcoming 12-Hi, 36GB HBM4 arriving this year. The 16-Hi 48GB HBM4 modules are bloody fast, with 2TB/sec of memory bandwidth per stack, destined for NVIDIA's next-gen Vera Rubin AI platform.
The company had its new 16-Hi 48GB HBM4 running at the industry's fastest speed of 11.7Gbps; the memory is still under development at SK hynix and will be released in the nearish future.
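As a rough sanity check on those figures (assuming the 2048-bit per-stack interface from the JEDEC HBM4 spec, which SK hynix doesn't restate here), per-stack bandwidth is just pin speed multiplied by interface width: the quoted 2TB/sec lines up with a roughly 8Gbps pin speed, while the demoed 11.7Gbps would work out closer to 3TB/sec per stack:

```python
# Rough HBM4 per-stack math. Assumption: 2048-bit interface per stack (per the
# JEDEC HBM4 spec); only the 48GB capacity and the 11.7Gbps pin speed come
# from SK hynix's CES demo.
INTERFACE_BITS = 2048

def stack_bandwidth_gbs(pin_gbps: float) -> float:
    """Per-stack bandwidth in GB/s: pin count * per-pin rate (Gbps) / 8 bits-per-byte."""
    return INTERFACE_BITS * pin_gbps / 8

print(stack_bandwidth_gbs(8.0))     # 2048.0 GB/s -> the ~2TB/sec headline figure
print(stack_bandwidth_gbs(11.7))    # 2995.2 GB/s -> closer to 3TB/sec at demo speed

# Capacity: 48GB across a 16-Hi stack implies 3GB (24Gb) per DRAM die
print(48 / 16, "GB per die in a 16-Hi 48GB stack")
```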
Upscale AI-generated videos to 4K from 720p with NVIDIA's RTX Video
RTX Video Super Resolution is like DLSS for watching videos on YouTube or other streaming platforms: it takes a lower-resolution video, like 720p, and leverages AI to upscale it to 4K, delivering a sharper, more detailed image. Like DLSS, RTX Video uses the Tensor Cores on GeForce RTX graphics cards for real-time upscaling.
At CES 2026, as part of a wide range of updates for RTX AI on GeForce RTX GPUs, NVIDIA announced that RTX Video will be coming to the popular, open-source AI platform ComfyUI in February. This means users with GeForce RTX GPUs will be able to take 720p AI-generated videos and upscale them "to 4K in seconds."
With the sheer computational power required to generate 4K AI video and images, most AI enthusiasts with a standard PC or laptop built for RTX AI create this content at lower resolutions, such as 720p.
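The appeal is easy to quantify: 4K carries nine times the pixels of 720p, so generating at the lower resolution leaves the model synthesizing only a ninth of the final pixel count, with the Tensor-Core upscaler filling in the rest:

```python
# Why generate at 720p and upscale to 4K? The pixel counts tell the story.
w720, h720 = 1280, 720     # 720p generation resolution
w4k, h4k = 3840, 2160      # 4K (2160p) output resolution

linear_scale = w4k / w720                   # 3x in each dimension
pixel_ratio = (w4k * h4k) / (w720 * h720)   # 9x the total pixels

print(f"{linear_scale:.0f}x linear upscale, {pixel_ratio:.0f}x the pixels")
# The generator only synthesizes 1/9th of the final pixel count.
```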
Continue reading: Upscale AI-generated videos to 4K from 720p with NVIDIA's RTX Video (full post)
NVIDIA officially unveils Rubin: its next-gen AI platform with huge upgrades, next-gen HBM4
NVIDIA founder and CEO Jensen Huang proudly took the stage at CES 2026, unveiling the company's next-generation Rubin AI platform.
NVIDIA's new Rubin AI platform is the successor to its dominant Blackwell AI chips; Rubin is the first extreme-codesigned, six-chip AI platform, and Jensen added that it's now in full production. NVIDIA is aiming to "push AI to the next frontier" with Rubin, not just offering far more computing power, but slicing the cost of generating tokens to around 1/10 of Blackwell's, making large-scale AI "far more economical to deploy".
Extreme codesign means designing all of the components together, which is essential because scaling AI to gigascale requires tighter integration and innovation across chips, trays, racks, networking, storage, and software to remove bottlenecks. This massively reduces the costs of training and inference, added Huang.
Intel's next-gen 'Jaguar Shores' Gaudi AI accelerator rumored to use new HBM4E memory
Intel's next-generation Jaguar Shores data center AI accelerator platform is rumored to be using newer HBM4E memory, which could launch sometime in the second half of 2027.
At the Intel AI Summit Seoul in South Korea back in July 2025, the company seemed set on HBM4 from SK hynix for Jaguar Shores, with a release in 2026. However, Intel hasn't had a stable or successful run with its Gaudi AI accelerators and their release schedule, and it knows it faces almost insurmountable competition from AMD and, even more so, NVIDIA -- so timelines can change, and specifications, like a move to faster HBM4E, can change too.
The new information regarding Intel's use of HBM4E on its next-gen Jaguar Shores AI platform comes from leaker @Bionic_Squash on X, who responded to @harukaze5719's mention of Jaguar Shores using HBM4 with the simple reply: "Jaguar is HBM4E".
SK hynix, Samsung, and Micron fighting for NVIDIA supply contracts for new 16-Hi HBM4 orders
Samsung, SK hynix, and Micron are all racing each other to develop new 16-Hi HBM, because NVIDIA has requested supply of the new memory chips for the second half of 2026.
16-Hi HBM hasn't been commercialized before, with many technological hurdles to overcome -- DRAM stacking chief among them, as things get far more complicated the taller the stack. In a new report from the Electronic Times, NVIDIA reportedly requested that domestic and foreign memory manufacturers deliver 16-Hi HBM memory chips by Q4 2026.
SK hynix and Samsung Electronics in South Korea, as well as US-based Micron, have all begun full-scale development work toward mass-production supply of 16-Hi HBM memory chips to NVIDIA. The outlet reports that concrete contracts haven't been signed yet, and that discussions over initial production volumes of 16-Hi HBM chips are still happening internally.
NVIDIA and SK hynix to introduce 'AI SSD' with 10x more performance in middle of DRAM crisis
NVIDIA has teamed with SK hynix on a next-gen, ultra-powerful SSD solution for AI inferencing that could offer 10x the performance, right in the middle of the worst DRAM crisis ever.
SK hynix has formalized development of the next-gen SSD with NVIDIA after the South Korean memory giant enjoyed great results supplying HBM for NVIDIA's AI GPUs; now, its customer- and service-tailored product development is expanding into the NAND flash sector.
In a new report from Korean outlet Chosun, SK hynix Vice President Kim Cheon-seong said at the recent "2025 Artificial Intelligence Semiconductor Future Technology Conference" (AISFC) that SK hynix was developing a new SSD with NVIDIA offering 10x more performance. The new SSD is dubbed "Storage Next" by NVIDIA and "AIN P" (AI-NAND Performance) by SK hynix -- a proof of concept that is in the works with the goal of releasing a prototype before the end of 2026.
KIOXIA's groundbreaking AiSAQ Technology now available in leading open-source vector database
KIOXIA's open-source AiSAQ (All-in-Storage ANNS with Product Quantization) has been a game-changer for running complex AI models by offloading vectorized data from expensive DRAM to SSD storage. With memory limitations and costs playing a significant role in which AI workloads can or cannot run, AiSAQ delivers a low-latency, scalable solution for Retrieval Augmented Generation (RAG) pipelines.
This week, KIOXIA announced that AiSAQ has been integrated into Milvus, one of the world's most widely adopted open-source vector databases. Starting with version 2.6.4, AI developers and enterprises can tap into the power of AiSAQ to scale AI applications with SSD storage. With the growth in RAG demands and the size of vector databases for inference, scaling DRAM is often not an option due to the exponential increase in cost.
KIOXIA's open-source AiSAQ is groundbreaking because it dramatically reduces DRAM requirements for running complex AI workloads, opening the door to large-scale system deployment that's more affordable and easier to scale, thanks to large capacity and fast SSD storage.
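To illustrate the core idea, here's a minimal sketch of generic product quantization -- not KIOXIA's actual AiSAQ implementation -- showing how PQ compresses each vector into a handful of byte codes, so the index kept in DRAM shrinks dramatically while the full-precision vectors can live on SSD for a final re-ranking pass:

```python
# Minimal product-quantization (PQ) sketch; NOT KIOXIA's AiSAQ code, just the
# generic compression idea behind "All-in-Storage ANNS with Product Quantization".
import numpy as np

rng = np.random.default_rng(0)
N, D, M, K = 10_000, 128, 8, 256   # vectors, dims, subspaces, centroids per subspace
vectors = rng.standard_normal((N, D)).astype(np.float32)

# "Train" one codebook per subspace (toy training: random vectors as centroids)
sub_dim = D // M
codebooks = [vectors[rng.choice(N, K, replace=False), m*sub_dim:(m+1)*sub_dim]
             for m in range(M)]

# Encode: each 128-float vector becomes M one-byte codes (512 bytes -> 8 bytes)
codes = np.empty((N, M), dtype=np.uint8)
for m, cb in enumerate(codebooks):
    sub = vectors[:, m*sub_dim:(m+1)*sub_dim]
    dists = ((sub[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
    codes[:, m] = dists.argmin(1)

print(f"raw vectors: {vectors.nbytes/1e6:.1f} MB -> PQ codes: {codes.nbytes/1e6:.2f} MB")
# ~5.1 MB of raw float32 vectors compress to ~0.08 MB of codes (64x smaller);
# the codes stay in DRAM for coarse search, while raw vectors are paged from
# SSD only for the final shortlist of candidates.
```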
This tiny personal AI supercomputer can run 120B AI models while fitting in your hand
US deep-tech AI startup Tiiny AI has just unveiled the world's smallest personal AI supercomputer, the new Tiiny AI Pocket Lab, which has been officially verified by Guinness World Records under "The Smallest MiniPC (100B LLM Locally)".
This is the first global unveiling of the new Tiiny AI Pocket Lab, which will fit in your hands -- or your pocket, duh -- and is capable of running up to a full 120-billion-parameter LLM (Large Language Model) entirely on-device, without the need for cloud connectivity, servers, or high-end GPUs.
Tiiny has developed its super-small AI supercomputer for energy-efficient personal intelligence, and the Tiiny AI Pocket Lab runs within a 65W power envelope. The new Tiiny AI Pocket Lab enables massive AI model performance at a fraction of the energy and carbon footprint of traditional GPU-based systems.
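Some rough math shows why that's notable: Tiiny hasn't disclosed how the model weights are stored, but under common quantization assumptions, the weights of a 120-billion-parameter LLM alone demand tens to hundreds of gigabytes:

```python
# Rough weight-memory math for a 120B-parameter LLM. Tiiny hasn't disclosed
# its quantization scheme; these bit widths are illustrative assumptions.
PARAMS = 120e9

for bits in (16, 8, 4):
    gb = PARAMS * bits / 8 / 1e9   # bytes per parameter = bits / 8
    print(f"{bits}-bit weights: ~{gb:.0f} GB")

# 16-bit: ~240 GB, 8-bit: ~120 GB, 4-bit: ~60 GB -- even aggressively
# quantized, a 120B model's weights need tens of GB of fast local memory,
# which is what makes a palm-sized, 65W device running it so notable.
```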
NVIDIA's new GPU location verification feature for AI GPUs to stop smuggling, no kill switches
NVIDIA has built a new location verification technology that would help the company know where its AI GPUs are being operated, in a bid to help prevent its AI chips from being smuggled into countries where US export restrictions apply.
In a new report from Reuters, the outlet said that NVIDIA has demonstrated its new location verification feature privately over the last few months, but hasn't released it just yet. When it's released, it'll act as a software option that customers and data center operators can install, so they can keep a tighter eye on the AI chips in their fleets.
The newly-developed software from NVIDIA is an opt-in, customer-installed service that keeps an eye on GPU usage, configuration, and errors -- a pretty decent feature set for data center operators.
US authorities catch 'trafficking network' smuggling $160M of NVIDIA AI chips to China
US authorities have busted an AI chip trafficking network that was attempting to send $160 million worth of NVIDIA H100 and H200 AI GPUs to China, as the smugglers were changing the shipments' final destination.
In a press release issued by the U.S. Department of Justice, authorities reported that a trafficking network in Houston, Texas, has been convicted of smuggling NVIDIA AI chips to China using a "complex scheme". Court documents name Alan Hao Hsu and those who worked for his company, Hao Global LLC, as having attempted to export NVIDIA H100 and H200 AI GPUs worth $160 million by manipulating official paperwork and hiding the "ultimate destination of the GPUs".
The network itself was busted through the discovery of a wire transfer that began in the People's Republic of China (PRC); the NVIDIA AI GPUs were shipped to US warehouses and then rebranded as "SANDKYAN", allowing the group to misclassify the AI GPUs and export them.
NVIDIA CEO Jensen Huang tells Joe Rogan an 'AI doomsday is never going to happen'
NVIDIA CEO Jensen Huang sat down with Joe Rogan last week, where he shared his thoughts on whether today's LLMs could turn into tomorrow's Terminator.
Joe asked Jensen what he thought about today's AI and LLMs (Large Language Models), spanning generative AI, edge AI, agentic workflows, and more. LLMs are super advanced right now, and are poised to replace many humans in their roles in the coming years, but some worry AI-boosted capabilities will go "too far" and displace human beings as the "apex species".
Joe Rogan asked: "Well, I don't assume that it would do harm to us, but the fear would be that we would no longer have control and that we would no longer be the apex species on the planet. This thing that we created would now be. Is that funny?"
NVIDIA CUDA Tile is the largest and most comprehensive update to the platform in 20 years
With the release of NVIDIA CUDA 13.1, the company is introducing the "largest and most comprehensive update to the CUDA platform since it was invented two decades ago." Alongside new features and performance improvements, the arrival of NVIDIA CUDA Tile is set to be a game-changer for AI programming.
The initial release is limited to the current Blackwell generation of GPU hardware (future versions will support more architectures), with CUDA Tile programming letting users bring their code up a layer of abstraction, working with chunks of data called tiles. From there, the compiler and runtime determine "the best way to launch that work onto individual threads," including by using hardware such as Tensor Cores.
By removing the need to define each thread's "path of execution," CUDA Tile reduces the effort required to write code that performs well across various GPU architectures.
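The tile concept is easiest to see in plain code. Here's a minimal Python sketch of tile-level thinking -- emphatically not the actual CUDA Tile API -- where the programmer expresses work over whole tiles and leaves how each tile executes to a lower layer, which is the mapping CUDA Tile's compiler and runtime take over on the GPU:

```python
# Conceptual sketch of tile-level programming (NOT the CUDA Tile API): express
# a matmul as operations on TILE x TILE blocks rather than individual elements.
import numpy as np

TILE = 32

def tiled_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    n = A.shape[0]   # assumes square matrices with n divisible by TILE
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(0, n, TILE):
        for j in range(0, n, TILE):
            for k in range(0, n, TILE):
                # The unit of work is a whole tile; on the GPU, CUDA Tile's
                # compiler/runtime decides how a tile maps onto threads and
                # Tensor Cores, instead of the programmer indexing per-thread.
                C[i:i+TILE, j:j+TILE] += A[i:i+TILE, k:k+TILE] @ B[k:k+TILE, j:j+TILE]
    return C

A = np.random.rand(128, 128).astype(np.float32)
B = np.random.rand(128, 128).astype(np.float32)
assert np.allclose(tiled_matmul(A, B), A @ B, atol=1e-3)   # matches direct matmul
```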
Samsung wins a presidential award in South Korea for its 24Gb 40Gbps GDDR7 DRAM
Samsung Electronics' GDDR7 memory dies have won presidential recognition this week, as Samsung's continued technological competitiveness gains industry respect after its huge turnaround earlier this year.
In a new report from the Korea Times, we're hearing that at the recent 2025 Korea Tech Festival in Seoul, hosted by the Ministry of Trade, Industry and Energy, the South Korean government awarded a presidential honor to Samsung's world-first 12nm-class, 40Gbps, 24Gb GDDR7 memory.
This isn't the first time for Samsung, either: it's the 12th time the memory giant has received the presidential commendation, the highest number awarded to a single company. It previously received presidential recognition for its 14nm-class DDR5 memory in 2022, and for its 64-layer 3D V-NAND flash back in 2017.