Artificial Intelligence
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Lawsuit alleges NVIDIA approved use of pirated books to train AI models
A complaint filed in the US District Court claims NVIDIA executives approved contact with Anna's Archive, a website that harbors millions of copyrighted books and academic papers, to discuss a partnership that involves using Anna's Archive as a dataset for training its Large Language Models (LLMs).
The complaint alleges that "competitive pressures drove NVIDIA to piracy," and that internal NVIDIA emails demonstrate a member of the company's data strategy team contacting Anna's Archive about the collaboration. Furthermore, the complaint states that Anna's Archive warned NVIDIA that its treasure trove of data was obtained illegally, and asked how Team Green wanted to proceed.
The lawsuit states that within a week, NVIDIA approved the collaboration, and in response, Anna's Archive offered NVIDIA approximately 500 terabytes of data. "Desperate for books, NVIDIA contacted Anna's Archive -- the largest and most brazen of the remaining shadow libraries -- about acquiring its millions of pirated materials and 'including Anna's Archive in pre-training data for our LLMs,'" the complaint notes.
OpenAI to unveil first device in second half of this year
OpenAI CEO Sam Altman and former Apple chief design officer Jony Ive announced in May last year that they were teaming up to release OpenAI's first hardware product.
At the time of the announcement, Altman didn't provide a description of the product. Still, given OpenAI's dominance with ChatGPT, many have presumed it to be some kind of smaller, possibly wearable device that enables users to communicate directly with the online chatbot. Multiple reports have stated OpenAI is developing prototypes of small devices with no screen that can interact with users, and Altman did say the secret device will be more "peaceful" than a smartphone.
Now, OpenAI's policy chief, Chris Lehane, said on Monday that OpenAI is on track to unveil this mysterious device in the second half of 2026. Lehane said "devices" are one of OpenAI's big projects in 2026, and that he will have more to share about the topic "much later in the year." As for when it will become available to the public, Lehane didn't give an exact date, though he did say a release in 2026 was "most likely," adding, "we will see how things advance."
ASUS chairman confirms company going 'all in AI', no more Zenfone, ROG smartphones to be made
ASUS chairman Jonney Shih personally confirms that "ASUS will no longer add new mobile phone models in the future" with its vast R&D efforts fully shifted into physical AI, as the company is "all in AI" now.
Just a few days ago, on January 16, ASUS held its "2025 Year-End Gala" at the Taipei Nangang Exhibition Center -- where Computex takes place -- awarding its staff 8 new cars and a bunch of other prizes, and also taking the time to announce its future strategy.
In a pre-event interview, ASUS chairman Jonney Shih confirmed the company will temporarily cease launching new smartphones, and will fully shift its R&D prowess to commercial PC systems and "physical AI" as it pushes hard into the Fourth Industrial Revolution.
OpenAI and Sam Altman confirm ads are coming to ChatGPT
OpenAI has announced in a new X post that it's beginning to test embedded advertisements within ChatGPT conversations, specifically within the free and Go tiers of ChatGPT.
The AI company has outlined its advertising principles in a new image. The first is "Answer Independence," meaning ads do not influence the answers that ChatGPT provides users, ads are not optimized toward users, and ads are "always separate and clearly labelled". The second is "Conversation Privacy," which OpenAI explains means keeping user conversations with ChatGPT private from advertisers, adding, "We will never sell your data to advertisers."
The third is "Choice and Control," which states that users will always be able to turn off ad personalization and to clear the data used for ads. These options will be available at any time, and there will always be a way to turn off ads completely, presumably via a paid subscription tier.
Rupert Murdoch's News Corp signs major deal with AI journalism startup
Rupert Murdoch's News Corp has signed a deal with Symbolic.ai, a new AI journalism startup that will be joining News Corp's expansive media conglomerate.
Symbolic.ai was founded by former eBay CEO Devin Wenig and Ars Technica co-founder Jon Stokes. According to the creators, Symbolic.ai is capable of assisting journalists in the production of quality journalism and content, and its implementation has led to "productivity gains of as much as 90% for complex research tasks."
The creators say the platform is intended to make workflows more efficient, covering tasks such as newsletter creation, fact-checking, headline optimization, SEO advice, and audio transcription. News Corp hasn't shied away from the emergence of AI; in 2024, the company announced a deal with ChatGPT creator OpenAI to license News Corp content.
NVIDIA's Jensen Huang wants the AI doom and gloom to stop as it's 'extremely hurtful'
Before NVIDIA's CEO Jensen Huang unveiled the Vera Rubin AI computing platform at CES 2026, Huang sat down for an interview where he discussed the doom-speak surrounding AI and its potential impact on humanity.
In an interview with No Priors, Huang discussed many topics stemming from AI, such as the biggest surprises of 2025, how AI will influence jobs, solving labor shortages with robotics, and the AI "doomer" narrative, along with regulation. Since ChatGPT's explosion in popularity and the billions of dollars that have been thrown into the development of new and more sophisticated AI models, some researchers and industry experts have warned about the potential impact on humanity when all-encompassing AI models emerge.
Some experts have issued warnings about how AI has the potential to destroy people's lives, with others raising privacy issues amid an increasingly encroaching surveillance state. But, according to Huang, these concerns are unwarranted, and have actually done irreversible harm to society's acceptance of AI. "[It's] extremely hurtful, frankly, and I think we've done a lot of damage with very well-respected people who have painted a doomer narrative," said Huang.
Oh no: NVIDIA's next-gen Vera Rubin AI systems to eat up MILLIONS of terabytes of SSDs
The only word I heard more often than "AI" at CES 2026 was "DRAM," thanks to the ongoing memory crisis -- and now it could get worse, with reports that NVIDIA's next-gen Vera Rubin AI systems will eat up MILLIONS of terabytes of SSD capacity in the years to come.
And that's just Vera Rubin, let alone Rubin Ultra, let alone NVIDIA's next-gen Feynman GPU architecture after that... but in a new X post, @Jukan shared a Citi analysis of the subject.
Citi explained: "We estimate that approximately 1,152TB of additional SSD NAND will be required per Vera Rubin server system to support NVIDIA's ICMS operations. Accordingly, assuming Vera Rubin server shipments of 30,000 units in 2026 and 100,000 units in 2027, NAND demand driven by ICMS is projected to reach 34.6 million TB in 2026 and 115.2 million TB in 2027".
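Citi's headline figures are just the per-system NAND estimate multiplied by the shipment forecasts; a quick sanity check (using only the numbers quoted in the note above) confirms the math:

```python
# Citi's per-system NAND estimate for a Vera Rubin server system
nand_per_system_tb = 1_152  # TB of SSD NAND per system

# Citi's assumed Vera Rubin server shipments per year
shipments = {2026: 30_000, 2027: 100_000}

for year, units in shipments.items():
    total_tb = nand_per_system_tb * units
    print(f"{year}: {total_tb / 1_000_000:.1f} million TB of NAND")
# 2026: 34.6 million TB  (1,152 TB x 30,000 units)
# 2027: 115.2 million TB (1,152 TB x 100,000 units)
```

Both results match the projections Citi quotes, so the forecast is driven entirely by those two shipment assumptions.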
HP forced to turn to Chinese memory makers over DRAM supply shortage
AI companies are gobbling up memory across the world, resulting in skyrocketing prices on memory modules for consumers. As memory supply constraints continue to grow, OEMs are being forced to turn to alternative suppliers, with Barron's analyst Tae Kim reporting that HP is struggling to obtain supply and is now looking to add Chinese memory suppliers to its list of component suppliers.
HP is reportedly looking to use that supply to ship a "limited" range of products into Asia and Europe. Kim also wrote that as supply dries up from memory makers such as Micron and Samsung, OEMs like HP will begin to turn to Chinese manufacturers such as CXMT. CXMT's DRAM wafer output is estimated to reach up to 300,000 wafers per month in 2026 -- a low figure compared to some of the other players in the market, but the company is known for its DDR5 module supply, and because it hasn't adopted HBM, its capacity isn't tied up serving AI accelerators.
CXMT is also looking to raise $4.2 billion USD to expand production. One hurdle HP will need to overcome if it goes with a Chinese supplier such as CXMT is US regulations on sourcing semiconductors from China. Given the current state of memory supply and the insatiable demand for more of it, it's likely new regulations will be put in place around sourcing supply from Chinese memory makers.
Razer has created an animated holographic AI companion that sits on your desk
Razer's Project AVA has evolved since its debut at CES 2025, when it was presented as an AI companion in the form of an esports coach. At CES 2026, Project AVA has evolved into a flexible on-desk AI companion that pairs with a PC or laptop.
Digital AI avatars are already a very real thing, but the big twist with Project AVA is that it takes the avatar off your display and puts it into a small transparent cylinder that sits on your desk. With multiple character avatars to choose from, Project AVA comes to life (so to speak) as a 5-inch animated character with eye tracking, facial expressions, lip-syncing, and full-body animation.
The Project AVA unit includes a full HD camera so it can see you and respond to its surroundings. At CES 2026, we got to see a demo of Project AVA that responded to what someone was wearing and provided real-time weather information when asked. However, what makes it interesting and impressive is the addition of PC Vision Mode.
Razer's Project Motoko turns AI smart glasses into a wireless headset
Announced at CES 2026, Razer's Project Motoko offers an interesting take on the AI smart glasses phenomenon, as it's a wireless gaming headset with built-in cameras. Powered by an undisclosed Snapdragon processor, the dual first-person cameras allow for real-time object and text recognition, which then feeds into the on-board AI.
During our demonstration of Project Motoko, we had the AI translate a restaurant menu from Japanese to English and scan a table with a handful of ingredients to provide a quick, easy recipe for a meal. As a headset with a microphone, this is handled via speech and natural language, with AI responses fed directly through the headset.
The functionality is similar to Meta's smart glasses; however, as Project Motoko is a headset, the AI responses are kept private, so those nearby won't be able to listen in. And to capture audio from multiple sources, there are dual far- and near-field microphones that pick up voice commands and other nearby sounds, including dialogue.
AMD shows off next-gen Zen 6-based EPYC 'Venice' CPU, Instinct MI455X GPU for Helios AI racks
AMD has just shown off its next-gen world-first 2nm EPYC "Venice" CPU with Zen 6 cores, and its Instinct MI455X AI accelerator, ready for its next-gen Helios AI racks.
The company unveiled its new Helios AI rack at its recent Financial Analyst Day 2025, promising class-leading performance and efficiency for AI workloads of the future. The new AMD Helios AI rack features a fully liquid-cooled design with 4 x Instinct MI455X AI GPUs paired with a single Zen 6-based EPYC "Venice" CPU.
Helios AI racks use AMD's new Pensando "Salina" 400 DPU and Pensando "Vulcano" 800 AI NIC for networking and interconnection. AMD's next-gen EPYC "Venice" CPUs come with up to 256 cores based on the Zen 6c architecture, and each Instinct MI455X AI GPU packs a ton of GPU cores and next-gen HBM4 memory.
AMD confirms next-gen Instinct MI500 AI accelerator uses CDNA 6, TSMC 2nm, HBM4E
AMD confirmed at CES 2026 that its next-generation Instinct MI500 AI accelerator will be fabbed on TSMC's new 2nm process node, and be powered by the next-gen CDNA 6 architecture and next-gen HBM4E memory.
We will see AMD launch its next-gen Instinct MI500 series AI accelerators in 2027, as the company moves to a faster annual release cadence to catch up with NVIDIA, similar to how NVIDIA alternates its standard and "Ultra" offerings. For example, NVIDIA has Blackwell and then Blackwell Ultra, Rubin and then Rubin Ultra.
AMD provided some more concrete details about the MI500 at CES 2026 this week, confirming that it will be fabricated on an advanced 2nm process node at TSMC, use the new CDNA 6 architecture (MI400 uses CDNA 5), and feature next-gen HBM4E memory (the next standard after HBM4).
SK hynix showcases next-gen 48GB HBM4 at 11.7Gbps, SOCAMM2, LPDDR6 for AI platforms
SK hynix showcased its next-gen memory solutions for AI at CES 2026, showing off its new 48GB HBM4, LPDDR6, SOCAMM2, and more for AI platforms of the future.
SK hynix showed off its next-gen 16-Hi HBM4 with 48GB of capacity, a newer version of HBM4 that will succeed the upcoming 36GB 12-Hi HBM4 arriving this year. The 16-Hi 48GB HBM4 modules are bloody fast, with 2TB/sec of memory bandwidth per stack, and are destined for NVIDIA's next-gen Vera Rubin AI platform.
The company had its new 16-Hi 48GB HBM4 running at an industry-fastest 11.7Gbps; the memory is still under development at SK hynix and will be released in the nearish future.
Upscale AI-generated videos to 4K from 720p with NVIDIA's RTX Video
RTX Video Super Resolution is like DLSS for watching videos on YouTube or other streaming platforms: it takes a lower-resolution video, like 720p, and leverages AI to upscale it to 4K, delivering a sharper, more detailed image. Like DLSS, RTX Video leverages the Tensor Cores on GeForce RTX graphics cards for real-time upscaling.
At CES 2026, as part of a wide range of updates for RTX AI on GeForce RTX GPUs, NVIDIA announced that RTX Video will be coming to the popular, open-source AI platform ComfyUI in February. This means users with GeForce RTX GPUs will be able to take 720p AI-generated videos and upscale them "to 4K in seconds."
With the sheer computational power required to generate 4K AI video and images, most AI enthusiasts with a standard PC or laptop built for RTX AI create this content at lower resolutions, such as 720p.
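For context on why generating at 720p and upscaling is so much cheaper, the jump from 720p to 4K UHD is a 3x increase on each axis, which works out to 9x the total pixel count:

```python
# Standard resolutions: 720p vs 4K UHD
w720, h720 = 1280, 720
w4k, h4k = 3840, 2160

scale = w4k / w720                         # 3.0x on each axis (height matches)
pixel_ratio = (w4k * h4k) / (w720 * h720)  # 9.0x the total pixel count

print(scale, pixel_ratio)  # 3.0 9.0
```

That 9x pixel gap is roughly the extra work a generative model would need to do to produce 4K natively, which the upscaler sidesteps.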
NVIDIA officially unveils Rubin: its next-gen AI platform with huge upgrades, next-gen HBM4
NVIDIA founder and CEO Jensen Huang proudly took the stage at CES 2026, unveiling the company's next-generation Rubin AI platform.
NVIDIA's new Rubin AI platform is the successor to its dominant Blackwell AI chips, and is the first extreme-codesigned, six-chip AI platform, which Jensen added is now in full production. NVIDIA is aiming to "push AI to the next frontier" with Rubin, not just offering far more computing power, but slicing the cost of generating tokens to around 1/10 of Blackwell's, making large-scale AI "far more economical to deploy".
Extreme codesign means designing all of the components together, which is essential because scaling AI to gigascale requires tighter integration and innovation across chips, trays, racks, networking, storage, and software to remove bottlenecks. This massively reduces the costs of training and inference, Huang added.
Intel's next-gen 'Jaguar Shores' Gaudi AI accelerator rumored to use new HBM4E memory
Intel's next-generation Jaguar Shores data center AI accelerator platform is rumored to be using newer HBM4E memory, which could launch sometime in the second half of 2027.
Back at the Intel AI Summit Seoul in South Korea in July 2025, the company seemed set on HBM4 from SK hynix for Jaguar Shores, with a release in 2026. However, Intel hasn't had a stable or successful run with its Gaudi AI accelerators and their release schedule, and it knows it faces almost insurmountable competition from AMD and, even more so, NVIDIA -- so timelines can change, and specifications, like a move to faster HBM4E, can change too.
The new information regarding Intel's use of HBM4E on its next-gen Jaguar Shores AI platform comes from leaker @Bionic_Squash on X, who replied to @harukaze5719 regarding Jaguar Shores using HBM4 with a simple reply: "Jaguar is HBM4E".
SK hynix, Samsung, and Micron fighting for NVIDIA supply contracts for new 16-Hi HBM4 orders
Samsung, SK hynix, and Micron are all racing to develop new 16-Hi HBM, after NVIDIA requested supply of the new memory chips for the second half of 2026.
16-Hi HBM hasn't been commercialized before, as taller stacks bring many technological hurdles to overcome -- DRAM stacking in particular gets far more complicated as the number of layers grows. In a new report from the Electronic Times, NVIDIA reportedly requested that domestic and foreign memory manufacturers deliver 16-Hi HBM memory chips by Q4 2026.
SK hynix and Samsung Electronics in South Korea, as well as US-based Micron, have all begun full-scale development work toward mass-production supply of 16-Hi HBM memory chips to NVIDIA. The outlet notes that concrete contracts haven't been signed yet, and that discussions over initial production volumes are still happening internally.
NVIDIA and SK hynix to introduce 'AI SSD' with 10x more performance in middle of DRAM crisis
NVIDIA has teamed with SK hynix on a next-gen, ultra-powerful SSD solution for AI inferencing that could offer 10x the performance, right in the middle of the worst DRAM crisis ever.
SK hynix has formalized development of the next-gen SSD with NVIDIA after the South Korean memory giant enjoyed great results supplying HBM to NVIDIA for its AI GPUs, and its customer- and service-tailored product development is now expanding into the NAND flash sector.
In a new report from Korean outlet Chosun, SK hynix Vice President Kim Cheon-seong said at the recent "2025 Artificial Intelligence Semiconductor Future Technology Conference" (AISFC) that SK hynix is developing a new SSD with 10x more performance together with NVIDIA. The new SSD is dubbed "Storage Next" by NVIDIA and "AN-N P" (AI NAND performance) by SK hynix -- a proof of concept in the works with the goal of releasing a prototype before the end of 2026.
KIOXIA's groundbreaking AiSAQ Technology now available in leading open-source vector database
KIOXIA's open-source AiSAQ (All-in-Storage ANNS with Product Quantization) has been a game-changer for running complex AI models by offloading vectorized data from expensive DRAM to SSD storage. With memory limitations and costs playing a significant role in which AI workloads can or cannot run, AiSAQ delivers a low-latency, scalable solution for Retrieval Augmented Generation (RAG) pipelines.
This week, KIOXIA announced that AiSAQ has been integrated into Milvus, one of the world's most widely adopted open-source vector databases. Starting with version 2.6.4, AI developers and enterprises can tap into the power of AiSAQ to scale AI applications with SSD storage. With the growth in RAG demands and the size of vector databases for inference, scaling DRAM is often not an option due to the exponential increase in cost.
KIOXIA's open-source AiSAQ is groundbreaking because it dramatically reduces DRAM requirements for running complex AI workloads, opening the door to large-scale system deployment that's more affordable and easier to scale, thanks to large capacity and fast SSD storage.
This tiny personal AI supercomputer can run 120B AI models while fitting in your hand
US deep-tech AI startup Tiiny AI has just unveiled the world's smallest personal AI supercomputer, the Tiiny AI Pocket Lab, which has been officially verified by Guinness World Records as "The Smallest MiniPC (100B LLM Locally)".
This is the first global unveiling of the new Tiiny AI Pocket Lab, which will fit in your hands -- or your pocket, duh -- and is capable of running up to a full 120-billion-parameter LLM (Large Language Model) entirely on-device, without the need for cloud connectivity, servers, or high-end GPUs.
Tiiny has developed its super-small AI supercomputer for energy-efficient personal intelligence, and the Tiiny AI Pocket Lab runs within a 65W power envelope. The new Tiiny AI Pocket Lab enables massive AI model performance at a fraction of the energy and carbon footprint of traditional GPU-based systems.