Artificial Intelligence
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
SK hynix shows off 16Gb LPDDR6 at 14.4Gbps, while Samsung sends LPDDR6X samples to Qualcomm
SK hynix will be showcasing its latest 16Gbit (2GB) LPDDR6 SDRAM design at International Solid-State Circuits Conference (ISSCC) 2026 next week.
The new 16Gbit LPDDR6 memory targets 14.4Gbps per I/O pin, the fastest transfer speed in the LPDDR6 standard. SK hynix says the design is built on its new 1c DRAM process, its latest 10nm-class node, and the ISSCC 2026 preview notes that the paper will focus on the power-saving and signal-handling changes needed to hit 14.4Gbps operation on the new LPDDR6 modules.
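To put the per-pin figure in perspective, a quick back-of-envelope calculation translates pin speed into channel bandwidth. This is a sketch assuming the JEDEC LPDDR6 channel width of 24 bits (two 12-bit sub-channels); actual system bandwidth depends on how many channels a given package exposes.

```python
# Rough LPDDR6 channel bandwidth from per-pin speed.
# Assumes the JEDEC LPDDR6 24-bit channel (two 12-bit sub-channels).
pin_speed_gbps = 14.4          # Gbps per I/O pin, per SK hynix's target
channel_width_bits = 24        # assumed JEDEC channel width

# bits/s across the channel, divided by 8 to get bytes
per_channel_gbs = pin_speed_gbps * channel_width_bits / 8
print(f"{per_channel_gbs} GB/s per 24-bit channel")  # 43.2 GB/s
```

A phone or laptop package ganging several such channels multiplies that figure accordingly.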
Samsung has also reportedly started providing early LPDDR6X memory samples to Qualcomm, according to The Bell. The LPDDR6X standard hasn't been detailed yet, but commercialization is expected in the second half of 2027.
Samsung officially ships HBM4 ready for NVIDIA's next-gen Rubin AI chips
Samsung has been fighting hard in its semiconductor and HBM memory business over the last few years, and now it has officially started commercially deploying its next-gen HBM4 memory, ready for NVIDIA's new Rubin AI chips.
The company explained in a press release that its new HBM4 memory runs at transfer speeds of 11.7Gbps, and can reach 13Gbps when pushed to the higher speeds NVIDIA requires. The leading-edge DRAM pairs a 4nm logic die, fabbed in-house at Samsung Foundry, with Samsung's 1c DRAM process for maximum performance.
Sang Joon Hwang, Executive Vice President and Head of Memory Development at Samsung Electronics, said: "Instead of taking the conventional path of utilizing existing proven designs, Samsung took the leap and adopted the most advanced nodes like the 1c DRAM and 4nm logic process for HBM4. By leveraging our process competitiveness and design optimization, we are able to secure substantial performance headroom, enabling us to satisfy our customers' escalating demands for higher performance, when they need them".
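Those pin speeds translate directly into per-stack bandwidth. The sketch below assumes HBM4's JEDEC-standard 2048-bit interface per stack; the resulting figures show why NVIDIA is pushing for speeds beyond the baseline spec.

```python
# Per-stack bandwidth implied by Samsung's quoted HBM4 pin speeds,
# assuming the JEDEC-standard 2048-bit interface per stack.
interface_bits = 2048

def stack_bandwidth_tbs(pin_gbps: float) -> float:
    """Convert Gbps per pin into TB/s per stack (decimal TB)."""
    return pin_gbps * interface_bits / 8 / 1000

for pin in (11.7, 13.0):
    print(f"{pin} Gbps/pin -> {stack_bandwidth_tbs(pin):.2f} TB/s per stack")
# 11.7 Gbps/pin -> 3.00 TB/s per stack
# 13.0 Gbps/pin -> 3.33 TB/s per stack
```

Multiply by the number of stacks on a Rubin-class package and the appeal of the extra 1.3Gbps per pin becomes obvious.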
NVIDIA rumor: required HBM4 speeds for Rubin AI chips lowered, or it won't get enough supply
NVIDIA only recently asked for 9Gbps HBM4 before pushing for 10-11Gbps, but rumor now has it that NVIDIA has quietly lowered the required HBM4 speed specs for Rubin, because otherwise it likely couldn't secure the full supply volume it needs.
Analyst @Jukan posted on X: "Anyway, just wait and see. There's a rumor going around that NVIDIA lowered the required speed specs for HBM4 because even with Samsung alone, they probably can't meet the full volume NVIDIA needs. Who knows? Maybe Micron could sneak in and supply some HBM4 through this gap. (But personally, I still think SK Hynix is the one who's gonna supply it lol)".
If the rumors are true, it would make sense: according to other recent reports, US-based Micron won't be providing any of its HBM4 to NVIDIA for Rubin, with SK hynix providing 70% of the company's HBM4 needs and Samsung Electronics supplying the other 30%.
Intel shows off next-gen 'ZAM' memory prototype: new Z-angle architecture, next-gen performance
Intel and SoftBank recently announced their next-generation ZAM memory technology, and now a ZAM memory prototype has been shown off at the Intel Connection Japan 2026 event.
The ZAM discussion at the event focused on how the new Z-angle architecture would help mitigate performance issues and improve thermals using existing cooling technology. Intel Fellow and CTO of Intel Government Technologies Joshua Fryman was there alongside Intel Japan CEO Makoto Ono.
Until now, ZAM has been limited to research papers and press releases, but with the new partnership between Intel and SAImemory (a SoftBank subsidiary), the team is pushing ahead with prototypes. The biggest difference compared to HBM and other memory technologies is that ZAM routes a massive amount of interconnect topology diagonally throughout the die stack, instead of drilling straight down vertically. Intel says ZAM's biggest benefit is its thermal capability.
ChatGPT now has ads and OpenAI says they 'do not influence' answers
OpenAI has announced that it is currently testing ads in ChatGPT in the U.S., for users on the Free and Go subscription tiers. The AI company is quick to note that the addition of ads won't "influence the answers ChatGPT gives you" and that your conversations with the AI platform will remain private and won't be used for marketing.
The good news for those on paid Plus, Pro, Business, Enterprise, or Education accounts is that you won't see ads, and ChatGPT will remain unchanged. OpenAI has shown what the ads will look like and how they will be clearly marked as sponsored. Ads will relate to the subject or topic of the conversation, with a food-and-recipe example interaction delivering the sort of ad you might see elsewhere online.
OpenAI notes that it's adding ads to ChatGPT to support "broader access" to its features, presumably to cover the costs of hundreds of millions of people interacting with ChatGPT every day.
Continue reading: ChatGPT now has ads and OpenAI says they 'do not influence' answers (full post)
Razer CEO dislikes 'GenAI Slop' but believes 'AI is a tool to help game developers'
In a new post tagged 'the future of gaming is AI,' Razer, best known for its gaming peripherals, explains why it's investing over $600 million in AI. That investment is primarily about AI tools and technologies for game development, with Razer believing that how generative AI is used is more important than whether AI is used at all.
"The way we see it is that AI is a tool to help game developers make better games, rather than replace human creativity," Razer CEO and Co-founder Min-Liang Tan said during a recent episode of The Verge's Decoder podcast. "As gamers... what we're unhappy with is GenAI slop. When I play a game, I want to be engaged. I want to be immersed. I want to compete. I don't want to see characters with extra fingers or shoddily written storylines."
That comment is in response to the influx of AI-generated images and videos, widely referred to as "AI slop," which are considered inferior to human-created art. For Razer, generative AI in games is more of an extension of NPC behavior, procedural systems, and AI used to "strengthen the craft of making games."
AI music generation comes to AMD Ryzen AI processors and Radeon GPUs
When it comes to images, video, voice, and music, AI generation has reached a point where a wide range of models can produce impressive results. That said, on the creative side of generative AI, most users still connect to cloud-based services. ACE Step 1.5 is an open-source foundation model for generating music, and now it's been optimized to run locally on AMD Ryzen AI processors and AMD Radeon graphics cards.
AMD notes that you can generate full-length music tracks, iterate, and keep all assets on-device from the initial prompt to the final piece of generated music.
"Without per-track fees or upload limits, creators can experiment freely, enabling them to sketch ideas, test arrangements, and explore new sounds without friction," AMD writes. "On-device generation enables immediate iteration, making it easier to refine or discard ideas in real time without relying on an internet connection."
Continue reading: AI music generation comes to AMD Ryzen AI processors and Radeon GPUs (full post)
AMD CEO Lisa Su says AI is 'accelerating at a pace that I would not have imagined'
AMD recently reported its Q4 2025 earnings, with record revenue of $10.3 billion and a gross margin of 54%. This was indicative of AMD's banner year for investors, driven by the AI boom and demand for AMD's EPYC processors and Instinct graphics cards.
As part of its fourth-quarter and 2025 financial results, the company also forecast revenue of $9.8 billion for the first quarter of 2026, plus or minus $300 million. Although this is higher than the $9.38 billion estimate from Wall Street analysts, AMD's share price dropped 13% after its latest financial report.
The reason for the drop, according to reports, is that AMD's forecast felt too conservative for a company in the middle of the AI gold rush. Even though this follows last year's announcement of key AI partnerships with OpenAI and Oracle, and AMD's planned rollout of its server-based Helios AI systems later this year, there's also a growing sense of caution about the sustainability of AI infrastructure spending.
Gaming stocks tumble after Google shows Project Genie's real-time AI-generated 3D worlds
Project Genie, from Google, is a new experimental research prototype available to Google AI Ultra subscribers in the U.S. Although it's only a couple of days old, Project Genie has been making waves across social media because it leverages AI to create fully interactive worlds and environments with realistic physics that you can freely explore.
Project Genie combines Google's general-purpose world model, Genie 3, with Nano Banana Pro and Gemini, allowing users to sketch and shape worlds before jumping in to explore them. And with that, many consider it a significant milestone for AI and a step toward generating video games that you can play in real time from a simple text prompt.
At a glance, it's groundbreaking and feels like a glimpse of a future of AI-generated games, but the resolution is limited to 720p, the frame rate is 24 FPS, and the input latency is anything but responsive or smooth. Plus, you've only got 60 seconds. That said, after Google announced Project Genie on Jan 29, 2026, its impact reached the stock market, causing several notable game-related stocks for companies creating engines and handcrafted open worlds to crash.
TSMC needs to double production over the next 10 years to keep up with NVIDIA demand
TSMC is pumping out as much silicon as it can for virtually every big tech company, and NVIDIA CEO Jensen Huang, currently in Taiwan on business, told local media that TSMC needs to double production over the next 10 years for the "world's largest infrastructure" buildout.
Jensen told local Taiwanese media: "TSMC's production capacity may grow by more than 100% in the next ten years, which is a very significant scale expansion, the largest infrastructure investment in human history, and it will have to double just to meet NVIDIA's demand".
TSMC has been expanding its semiconductor fabs for a while now, as it has factored in geopolitical issues, which has seen TSMC pump serious money into regions including the EU, Japan, and the US. TSMC also plans to build out a supply chain in the US with a massive $250 billion mega-buildout which includes advanced packaging, semiconductors, and R&D centers.
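"Doubling over ten years" sounds dramatic, but as compound growth it's a surprisingly modest annual rate, which a one-liner makes clear:

```python
# Doubling capacity over 10 years implies solving (1 + r)**10 = 2
# for the compound annual growth rate r.
years = 10
cagr = 2 ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # 7.2% per year
```

In other words, NVIDIA's demand alone would require TSMC to sustain roughly 7.2% capacity growth every year for a decade.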
SK hynix makes 'significant' progress in NVIDIA's extensive HBM4 tests, close to mass supply
SK hynix has reportedly made "significant progress" in NVIDIA's extensive HBM4 qualification tests, memory that will end up inside NVIDIA's upcoming Rubin AI GPUs.
In a new report from South Korean media outlet Hankyung, picked up by analyst @Jukan on X, industry sources say that on January 30 SK hynix achieved "meaningful results" in NVIDIA's HBM4 System-in-Package (SiP) testing. SK hynix started the Customer Sample (CS) certification process with NVIDIA in October 2025, during which defects were found in some circuits.
SK hynix modified the circuits and adjusted the process, delivering improved HBM4 memory chips to NVIDIA earlier this month. It's been confirmed that these optimized products are very close to being ready for mass production: the new HBM4 chips run at 10Gbps in general environments and hit 9-10Gbps under NVIDIA's rigorous test conditions for temperature, humidity, and impact.
Elon Musk says xAI will generate high-quality video games 'at scale' in 2027
Last year, Elon Musk took to social media to announce (or simply predict) that the xAI game studio would release a "great AI-generated game" before the end of 2026. Although the statement itself is vague, and there are already games like the popular Arc Raiders that feature AI-generated content, the assumption is that this would be a fully playable experience in which gameplay, art, level design, and so forth are all generated by AI.
It's a bold statement (or prediction), because we have yet to see even a small vertical slice of an impressive AI-generated game, but it's one Elon Musk is doubling down on. Responding to a post highlighting how xAI's Grok Imagine AI-video generation has grown in popularity and quality, Elon Musk is now predicting big things for AI-generated content in 2027. Specifically, stuff coming from xAI.
"Real-time, high-quality shows and video games at scale, customized to the individual, next year," the post reads. Adding that high-resolution AI-generated videos are coming this year, but they're too expensive to be mass-market. That second part does sound very likely; however, "high-quality shows" and "video games at scale" still feel out of reach.
Mark Zuckerberg says AI-generated content is the future of social media
In a recent earnings call with investors, Meta CEO Mark Zuckerberg spent a few moments discussing how AI will become the next big thing in social media. That is, AI-generated personalized content that includes text, photos, and video.
"We started with text, and then moved to photos when we got phones with cameras, and then moved to video when mobile networks got fast enough," Mark Zuckerberg said. "Soon, we'll see an explosion of new media formats that are more immersive and interactive, and only possible because of advances in AI."
And when it comes to social media platforms like Facebook, Instagram, and Meta's various apps, AI will apparently "understand" users in a way current algorithms don't, and this AI will apparently "generate great personalized content" just for you. Not only that, but users will be able to create worlds, games, and interactive content they can share with friends and family, and remix or "jump into" this content to experience it more meaningfully.
SK hynix will establish US arm specializing in AI solutions, tentatively named 'AI Company'
SK hynix has just announced plans to establish a US-based "AI solutions" firm, tentatively called "The AI Company," that will pursue opportunities with data center clients in the US.
The new arm will continue making strategic investments in and working with AI companies to strengthen SK hynix's competitiveness in memory chips, while providing a range of AI data center solutions. SK hynix hasn't provided many concrete details about its US-based "AI Company," but the South Korean memory giant has $10 billion ready to fund its operations.
SK hynix said in its press release: "Leveraging its unparalleled chip technologies, such as HBM, the memory chipmaker will try to play a pivotal role in delivering optimized AI systems for its customers in the AI data center sector. The company will also continue making strategic investments in and collaborating with AI firms to strengthen its competitiveness in memory chips and provide a range of AI data center solutions".
'AI should be a choice. Here's where you stand': 90% of DuckDuckGo users vote no to AI in poll
If you thought AI was hated by Windows 11 users, that's nothing compared to the denizens of DuckDuckGo based on a new poll by the maker of the search engine.
VideoCardz spotted the results of the poll, in which a healthy 175,000 votes were cast: 90% of respondents said no to using AI with the search engine, with the remaining 10% happy to have AI in their results.
That's a resounding thumbs-down for AI, but we should, of course, take the source into account here.
Microsoft's Maia 200 AI accelerator has 216GB of memory, outperforms Amazon and Google chips
Microsoft has unveiled its latest AI accelerator for inference and token generation, the Maia 200. Built on TSMC's 3nm process with native FP8 and FP4 tensor cores, and an overhauled memory system featuring 216GB of HBM3e at 7 TB/s and 272MB of on-chip SRAM, Microsoft described the Maia 200 as the "most performant, first-party silicon from any hyperscaler."
And with that, Microsoft claims three times the FP4 performance of Amazon's third-generation Trainium accelerator, with faster FP8 performance than Google's seventh-gen TPU. Microsoft adds that the Maia 200 is its most efficient AI inference accelerator to date, boasting 30% better performance-per-dollar than the "latest generation hardware" it currently deploys.
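Microsoft's headline figures can be sanity-checked against each other. The sketch below assumes a hypothetical six-stack HBM3e layout (6 x 36GB = 216GB) and the standard 1024-bit interface per HBM3e stack; Microsoft hasn't disclosed the actual stack count, so this is only an illustration of how the numbers hang together.

```python
# Consistency check on Maia 200's published memory figures, assuming
# a hypothetical 6-stack HBM3e layout and a 1024-bit bus per stack.
total_gb = 216        # published capacity
total_gbs = 7000      # published bandwidth: 7 TB/s
stacks = 6            # assumed: 6 x 36 GB stacks
bus_bits = 1024       # standard HBM3e interface width per stack

per_stack_gbs = total_gbs / stacks        # bandwidth each stack must supply
pin_gbps = per_stack_gbs * 8 / bus_bits   # implied per-pin data rate
print(f"{per_stack_gbs:.0f} GB/s per stack, {pin_gbps:.2f} Gbps per pin")
```

The implied ~9.1Gbps per pin sits comfortably within the range of shipping HBM3e (up to roughly 9.8Gbps), so the published capacity and bandwidth are at least mutually plausible under this layout.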
Set to become part of the company's AI infrastructure, Maia 200 will power the latest GPT-5.2 models from OpenAI, Microsoft Foundry, and Microsoft 365 Copilot. The company will also leverage Maia 200 to train next-gen in-house models using synthetic data.
Samsung should be first with HBM4 powering NVIDIA's new Vera Rubin AI chips after passing all tests
Samsung will debut its next-gen HBM4 memory at NVIDIA GTC 2026 in March, reportedly passing all of NVIDIA's strict verification stages, and will arrive on NVIDIA's next-gen Vera Rubin AI platform.
Samsung has spent the last couple of years struggling with its HBM memory division, leaving its South Korean rival -- SK hynix -- to enjoy providing NVIDIA with all of its HBM3 and HBM3E needs. Samsung completely overhauled its HBM and semiconductor division in the last few years, with the fruits of that labor now showing.
NVIDIA will reportedly source its first allotments of HBM4 memory for Vera Rubin from Samsung, as Samsung's new HBM4 is said to be the best of the offerings from it and its rivals, SK hynix and US-based Micron. Samsung's new HBM4 is rated above 11Gbps, well beyond the JEDEC standard for HBM4, with NVIDIA directly requesting those higher pin speeds.
Apple fast-tracks release of wearable device specifically for AI
Big tech companies are throwing billions of dollars at developing sophisticated AI models, and in order for them to turn a profit, those AI models need to be used by as many people as possible. So, what better way to get AIs into more hands than have a new AI-dedicated device available?
Well, the Humane AI Pin attempted this, along with the Rabbit R1 and the Friend. Each of these products was designed as a wearable AI-powered device that users interact with through voice commands and gestures. However, none of them were successful, despite at least one of the companies claiming its device would replace the smartphone.
However, perhaps none of those attempts had the magic that Apple brings to making products, or at least that's what The Information is reporting. The publication states that Apple is working on a new product approximately the size of an AirTag, but a little thicker, that will be worn as a pin and be dedicated to accessing AI models.
Continue reading: Apple fast tracks release of wearable device specifically for AI (full post)
YouTube will soon let you create Shorts with your AI likeness
As part of YouTube CEO Neal Mohan's blog post on what creators can expect from the platform in 2026, the Google executive says, "AI will be a boon to the creatives who are ready to lean in." And by that, he means that creators will soon be able to create a YouTube Short with their own AI likeness later in the year.
YouTube Shorts is one of the platform's biggest formats, with the mobile-friendly, TikTok-like section drawing around 200 billion views every day. Although YouTube hasn't explained or provided an example of what these AI-generated Shorts will look like, Neal Mohan is adamant that AI "will remain a tool for expression" and "not a replacement" for creativity.
On the plus side, there will be transparency and protections in place, with AI-generated content clearly labeled as such, and creators able to manage and protect the use of their AI likeness. So, once this new feature goes live, we shouldn't see a bunch of fake Shorts from random users featuring AI versions of popular YouTubers.
Continue reading: YouTube will soon let you create Shorts with your AI likeness (full post)
Anthropic's CEO says NVIDIA is essentially selling nukes to North Korea and bragging about it
The CEO of Anthropic has commented on NVIDIA being able to supply China with sophisticated AI chips to power the nation's expansive development in AI, describing the US approving trade between NVIDIA and China as "like selling nuclear weapons to North Korea".
Dario Amodei spoke at the World Economic Forum in Davos earlier this week and was asked what he thinks about the US approving the export of high-powered AI chips to China, specifically from NVIDIA. Amodei said, "I think this is crazy. It's a bit like selling nuclear weapons to North Korea and bragging that Boeing made the casings."
The response to the question is undoubtedly jarring, especially considering NVIDIA has invested as much as $10 billion into Anthropic, and Anthropic is using NVIDIA hardware to train its own AI models, such as its ChatGPT rival, Claude.