Artificial Intelligence
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
AI music generation comes to AMD Ryzen AI processors and Radeon GPUs
When it comes to images, video, voice, and music, AI generation has reached a point where a wide range of models can produce impressive results. That said, on the creative side of generative AI, most users still connect to cloud-based services. ACE Step 1.5 is an open-source foundation model for generating music, and now it's been optimized to run locally on AMD Ryzen AI processors and AMD Radeon graphics cards.
AMD notes that you can generate full-length music tracks like the 'Country Ballad' example above, iterate, and keep all assets on-device from initial prompt to the final piece of generated music.
"Without per-track fees or upload limits, creators can experiment freely, enabling them to sketch ideas, test arrangements, and explore new sounds without friction," AMD writes. "On-device generation enables immediate iteration, making it easier to refine or discard ideas in real time without relying on an internet connection."
Continue reading: AI music generation comes to AMD Ryzen AI processors and Radeon GPUs (full post)
AMD CEO Lisa Su says AI is 'accelerating at a pace that I would not have imagined'
AMD recently reported its Q4 2025 earnings, with record revenue of $10.3 billion and a gross margin of 54%. This was indicative of AMD's banner year for investors, driven by the AI boom and demand for AMD's EPYC processors and Instinct graphics cards.
As part of its fourth-quarter and 2025 financial results, the company also forecast revenue of $9.8 billion for the first quarter of 2026, plus or minus $300 million. Although this is higher than the $9.38 billion estimate from Wall Street analysts, AMD's share price dropped 13% after its latest financial report.
The reason for the drop, according to reports, is that AMD's forecast felt too conservative for a company in the middle of the AI gold rush. Even though this follows last year's announcement of key AI partnerships with OpenAI and Oracle, and AMD's planned rollout of its server-based Helios AI systems later this year, there's also a growing sense of caution about the sustainability of AI infrastructure spending.
Gaming stocks tumble after Google shows Project Genie's real-time AI-generated 3D worlds
Project Genie, from Google, is a new experimental research prototype available to Google AI Ultra subscribers in the U.S. Although it's only a couple of days old, Project Genie has been making waves across social media because it leverages AI to create fully interactive worlds and environments with realistic physics that you can freely explore.
Project Genie combines Google's general-purpose world model, Genie 3, with Nano Banana Pro and Gemini, allowing users to sketch and shape worlds before jumping in to explore them. And with that, many consider it a significant milestone for AI and a step toward generating video games that you can play in real time from a simple text prompt.
At a glance, it's groundbreaking and feels like a glimpse of a future of AI-generated games, but the resolution is limited to 720p, the frame rate is 24 FPS, and the input latency is anything but responsive or smooth. Plus, you've only got 60 seconds per session. Still, after Google announced Project Genie on January 29, 2026, its impact reached the stock market, sending several notable stocks tumbling for game companies that build engines and handcrafted open worlds.
TSMC needs to double production over the next 10 years to keep up with NVIDIA demand
TSMC is pumping out as much silicon as it can for virtually every big tech company, but NVIDIA CEO Jensen Huang, currently in Taiwan handling business, told local media that TSMC needs to double production over the next 10 years for the "world's largest infrastructure" buildout.
Jensen told local Taiwanese media: "TSMC's production capacity may grow by more than 100% in the next ten years, which is a very significant scale expansion, the largest infrastructure investment in human history, and it will have to double just to meet NVIDIA's demand".
TSMC has been expanding its semiconductor fabs for a while now, factoring in geopolitical issues and pumping serious money into regions including the EU, Japan, and the US. TSMC also plans to build out a supply chain in the US with a massive $250 billion mega-buildout, which includes advanced packaging, semiconductors, and R&D centers.
SK hynix makes 'significant' progress in NVIDIA's extensive HBM4 tests, close to mass supply
SK hynix has reportedly made "significant progress" in NVIDIA's extensive HBM4 qualification tests for memory that will end up inside NVIDIA's upcoming Rubin AI GPUs.
In a January 30 report from South Korean media outlet Hankyung, picked up by analyst @Jukan on X, industry sources say SK hynix achieved "meaningful results" in NVIDIA's HBM4 System-in-Package (SiP) testing earlier this month. SK hynix started the Customer Sample (CS) certification process with NVIDIA in October 2025, during which defects were found in some circuits.
SK hynix modified the circuits and adjusted the process, delivering improved HBM4 memory chips to NVIDIA earlier this month. These optimized products are reportedly very close to mass-production readiness: the new HBM4 chips run at 10Gbps under general conditions and hit 9-10Gbps under NVIDIA's rigorous tests for temperature, humidity, and impact.
Elon Musk says xAI will generate high-quality video games 'at scale' in 2027
Last year, Elon Musk took to social media to announce (or simply predict) that the xAI game studio would release a "great AI-generated game" before the end of 2026. Although the statement itself is vague, and there are already games like the popular Arc Raiders that feature AI-generated content, the assumption is that this would be a fully playable experience in which gameplay, art, level design, and so forth are all generated by AI.
It's a bold statement (or prediction) because we have yet to see even a small vertical slice of an AI-generated game that looks impressive, but one that Elon Musk is doubling down on. Responding to a post highlighting how xAI's Grok Imagine AI-video generation has grown in popularity and quality, Elon Musk is now predicting big things for AI-generated content in 2027. Specifically, stuff coming from xAI.
"Real-time, high-quality shows and video games at scale, customized to the individual, next year," the post reads. Musk added that high-resolution AI-generated videos are coming this year, but that they're too expensive to be mass-market. That second part does sound very likely; however, "high-quality shows" and "video games at scale" still feel out of reach.
Mark Zuckerberg says AI-generated content is the future of social media
In a recent earnings call with investors, Meta CEO Mark Zuckerberg spent a few moments discussing how AI will become the next big thing in social media. That is, AI-generated personalized content that includes text, photos, and video.
"We started with text, and then moved to photos when we got phones with cameras, and then moved to video when mobile networks got fast enough," Mark Zuckerberg said. "Soon, we'll see an explosion of new media formats that are more immersive and interactive, and only possible because of advances in AI."
And when it comes to social media platforms like Facebook, Instagram, and Meta's various apps, AI will apparently "understand" users in a way current algorithms don't, and this AI will apparently "generate great personalized content" just for you. Not only that, but users will be able to create worlds, games, and interactive content they can share with friends and family, and remix or "jump into" this content to experience it more meaningfully.
SK hynix will establish US arm that is specialized in AI solutions tentatively named AI Company
SK hynix has just announced plans to establish a US-based "AI solutions" firm, tentatively named "The AI Company," that will look for opportunities with data center clients in the US.
The company will continue making strategic investments in and working with AI firms to strengthen its competitiveness in memory chips, while providing a range of AI data center solutions. SK hynix hasn't provided many concrete details about its US-based venture, but the South Korean memory giant has $10 billion ready to fund the AI Company's operations.
SK hynix said in its press release: "Leveraging its unparalleled chip technologies, such as HBM, the memory chipmaker will try to play a pivotal role in delivering optimized AI systems for its customers in the AI data center sector. The company will also continue making strategic investments in and collaborating with AI firms to strengthen its competitiveness in memory chips and provide a range of AI data center solutions".
'AI should be a choice. Here's where you stand': 90% of DuckDuckGo users vote no to AI in poll
If you thought AI was hated by Windows 11 users, that's nothing compared to the denizens of DuckDuckGo, based on a new poll from the search engine's maker.
VideoCardz spotted the results of the poll, in which a healthy 175,000 votes were cast. 90% of respondents said no to using AI with the search engine, with the remaining 10% happy to have AI in their results.
That's a resounding thumbs-down for AI, but we should, of course, take the source into account here.
Microsoft's Maia 200 AI accelerator has 216GB of memory, outperforms Amazon and Google chips
Microsoft has unveiled its latest AI accelerator for inference and token generation, the Maia 200. Built on TSMC's 3nm process with native FP8 and FP4 tensor cores, and an overhauled memory system featuring 216GB of HBM3e at 7 TB/s and 272MB of on-chip SRAM, Microsoft described the Maia 200 as the "most performant, first-party silicon from any hyperscaler."
And with that, Microsoft claims three times the FP4 performance of Amazon's third-generation Trainium accelerator, with faster FP8 performance than Google's seventh-gen TPU. Microsoft adds that the Maia 200 is its most efficient AI inference accelerator to date, boasting 30% better performance-per-dollar than the "latest generation hardware" it currently deploys.
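Those memory specs also hint at why bandwidth matters so much for an inference chip. As a rough back-of-envelope sketch (our own, not Microsoft's): if a model's weights filled the full 216GB of HBM3e and each generated token required streaming them once, the 7 TB/s of bandwidth would put a hard floor under token latency.

```python
# Back-of-envelope: memory-bandwidth bound on token generation.
# Assumptions (ours, not Microsoft's): weights fill all of HBM,
# and one full weight read is needed per generated token.
HBM_BYTES = 216e9      # 216 GB of HBM3e
BANDWIDTH = 7e12       # 7 TB/s

min_seconds_per_token = HBM_BYTES / BANDWIDTH
max_tokens_per_second = BANDWIDTH / HBM_BYTES

print(f"{min_seconds_per_token * 1000:.1f} ms/token floor")   # ~30.9 ms
print(f"{max_tokens_per_second:.1f} tokens/s ceiling")        # ~32.4 tokens/s
```

In practice, batching, KV caches, and models far smaller than the full memory pool change the picture, but this is why inference accelerators chase bandwidth as aggressively as raw FLOPS.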
Set to become part of the company's AI infrastructure, Maia 200 will power the latest GPT-5.2 models from OpenAI, Microsoft Foundry, and Microsoft 365 Copilot. The company will also leverage Maia 200 to train next-gen in-house models using synthetic data.
Samsung should be first with HBM4 powering NVIDIA's new Vera Rubin AI chips, passed all tests
Samsung will debut its next-gen HBM4 memory at NVIDIA GTC 2026 in March, reportedly passing all of NVIDIA's strict verification stages, and will arrive on NVIDIA's next-gen Vera Rubin AI platform.
Samsung has spent the last couple of years struggling with its HBM memory division, leaving its South Korean rival -- SK hynix -- to enjoy providing NVIDIA with all of its HBM3 and HBM3E needs. Samsung completely overhauled its HBM and semiconductor division in the last few years, with the fruits of that labor now showing.
NVIDIA will reportedly source its first allotments of HBM4 memory for Vera Rubin from Samsung, as Samsung's new HBM4 is said to be the best of the HBM4 offerings compared to rivals SK hynix and US-based Micron. Samsung's new HBM4 memory is rated for above 11Gbps, much higher than the JEDEC standard for HBM4, with NVIDIA directly requesting those higher pin speeds.
Apple fast tracks release of wearable device specifically for AI
Big tech companies are throwing billions of dollars at developing sophisticated AI models, and for them to turn a profit, those AI models need to be used by as many people as possible. So, what better way to get AI into more hands than to have a dedicated AI device available?
Well, the Humane AI Pin attempted this, along with the Rabbit R1 and the Friend. Each of these products was designed as a wearable AI-powered device that users interact with through voice commands and gestures. However, none of them were successful, despite at least one of the companies stating its device was going to be the smartphone replacement.
Perhaps none of those attempts had the magic that Apple brings to making products, or at least that is what The Information is reporting. The publication states that Apple is working on a new product, approximately the size of an AirTag but a little thicker, that will be worn as a pin and be dedicated to accessing AI models.
Continue reading: Apple fast tracks release of wearable device specifically for AI (full post)
YouTube will soon let you create Shorts with your AI likeness
As part of YouTube CEO Neal Mohan's blog post on what creators can expect from the platform in 2026, the Google executive says, "AI will be a boon to the creatives who are ready to lean in." And by that, he means that creators will soon be able to create a YouTube Short with their own AI likeness later in the year.
YouTube Shorts is one of the platform's biggest formats, with the mobile-friendly, TikTok-like section drawing around 200 billion views every day. Although YouTube hasn't explained or provided an example of what these AI-generated Shorts will look like, Neal Mohan is adamant that AI "will remain a tool for expression" and "not a replacement" for creativity.
On the plus side, there will be transparency and protections in place, with AI-generated content clearly labeled as such, and creators able to manage and protect the use of their AI likeness. So, once this new feature goes live, we shouldn't see a bunch of fake Shorts from random users featuring AI versions of popular YouTubers.
Continue reading: YouTube will soon let you create Shorts with your AI likeness (full post)
Anthropic's CEO says NVIDIA is essentially selling nukes to North Korea and bragging about it
The CEO of Anthropic has commented on NVIDIA being able to supply China with sophisticated AI chips to power the nation's expansive development in AI, describing the US approving trade between NVIDIA and China as "like selling nuclear weapons to North Korea".
Dario Amodei spoke at the World Economic Forum in Davos earlier this week and was asked what he thinks about the US approving the export of high-powered AI chips to China, specifically from NVIDIA. Amodei said, "I think this is crazy. It's a bit like selling nuclear weapons to North Korea and bragging that Boeing made the casings."
The response is undoubtedly jarring, especially considering NVIDIA has invested as much as $10 billion into Anthropic, and Anthropic uses NVIDIA hardware to train its own AI models, such as its ChatGPT rival, Claude.
AMD's Adrenalin Edition software for Radeon now includes an optional AI Bundle
AMD Software: Adrenalin Edition 26.1.1 is available now, and it introduces an optional component for Radeon graphics users called AI Bundle. This streamlined one-click installer sets up AI development and creative tools, ready to use on AMD systems with at least a Radeon RX 7700 desktop GPU or a Ryzen AI 300 Series APU.
"Traditionally, getting an AI stack ready on Windows meant multiple steps and frequent detours: graphics drivers, Python environments, framework downloads, environment variables, and versions that don't play nicely together," AMD explains. "AI Bundle eliminates that complexity. During your AMD Software: Adrenalin Edition application install, simply check the AI Bundle option, and you're ready to go."
The AI Bundle includes PyTorch on Windows for training and inference, ComfyUI for AMD ROCm accelerated creative workflows such as image generation, Ollama for text generation and automation, LM Studio for running large language models, and Amuse for text-to-image generation optimized for Radeon hardware.
NVIDIA upgrades Vera Rubin HBM4 bandwidth by 10% in order to stay ahead of AMD Instinct MI455X
NVIDIA revised its Vera Rubin VR200 NVL72 AI server spec at CES 2026 a couple of weeks ago, increasing the HBM4 memory bandwidth by 10% to ensure it beats AMD's upcoming Instinct MI455X AI accelerator.
In a new post on X, @SemiAnalysis notes that the NVIDIA Vera Rubin NVL72 AI server specifications now list HBM4 memory bandwidth at 22.2TB/sec, a 10% increase from the specs NVIDIA disclosed at GTC 2025 last year.
The original 13TB/sec of HBM4 memory bandwidth unveiled for Vera Rubin NVL144 last year was already impressive, with NVIDIA initially asking for 9Gbps per pin from HBM4 before pushing for a faster 10-11Gbps... but why did NVIDIA upgrade the HBM4 bandwidth on Vera Rubin?
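The jump tracks directly with per-pin speed. As a rough sketch (the stack count is our assumption, not a confirmed figure), JEDEC HBM4 uses a 2048-bit interface per stack, so per-stack bandwidth scales linearly with pin speed, and around 10.8Gbps across eight stacks lands near the reported 22.2TB/sec.

```python
# HBM bandwidth sketch: per-stack bandwidth = pin speed x interface width.
# Assumptions: JEDEC HBM4's 2048-bit per-stack interface, plus (our guess)
# eight stacks per GPU; actual stack counts are not confirmed here.
def stack_bw_tbs(pin_gbps: float, bus_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s (1 TB = 1e12 bytes)."""
    return pin_gbps * 1e9 * bus_bits / 8 / 1e12

print(stack_bw_tbs(9.0))        # 2.304 TB/s at the original 9Gbps ask
print(stack_bw_tbs(10.0))       # 2.56 TB/s at 10Gbps
print(8 * stack_bw_tbs(10.8))   # ~22.1 TB/s -- near the reported 22.2TB/sec
```

The takeaway: squeezing roughly one extra Gbps out of each pin is enough to lift a multi-stack package by a couple of terabytes per second, which is exactly the kind of headroom a spec revision like this buys.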
Lawsuit alleges NVIDIA approved use of pirated books to train AI models
A complaint filed in the US District Court claims NVIDIA executives approved contact with Anna's Archive, a website that harbors millions of copyrighted books and academic papers, to discuss a partnership that involves using Anna's Archive as a dataset for training its Large Language Models (LLMs).
The complaint alleges that "competitive pressures drove NVIDIA to piracy," and that internal NVIDIA emails demonstrate a member of the company's data strategy team contacting Anna's Archive about the collaboration. Furthermore, the complaint states that Anna's Archive warned NVIDIA that its treasure trove of data was obtained illegally, and asked how Team Green wanted to proceed.
The lawsuit states that within a week, NVIDIA approved the collaboration, and in response, Anna's Archive offered NVIDIA approximately 500 terabytes of data. "Desperate for books, NVIDIA contacted Anna's Archive -- the largest and most brazen of the remaining shadow libraries -- about acquiring its millions of pirated materials and 'including Anna's Archive in pre-training data for our LLMs,'" the complaint notes.
OpenAI to unveil first device in second half of this year
OpenAI CEO Sam Altman and former Apple chief design officer Jony Ive announced in May last year that they are teaming up to release OpenAI's first hardware product.
At the time of the announcement, Altman didn't provide a description of the product. Still, given OpenAI's dominance with ChatGPT, many have presumed it to be some kind of smaller, possibly wearable device that enables users to communicate directly with the online chatbot. Multiple reports have stated OpenAI is developing prototypes of small devices with no screen that can interact with users, and Altman did say the secret device will be more "peaceful" than a smartphone.
Now, OpenAI's policy chief, Chris Lehane, said on Monday that OpenAI is on track to unveil this mysterious device in the second half of 2026. Lehane said "devices" are one of OpenAI's big projects in 2026, and that he will have more to share about the topic "much later in the year." As for when it will become available to the public, Lehane didn't give an exact date, but did say a release in 2026 was "most likely," though "we will see how things advance."
Continue reading: OpenAI to unveil first device in second half of this year (full post)
ASUS chairman confirms company going 'all in AI', no more Zenfone, ROG smartphones to be made
ASUS chairman Jonney Shih has personally confirmed that "ASUS will no longer add new mobile phone models in the future," with its vast R&D efforts fully shifted into physical AI, as the company is "all in AI" now.
Just a few days ago, on January 16, ASUS held its "2025 Year-End Gala" at the Taipei Nangang Exhibition Center -- where Computex takes place -- awarding its staff 8 new cars and a bunch of other prizes, but the company also took the time to announce its future strategy.
In a pre-event interview, Shih confirmed the company will temporarily cease launching new smartphones and will fully shift its R&D prowess to commercial PC systems and "physical AI," as it's pushing hard into the Fourth Industrial Revolution.
OpenAI and Sam Altman confirm ads are coming to ChatGPT
OpenAI has announced in a new X post that it's beginning to test embedded advertisements within ChatGPT conversations, specifically within the free and Go tiers of ChatGPT.
The AI company has outlined its advertising principles in a new image. The first is "Answer Independence," which means ads do not influence the answers that ChatGPT provides users, and answers are not optimized toward ads; ads are also "always separate and clearly labelled". Next is "Conversation Privacy," which OpenAI explains is the act of keeping user conversations with ChatGPT private from advertisers, adding, "We will never sell your data to advertisers."
The third principle is "Choice and Control." This guideline states that users will always have the ability to turn off ad personalization, and the option of clearing data used for ads. Users will have these options available at any given time, and there will always be a way to turn off ads completely, presumably via a paid subscription tier.
Continue reading: OpenAI and Sam Altman confirm ads are coming to ChatGPT (full post)