Artificial Intelligence

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

KIOXIA brings its flash memory, SSD, and AI innovations to NVIDIA GTC 2026

Kosta Andreadis | Mar 12, 2026 8:03 AM CDT

NVIDIA GTC 2026, the company's long-running AI conference, kicks off next week in San Jose, and KIOXIA will be there, showcasing its cutting-edge memory and SSD storage solutions built to meet the demands of AI workloads head-on. GTC is all about bringing the AI industry together, from researchers to developers and businesses, with KIOXIA's focus centered on how its flash storage can play a critical role in empowering scalable and efficient AI infrastructures.

KIOXIA's booth will feature a wide range of demonstrations, including the company's groundbreaking AiSAQ (All-in-Storage ANNS with Product Quantization) technology. With the large volumes of data required for modern generative AI systems, this open-source solution offloads large-scale AI vector search from increasingly expensive DRAM to SSD storage - like KIOXIA's CM9 Series Enterprise NVMe SSDs.
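To see why offloading vector search from DRAM matters, here is a back-of-envelope sketch (my own illustrative numbers, not KIOXIA's) of how product quantization shrinks the in-memory footprint of a billion-vector index, leaving the full-precision vectors to live on SSD:

```python
# Illustrative back-of-envelope math (not KIOXIA's figures): how product
# quantization (PQ) shrinks the DRAM footprint of a large vector index.

def index_size_gb(num_vectors: int, dim: int, bytes_per_dim: float) -> float:
    """Raw storage for num_vectors embeddings at bytes_per_dim per dimension."""
    return num_vectors * dim * bytes_per_dim / 1e9

N = 1_000_000_000   # one billion vectors
DIM = 768           # a common embedding dimension (assumed)

full = index_size_gb(N, DIM, 4)   # float32: 4 bytes per dimension
# PQ example: 96 subvectors, each coded as 1 byte (256-centroid codebook)
pq = N * 96 / 1e9

print(f"float32 vectors: {full:.0f} GB")  # 3072 GB - far beyond DRAM budgets
print(f"PQ codes:        {pq:.0f} GB")    # 96 GB of compact codes
```

With the compact codes handling candidate search and the exact vectors fetched from NVMe only for re-ranking, DRAM requirements drop by more than an order of magnitude in this sketch.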

This plays into KIOXIA's "Redefining Vector DB Management: Uniting Massive Scalability with GPU-Accelerated Indexing" theater session on Wednesday, March 18, presented by Rory Bolt, a senior fellow and principal architect at the company.

Continue reading: KIOXIA brings its flash memory, SSD, and AI innovations to NVIDIA GTC 2026 (full post)

NVIDIA and ComfyUI streamline local 4K AI Video Generation on GeForce RTX hardware

Kosta Andreadis | Mar 11, 2026 11:32 PM CDT

At the Game Developers Conference (GDC) in San Francisco, NVIDIA and ComfyUI announced new updates that will streamline AI video generation on RTX GPUs and the DGX Spark for game developers. Although the use of generative AI in game development is somewhat controversial, these tools are all about 'concept development and storyboarding.' Plus, with the new simplified App View interface from ComfyUI, 4K video generation will work on any rig with a GeForce RTX 5090.

ComfyUI App View is designed to be a simple, easy way to generate video, with users only needing to enter a prompt, adjust settings and parameters, and then generate the video. This will sit alongside the more detailed, AI enthusiast-friendly Node View, with users given the option to switch between the two. Optimized for RTX hardware, NVIDIA confirms that performance has been improved by 40% since September 2025, with expanded native support for NVFP4 and FP8 formats.

That part is key because it opens the door to 2.5X faster performance and reduces VRAM requirements by 60% on GeForce RTX 50 Series graphics cards when using NVFP4. And with NVIDIA RTX Video Super Resolution support, AI-generated videos can be upscaled to 4K in a matter of seconds. If you've got a powerful GeForce RTX GPU and want to generate AI video locally, the following NVIDIA Studio Sessions video is for you.
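The VRAM claim is easy to sanity-check with simple arithmetic (my own sketch, not NVIDIA's published methodology): 4-bit weights take a quarter of the space of FP16, and the quoted 60% whole-pipeline saving is plausible once you account for activations and buffers that don't shrink.

```python
# Rough sketch of why low-precision formats cut VRAM (illustrative only,
# not NVIDIA's published methodology).

def weights_gb(params_billions: float, bits: int) -> float:
    """VRAM for model weights alone at a given precision."""
    return params_billions * 1e9 * bits / 8 / 1e9

P = 14.0  # e.g. a ~14B-parameter video model (assumed size)
for name, bits in [("FP16", 16), ("FP8", 8), ("NVFP4", 4)]:
    print(f"{name:6s}: {weights_gb(P, bits):.1f} GB")
# Weights shrink 4x going from FP16 to 4-bit; the overall pipeline saves
# less than 4x because activations, the VAE, and caches stay larger.
```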

Continue reading: NVIDIA and ComfyUI streamline local 4K AI Video Generation on GeForce RTX hardware (full post)

YouTube announces it's stepping up its fight against AI-generated content

Jak Connor | Mar 10, 2026 9:33 PM CDT

With the emergence of AI-powered tools, there have been genuine concerns about their misuse for nefarious purposes, such as impersonating public figures to sell a product or pushing a specific message online.

These AI "deepfake" videos, which are essentially AI-generated content designed to impersonate an individual, are a real problem that social media platforms will need to tackle. Now YouTube has announced it's upping its defences against AI deepfakes by expanding its likeness detection technology.

YouTube has outlined in a new blog post that creators in the YouTube Partner Program can now submit a short video of themselves along with a government ID to teach the system what they look like.

Continue reading: YouTube announces it's stepping up its fight against AI-generated content (full post)

NVIDIA's financial results show record quarterly revenue of $68 billion, driven by AI

Kosta Andreadis | Feb 25, 2026 10:34 PM CST

NVIDIA has announced its financial results for the fourth quarter and fiscal year, which saw the company generate a record $68.1 billion in revenue. This is up 20% from the third quarter and 73% from the same period a year ago, and is driven primarily by NVIDIA's record Data Center segment revenue of $62.3 billion.

This too was up 75% from a year ago, showing that the AI boom and appetite for the company's "accelerated computing and AI" are growing. NVIDIA's Data Center revenue for the full year also increased by 68% to a record $193.7 billion.

"Computing demand is growing exponentially, the agentic AI inflection point has arrived," said Jensen Huang, founder and CEO of NVIDIA. "Grace Blackwell with NVLink is the king of inference today, delivering an order-of-magnitude lower cost per token, and Vera Rubin will extend that leadership even further."

Continue reading: NVIDIA's financial results show record quarterly revenue of $68 billion, driven by AI (full post)

AMD and Meta sign massive AI deal, billions in chips, with Meta to own 10% of AMD

Kosta Andreadis | Feb 24, 2026 10:01 PM CST

AMD and Meta have announced a "multi-year, multi-generation partnership" that will see Meta deploy 6 gigawatts of AMD AI hardware, including various Instinct GPUs. This includes AMD's Helios rack-scale architecture optimized for Meta's AI workloads, which it plans to utilize to accelerate the deployment of new cutting-edge AI models. This partnership will also see the two companies align their roadmaps covering "silicon, systems, and software."

With the first gigawatt expected to begin deployment later this year, the deal also includes Meta buying a stake in AMD, which could see it own around 10% of the company. This "performance-based warrant for up to 160 million shares of AMD common stock" has multiple milestones attached, and fully vests when Meta deploys all 6 gigawatts of AMD Instinct GPUs.

"We are proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale," said Dr. Lisa Su, chair and CEO, AMD. "This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs, and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta's workloads."

Continue reading: AMD and Meta sign massive AI deal, billions in chips, with Meta to own 10% of AMD (full post)

OpenAI boss says AI is energy efficient because humans take '20 years' to get smart

Kosta Andreadis | Feb 23, 2026 12:02 AM CST

Recently, OpenAI CEO Sam Altman sat down for a lengthy interview with The Indian Express, where he gave a rather strange and cold response to a question about the energy required to train complex AI models. This has become a significant concern in many markets, as AI energy consumption is now dwarfing that of most, if not all, other industries.

The question underlying Sam Altman's bizarre response is essentially this: if a human can complete a task in a few seconds or minutes, while an AI model consumed vast amounts of energy (more than, say, a small city) to train, what's the point or benefit?

"One of the things that is always unfair in this comparison, where people talk about how much energy it takes to train an AI model relative to how much it costs for one human to do an inference query," Sam Altman says. "It also takes a lot of energy to train a human; it takes 20 years of life, and all of the food that you eat during that time, before you get smart."

Continue reading: OpenAI boss says AI is energy efficient because humans take '20 years' to get smart (full post)

NVIDIA CEO teases chips 'world has never seen before' for GTC: possibly Rubin Ultra or Feynman

Anthony Garreffa | Feb 19, 2026 2:51 AM CST

NVIDIA CEO Jensen Huang has been hanging out with engineers from Korean semiconductor giant SK hynix, enjoying some chicken and beer in California, where he teased that "a chip that will surprise the world will be unveiled at GTC next month".

In an exclusive interview with Jensen and Korean media outlet KED, the NVIDIA CEO said: "we're at the beginning of a billion-dollar infrastructure project". While enjoying some chicken and beer with NVIDIA and South Korean engineers, Jensen said: "we're one team with Korean semiconductors".

Jensen met with the KED reporter after having dinner with around 30 NVIDIA and SK hynix engineers at 99 Chicken, a Korea-style fried chicken restaurant in Santa Clara, California. The NVIDIA CEO readily accepted the unscheduled interview request, saying: "ask as much as you want".

Continue reading: NVIDIA CEO teases chips 'world has never seen before' for GTC: possibly Rubin Ultra or Feynman (full post)

Microsoft's AI boss says AI could replace all white-collar jobs within 18 months

Kosta Andreadis | Feb 15, 2026 9:49 PM CST

Mustafa Suleyman, Microsoft's AI CEO, recently sat down with the Financial Times to discuss all things AI, and he had a few words to say about AI one day taking over and making jobs and employment a thing of the past. In fact, when it comes to white-collar jobs that involve sitting at a computer day in and day out, Suleyman has set a timeline of 12 to 18 months for those jobs to become redundant and "fully automated by an AI."

"I think we're going to have a human-level performance on most, if not all, professional tasks," Mustafa Suleyman says. "White-collar work, where you're sitting down at a computer - either being, you know, a lawyer, or an accountant, or a project manager, or a marketing person - most of those tasks will be fully automated by an AI within the next 12 to 18 months."

Suleyman notes that this will be due to AI achieving "human-level performance" on nearly all professional tasks, regardless of industry. He adds that this AI advance has already reached the software engineering sector, where he says "AI-assisted coding" is now a mainstay.

Continue reading: Microsoft's AI boss says AI could replace all white-collar jobs within 18 months (full post)

SK hynix shows off 16Gb LPDDR6 at 14.4Gbps, while Samsung sends LPDDR6X samples to Qualcomm

Anthony Garreffa | Feb 13, 2026 12:36 AM CST

SK hynix will be showcasing its latest 16Gbit (2GB) LPDDR6 SDRAM design at the International Solid-State Circuits Conference (ISSCC) 2026 next week.

The new 16Gbit LPDDR6 memory targets 14.4Gbps per I/O pin, the fastest transfer speed in the LPDDR6 standard. SK hynix says the design is built on its new 1c DRAM process, its latest 10nm-class node, and the ISSCC 2026 preview notes that the paper will focus on the power-saving and signal-handling changes needed to hit 14.4Gbps operation on the new LPDDR6 modules.
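Per-pin speed translates to channel bandwidth with straightforward arithmetic. Assuming LPDDR6's 24-bit channel width (my assumption based on the JEDEC spec; the announcement itself only quotes per-pin speed):

```python
# Channel bandwidth from per-pin data rate. The 24-bit channel width is an
# assumption from the JEDEC LPDDR6 spec, not from SK hynix's announcement.

def channel_bandwidth_gbs(gbps_per_pin: float, pins: int) -> float:
    """Aggregate channel bandwidth in GB/s (bits -> bytes)."""
    return gbps_per_pin * pins / 8

bw = channel_bandwidth_gbs(14.4, 24)
print(f"{bw:.1f} GB/s per 24-bit channel")  # 43.2 GB/s
```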

According to The Bell, Samsung has also reportedly started providing early LPDDR6X memory samples to Qualcomm. The new standard hasn't been detailed yet, but LPDDR6X commercialization is expected in the second half of 2027.

Continue reading: SK hynix shows off 16Gb LPDDR6 at 14.4Gbps, while Samsung sends LPDDR6X samples to Qualcomm (full post)

Samsung officially ships HBM4 ready for NVIDIA's next-gen Rubin AI chips

Anthony Garreffa | Feb 12, 2026 10:10 PM CST

Samsung has been fighting hard in its semiconductor and HBM memory business over the last few years, but now it has officially started commercially deploying its next-gen HBM4 memory, ready for NVIDIA's new Rubin AI chips.

The company explained in a press release that its new HBM4 memory has transfer speeds of 11.7Gbps, but when pushed to the speeds NVIDIA requires, Samsung's new HBM4 is capable of 13Gbps. The leading-edge DRAM is based on a 4nm logic die for maximum performance, fabbed in-house at Samsung Foundry, with its 1c DRAM also in play.
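Those per-pin speeds translate into per-stack bandwidth via HBM4's 2048-bit interface (the interface width comes from the JEDEC HBM4 spec, an assumption on my part; the speeds are Samsung's):

```python
# Per-stack bandwidth from per-pin speed, assuming HBM4's 2048-bit interface
# (per the JEDEC spec; the per-pin speeds are from Samsung's announcement).

def stack_bandwidth_tbs(gbps_per_pin: float, bus_bits: int = 2048) -> float:
    """Per-stack bandwidth in TB/s: pin rate x bus width, bits -> bytes."""
    return gbps_per_pin * bus_bits / 8 / 1000

print(f"{stack_bandwidth_tbs(11.7):.2f} TB/s")  # ~3.00 TB/s at 11.7Gbps
print(f"{stack_bandwidth_tbs(13.0):.2f} TB/s")  # ~3.33 TB/s at 13Gbps
```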

Sang Joon Hwang, Executive Vice President and Head of Memory Development at Samsung Electronics, said: "Instead of taking the conventional path of utilizing existing proven designs, Samsung took the leap and adopted the most advanced nodes like the 1c DRAM and 4nm logic process for HBM4. By leveraging our process competitiveness and design optimization, we are able to secure substantial performance headroom, enabling us to satisfy our customers' escalating demands for higher performance, when they need them".

Continue reading: Samsung officially ships HBM4 ready for NVIDIA's next-gen Rubin AI chips (full post)

NVIDIA rumor: lowered required HBM4 speeds for Rubin AI chips, else it won't get enough supply

Anthony Garreffa | Feb 10, 2026 10:10 PM CST

NVIDIA only recently asked for 9Gbps HBM4 before pushing for 10-11Gbps, but now rumor has it that NVIDIA has quietly lowered the required HBM4 spec speeds for Rubin, because otherwise it likely couldn't secure the full supply volume it needs.

Analyst @Jukan posted on X: "Anyway, just wait and see. There's a rumor going around that NVIDIA lowered the required speed specs for HBM4 because even with Samsung alone, they probably can't meet the full volume NVIDIA needs. Who knows? Maybe Micron could sneak in and supply some HBM4 through this gap. (But personally, I still think SK Hynix is the one who's gonna supply it lol)".

If the rumors are true, it would make sense: according to other recent reports, US-based Micron won't be providing any of its HBM4 to NVIDIA for Rubin, with SK hynix supplying 70% of the company's HBM4 needs and Samsung Electronics the remaining 30%.

Continue reading: NVIDIA rumor: lowered required HBM4 speeds for Rubin AI chips, else it won't get enough supply (full post)

Intel shows off next-gen 'ZAM' memory prototype: new Z-angle architecture, next-gen performance

Anthony Garreffa | Feb 10, 2026 9:09 PM CST

Intel and SoftBank announced their next-generation ZAM memory technology recently, but now the new ZAM memory prototype has been shown off at the recent Intel Connection Japan 2026 event.

The focus on ZAM memory discussion during the event was on how the new Z-angle architecture would help mitigate performance issues and improve thermals using existing cooling technology. Intel Fellow and CTO of Intel Government Technologies, Joshua Fryman, was there alongside Intel Japan CEO Makoto Ono.

ZAM has so far been limited to research papers and press releases, but through the new partnership between Intel and SAImemory (a SoftBank subsidiary), the team is now pushing ahead with prototypes. The biggest difference compared to HBM and other memories is that ZAM routes a massive amount of interconnect topology diagonally throughout the die stack, instead of drilling vertically down. Intel says ZAM's biggest benefit is its thermal capabilities.

Continue reading: Intel shows off next-gen 'ZAM' memory prototype: new Z-angle architecture, next-gen performance (full post)

ChatGPT now has ads and OpenAI says they 'do not influence' answers

Kosta Andreadis | Feb 9, 2026 11:57 PM CST

OpenAI has announced that it is currently testing ads in ChatGPT in the U.S. for users on the Free and Go subscription tiers. The AI company is quick to note that the addition of ads won't "influence the answers ChatGPT gives you" and that your conversations with the AI platform will remain private and won't be used for marketing.

The good news for those on Plus, Pro, Business, Enterprise, or Education accounts is that they won't see ads, and ChatGPT will remain unchanged. OpenAI has showcased what the ads will look like and how they will be clearly marked as sponsored. Ads will be related to the subject or topic, with a food-and-recipe example interaction delivering the sort of ad you might see elsewhere online.

OpenAI notes that it's adding ads to ChatGPT to support "broader access" to its features, presumably to cover the costs of hundreds of millions of people interacting with ChatGPT every day.

Continue reading: ChatGPT now has ads and OpenAI says they 'do not influence' answers (full post)

Razer CEO dislikes 'GenAI Slop' but believes 'AI is a tool to help game developers'

Kosta Andreadis | Feb 9, 2026 10:03 PM CST

In a new post tagged 'the future of gaming is AI,' Razer, best known for its gaming peripherals, explains why it's investing over $600 million in AI. That investment is primarily about AI tools and technologies for game development, with Razer believing that how generative AI is used is more important than whether it's used at all.

"The way we see it is that AI is a tool to help game developers make better games, rather than replace human creativity," Razer CEO and Co-founder Min-Liang Tan said during a recent episode of The Verge's Decoder podcast. "As gamers... what we're unhappy with is GenAI slop. When I play a game, I want to be engaged. I want to be immersed. I want to compete. I don't want to see characters with extra fingers or shoddily written storylines."

That comment is in response to the influx of AI-generated images and videos, widely referred to as "AI slop," which are considered inferior to human-created art. For Razer, generative AI in games is more of an extension of NPC behavior, procedural systems, and AI used to "strengthen the craft of making games."

Continue reading: Razer CEO dislikes 'GenAI Slop' but believes 'AI is a tool to help game developers' (full post)

AI music generation comes to AMD Ryzen AI processors and Radeon GPUs

Kosta Andreadis | Feb 5, 2026 10:34 PM CST

When it comes to images, video, voice, and music, AI generation has reached a point where a wide range of models can produce impressive results. That said, on the creative side of generative AI, most users still connect to cloud-based services. ACE Step 1.5 is an open-source foundation model for generating music, and now it's been optimized to run locally on AMD Ryzen AI processors and AMD Radeon graphics cards.

AMD notes that you can generate full-length music tracks like the 'Country Ballad' example above, iterate, and keep all assets on-device from initial prompt to the final piece of generated music.

"Without per-track fees or upload limits, creators can experiment freely, enabling them to sketch ideas, test arrangements, and explore new sounds without friction," AMD writes. "On-device generation enables immediate iteration, making it easier to refine or discard ideas in real time without relying on an internet connection."

Continue reading: AI music generation comes to AMD Ryzen AI processors and Radeon GPUs (full post)

AMD CEO Lisa Su says AI is 'accelerating at a pace that I would not have imagined'

Kosta Andreadis | Feb 4, 2026 9:31 PM CST

AMD recently reported its Q4 2025 earnings, with record revenue of $10.3 billion and a gross margin of 54%. This was indicative of AMD's banner year for investors, driven by the AI boom and demand for AMD's EPYC processors and Instinct graphics cards.

As part of its fourth-quarter and 2025 financial results, the company also forecast revenue of $9.8 billion for the first quarter of 2026, plus or minus $300 million. Although this is higher than the $9.38 billion estimate from Wall Street analysts, AMD's share price dropped 13% after its latest financial report.

The reason for the drop, according to reports, is that AMD's forecast felt too conservative for a company in the middle of the AI gold rush. Even though this follows last year's announcement of key AI partnerships with OpenAI and Oracle, and AMD's planned rollout of its server-based Helios AI systems later this year, there's also a growing sense of caution about the sustainability of AI infrastructure spending.

Continue reading: AMD CEO Lisa Su says AI is 'accelerating at a pace that I would not have imagined' (full post)

Gaming stocks tumble after Google shows Project Genie's real-time AI-generated 3D worlds

Kosta Andreadis | Feb 2, 2026 12:31 AM CST

Project Genie, from Google, is a new experimental research prototype available to Google AI Ultra subscribers in the U.S. Although it's only a couple of days old, Project Genie has been making waves across social media because it leverages AI to create fully interactive worlds and environments with realistic physics that you can freely explore.

Project Genie combines Google's general-purpose world model, Genie 3, with Nano Banana Pro and Gemini, allowing users to sketch and shape worlds before jumping in to explore them. And with that, many consider it a significant milestone for AI and a step toward generating video games that you can play in real time from a simple text prompt.

At a glance, it's groundbreaking and feels like a glimpse of a future of AI-generated games, but the resolution is limited to 720p, the frame rate is 24 FPS, and the input latency is anything but responsive or smooth. Plus, you've only got 60 seconds. That said, after Google announced Project Genie on Jan 29, 2026, its impact reached the stock market, sending down the stocks of several notable game-related companies, those building engines and handcrafted open worlds.

Continue reading: Gaming stocks tumble after Google shows Project Genie's real-time AI-generated 3D worlds (full post)

TSMC needs to double production over the next 10 years to keep up with NVIDIA demand

Anthony Garreffa | Feb 1, 2026 9:09 PM CST

TSMC is pumping out as much silicon as it can for virtually every big tech company, and NVIDIA CEO Jensen Huang, currently in Taiwan on business, has told local media that TSMC needs to double production over the next 10 years for the "world's largest infrastructure" buildout.

Jensen told local Taiwanese media: "TSMC's production capacity may grow by more than 100% in the next ten years, which is a very significant scale expansion, the largest infrastructure investment in human history, and it will have to double just to meet NVIDIA's demand".

TSMC has been expanding its semiconductor fabs for a while now, as it has factored in geopolitical issues, which has seen TSMC pump serious money into regions including the EU, Japan, and the US. TSMC also plans to build out a supply chain in the US with a massive $250 billion mega-buildout which includes advanced packaging, semiconductors, and R&D centers.

Continue reading: TSMC needs to double production over the next 10 years to keep up with NVIDIA demand (full post)

SK hynix makes 'significant' progress in NVIDIA's extensive HBM4 tests, close to mass supply

Anthony Garreffa | Jan 30, 2026 1:33 AM CST

SK hynix has reportedly made "significant progress" in NVIDIA's extensive HBM4 qualification tests, with its HBM4 set to end up inside NVIDIA's upcoming Rubin AI GPUs.

In a new report from South Korean media outlet Hankyung, picked up by analyst @Jukan on X, industry sources say SK hynix achieved "meaningful results" in NVIDIA's HBM4 System-in-Package (SiP) testing earlier this month. SK hynix started the Customer Sample (CS) certification process with NVIDIA in October 2025, during which defects were found in some circuits.

SK hynix made modifications to the circuits and adjusted the process, delivering improved HBM4 memory chips to NVIDIA earlier this month. It's been confirmed that these optimized products are very close to mass production readiness; the new HBM4 memory chips run at 10Gbps in general environments and hit 9-10Gbps under NVIDIA's rigorous test conditions for temperature, humidity, and impact.

Continue reading: SK hynix makes 'significant' progress in NVIDIA's extensive HBM4 tests, close to mass supply (full post)

Elon Musk says xAI will generate high-quality video games 'at scale' in 2027

Kosta Andreadis | Jan 29, 2026 11:32 PM CST

Last year, Elon Musk took to social media to announce (or simply predict) that the xAI game studio would release a "great AI-generated game" before the end of 2026. Although the statement itself is vague, and there are already games like the popular Arc Raiders that feature AI-generated content, the assumption is that this would be a fully playable experience in which gameplay, art, level design, and so forth are all generated by AI.

It's a bold statement (or prediction) because we have yet to see even a small vertical slice of an impressive AI-generated game, but it's one Elon Musk is doubling down on. Responding to a post highlighting how xAI's Grok Imagine AI-video generation has grown in popularity and quality, Elon Musk is now predicting big things for AI-generated content in 2027. Specifically, stuff coming from xAI.

"Real-time, high-quality shows and video games at scale, customized to the individual, next year," the post reads. Adding that high-resolution AI-generated videos are coming this year, but they're too expensive to be mass-market. That second part does sound very likely; however, "high-quality shows" and "video games at scale" still feel out of reach.

Continue reading: Elon Musk says xAI will generate high-quality video games 'at scale' in 2027 (full post)