Artificial Intelligence - Page 7

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

Broadcom secures $10 billion ASIC contract, with Apple and xAI next in line for new AI chips

Anthony Garreffa | Sep 9, 2025 6:06 PM CDT

Broadcom has secured a huge $10 billion custom ASIC contract from a major new customer outside of the core hyperscale cloud service provider (CSP) segment, with Apple and xAI next in line.

In a new report from Digitimes picked up by @Jukanrosleve on X, we're hearing that further orders from TikTok parent company ByteDance, Apple, and Elon Musk's xAI are already in the pipeline, with analysts seeing the development as reinforcing Broadcom's position as a "credible challenger" to NVIDIA.

According to industry sources, OpenAI's new custom ASIC will enter mass production in 2026, positioning the AI startup as Broadcom's fourth confirmed large-scale ASIC customer. Neither Broadcom nor OpenAI has commented on the deal yet, but insiders familiar with Broadcom's roadmap have confirmed the order is genuine.

Continue reading: Broadcom secures $10 billion ASIC contract, with Apple and xAI next in line for new AI chips (full post)

NVIDIA CFO talks $5B AI GPU revenue in China, GB300 ramp up, next-gen Vera Rubin AI GPU demand

Anthony Garreffa | Sep 8, 2025 8:34 PM CDT

NVIDIA CFO Colette Kress has discussed the ongoing geopolitical issues between the US and China, its H20 AI GPU, new Blackwell Ultra GB300 system, and next-gen Vera Rubin AI platform at the recent Goldman Sachs Communacopia + Technology Conference.

Kress said the geopolitical issues between the United States and China are affecting NVIDIA's ability to recognize revenue from H20 AI GPU sales to China. She added that the company has received H20 licenses from the Trump administration, and that H20 AI GPU sales could account for around $5 billion in revenue in Q3 2025.

Kress opened her talk with NVIDIA's data center revenue and its growth, which held up even with H20 AI GPU sales removed from the mix: looking at NVIDIA's revenues -- including data center and networking -- there was 12% sequential (quarter-over-quarter) growth in Q2 2025.

Continue reading: NVIDIA CFO talks $5B AI GPU revenue in China, GB300 ramp up, next-gen Vera Rubin AI GPU demand (full post)

RTX 5090 and RTX PRO 6000 GPU have a new bug: need a full system reboot after virtualization

Anthony Garreffa | Sep 8, 2025 4:19 PM CDT

NVIDIA's higher-end GeForce RTX 5090 and RTX PRO 6000 cards are hitting a new bug after running virtualization for a few days, requiring a full system reboot to get them back online.

CloudRift, a GPU cloud for developers, is reporting crashes on both the RTX 5090 and RTX PRO 6000, saying that after a "few days" of VM usage the cards become completely unresponsive. The GPUs can no longer be accessed unless the node is rebooted. Thankfully, it's only happening to the RTX 5090 and RTX PRO 6000; the RTX 4090, Hopper H100, and Blackwell B200 aren't affected, for now.

What's happening exactly? The GPU gets assigned to a VM environment through the kernel's VFIO framework, and after a Function Level Reset (FLR), the GPU is completely unresponsive. The unresponsive GPU then triggers a kernel soft lockup that deadlocks the host and guest environments. The only way out of that deadlock is to reboot the machine, which isn't easy for CloudRift given the volume of guest machines it hosts.
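As a rough illustration of the passthrough flow described above, the sketch below uses the standard Linux sysfs interfaces involved; the PCI address is a hypothetical example, and this is not a reproduction of CloudRift's actual setup.

```shell
# Sketch of handing a GPU to a VM via VFIO, then triggering the reset path.
# Assumes a hypothetical GPU at PCI address 0000:65:00.0 (example only);
# requires root and a host with IOMMU/VFIO enabled.

GPU=0000:65:00.0

# Unbind the GPU from its current driver and hand it to vfio-pci
echo "$GPU" > /sys/bus/pci/devices/$GPU/driver/unbind
echo vfio-pci > /sys/bus/pci/devices/$GPU/driver_override
echo "$GPU" > /sys/bus/pci/drivers_probe

# When the VM releases the device, the kernel resets it (an FLR where
# supported); the same reset can be triggered manually via sysfs:
echo 1 > /sys/bus/pci/devices/$GPU/reset

# Per the report, affected RTX 5090 / RTX PRO 6000 cards never come back
# after this reset, and only a full host reboot recovers them.
```

This is the point at which the reported deadlock appears: the reset never completes, so anything waiting on the device hangs along with it.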

Continue reading: RTX 5090 and RTX PRO 6000 GPU have a new bug: need a full system reboot after virtualization (full post)

GIGABYTE's new AI TOP CXL card: add up to 512GB memory to TRX50 and W790 AI TOP motherboards

Anthony Garreffa | Sep 5, 2025 6:06 PM CDT

GIGABYTE has just revealed its new AI TOP CXL R5X4 memory expansion card, letting AI TOP motherboard owners add another 512GB of RAM to their system. Check it out:

The new GIGABYTE AI TOP CXL R5X4 won't work on every motherboard; it's only supported on two: the TRX50 AI TOP and W790 AI TOP, with GIGABYTE reminding users to contact the company before purchasing to confirm compatibility.

The AI TOP CXL R5X4 uses the regular PCIe 5.0 x16 interface, supporting CXL 2.0/1.1 operation, with four DDR5 DIMM slots for registered ECC memory modules, for a total of up to 512GB (128GB x 4 sticks). GIGABYTE's new AI TOP CXL R5X4 measures 12.0 x 25.4cm and features a 16-layer HDI PCB.
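As a rough sketch of what those specs imply, the capacity math and the host-link ceiling work out as below. The PCIe 5.0 per-lane rate and encoding overhead are standard figures, not GIGABYTE's numbers, and the card's real-world throughput will differ.

```python
# Rough numbers for the AI TOP CXL R5X4's capacity and host-link bandwidth,
# using standard PCIe 5.0 figures (the card's actual throughput may differ).

DIMM_CAPACITY_GB = 128
DIMM_SLOTS = 4
total_gb = DIMM_CAPACITY_GB * DIMM_SLOTS  # 512 GB, matching GIGABYTE's spec

# PCIe 5.0: 32 GT/s per lane with 128b/130b encoding
LANES = 16
GT_PER_LANE = 32
EFFICIENCY = 128 / 130
link_gbps = LANES * GT_PER_LANE * EFFICIENCY  # ~504 Gbit/s per direction
link_gb_s = link_gbps / 8                     # ~63 GB/s per direction

print(f"Total expansion capacity: {total_gb} GB")
print(f"PCIe 5.0 x16 link ceiling: {link_gb_s:.0f} GB/s per direction")
```

That ~63 GB/s ceiling is the trade-off of CXL expansion: you get 512GB of extra capacity, but it sits behind the PCIe link rather than the CPU's native memory channels, so it behaves as a slower memory tier.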

Continue reading: GIGABYTE's new AI TOP CXL card: add up to 512GB memory to TRX50 and W790 AI TOP motherboards (full post)

OpenAI to move away from NVIDIA GPUs with new Broadcom partnership

Jak Connor | Sep 5, 2025 10:08 AM CDT

OpenAI has reportedly signed a deal with US semiconductor firm Broadcom in a long-term plan to reduce its reliance on NVIDIA GPUs.

OpenAI, the creator of the famed AI chatbot ChatGPT, has reportedly entered into a partnership with Broadcom to make custom AI accelerators slated to go into use next year. The report comes from the Financial Times and Reuters, which state that OpenAI is looking to move away from NVIDIA's expensive AI GPUs for training future models, and wants to create its own custom chip to reduce expenditure.

NVIDIA has become the global wholesaler for AI GPUs, with the company's data center division generating $115.2 billion last year - more than AMD and Intel's entire company revenues combined. NVIDIA's dominance in the market is the result of AI companies such as Meta, Microsoft, and OpenAI requiring ever more GPU horsepower to train more sophisticated AI models. However, AI GPUs are extremely expensive, and more horsepower means larger investments.

Continue reading: OpenAI to move away from NVIDIA GPUs with new Broadcom partnership (full post)

Warner Bros Discovery joins the list of companies suing Midjourney AI

Kosta Andreadis | Sep 4, 2025 10:33 PM CDT

A few months ago, we reported on two of the biggest entertainment studios in Hollywood, Disney and Universal, suing the AI company Midjourney for copyright infringement. The lawsuit didn't pull any punches, with the entertainment giants calling Midjourney and its AI-powered image and animation generation tools a "copyright free-rider" and "a bottomless pit of plagiarism."

We can now add Warner Bros. Discovery to the list of entertainment companies suing Midjourney for copyright infringement. "Superman, Batman, Wonder Woman, Bugs Bunny, and Scooby-Doo," the complaint reads. "These are some of the most popular and valuable fictional characters ever created, and they (and many other characters) are owned by Warner Bros. Discovery," it continues, adding that only Warner Bros. Discovery has the right to create content and build a business around copyrighted characters.

The complaint is as strongly worded as previous lawsuits filed against Midjourney, stating that "Midjourney thinks it is above the law" and that the company could "easily stop its theft and exploitation" of Warner Bros. Discovery's intellectual property.

Continue reading: Warner Bros Discovery joins the list of companies suing Midjourney AI (full post)

Analyst says all 6 of NVIDIA's next-gen Vera Rubin AI chips are in final pre-production at TSMC

Anthony Garreffa | Sep 4, 2025 9:53 PM CDT

NVIDIA has confirmed its next-gen Vera Rubin AI platform isn't delayed, and that it's on track for release in the second half of 2026, with all 6 of its chips already taped out at TSMC.

JPMorgan analyst Harlan Sur, who attended an investor group meeting with Toshiya Hari, NVIDIA's VP of AI and strategic finance, walked away with an upbeat view on the demand profile for NVIDIA's current-gen AI GPUs and the upcoming production cadence of its next-gen Vera Rubin AI platform.

NVIDIA is pumping out its new Blackwell Ultra GB300 AI GPUs and AI servers, with Sur noting of the Blackwell ramp-up: "we believe NVIDIA's 12-month forward order book continues to outstrip supply". He reiterated his "overweight" rating on NVIDIA stock, lifting his price target 26% from $170 to $215.

Continue reading: Analyst says all 6 of NVIDIA's next-gen Vera Rubin AI chips are in final pre-production at TSMC (full post)

Chinese companies prepared to pay $24,000 for NVIDIA's next-gen China-specific B30A AI GPU

Anthony Garreffa | Sep 4, 2025 5:05 PM CDT

Chinese tech companies are willing to pay $24,000 for each of NVIDIA's new China-specific B30A AI GPUs, and are eager to have their H20 AI GPU orders processed ASAP.

In a new report from Reuters, sources say Chinese tech companies like Alibaba and TikTok parent company ByteDance are eager to have their H20 AI GPU orders processed, and are willing to pay $24,000 for NVIDIA's next-gen China-specific B30A AI GPU.

Chinese AI developers of all sizes, including small and medium-sized businesses, continue to prefer NVIDIA's AI GPU offerings, as they provide better software integration and performance in chip clusters, even as the Chinese government aggressively pushes local companies to rely on domestic AI chips, going as far as mandating that government clusters feature at least 50% domestic AI chips.

Continue reading: Chinese companies prepared to pay $24,000 for NVIDIA's next-gen China-specific B30A AI GPU (full post)

Samsung to use more ASML High-NA EUV lithography tools to speed up 2nm GAA wafer production

Anthony Garreffa | Sep 4, 2025 6:18 AM CDT

Samsung is reportedly considering bringing in more of ASML's new High-NA EUV lithography machines into its domestic semiconductor labs to accelerate the advancement of its leading-edge 2nm GAA process technology.

Domestic semiconductor foundry competition is heating up in South Korea, with SK hynix and now Samsung eyeing ASML's new High-NA EUV machines. In a new report from FNnews, we're hearing that Samsung, TSMC, and Intel are all looking into deploying additional High-NA EUV machines to boost next-gen semiconductor technology.

Samsung's move to use more High-NA EUV machines is essential for building new chips with ultra-fine circuits measuring 2nm or less, and the timing of the installations could determine success or failure in next-gen AI and high-performance semiconductor markets. Samsung was the first to deploy ASML's High-NA EUV EXE:5000 lithography machine in South Korea, at its Hwaseong campus in March 2025, with SK hynix claiming a similar first.

Continue reading: Samsung to use more ASML High-NA EUV lithography tools to speed up 2nm GAA wafer production (full post)

AMD Instinct MI500 UAL256 Mega Pod should scale up to 256 GPUs, 64 EPYC 'Verano' CPUs

Anthony Garreffa | Sep 4, 2025 5:22 AM CDT

AMD will release its next-gen Instinct MI500 Scale Up Mega Pod with a huge 256 x MI500 AI chips spread across three interconnected racks, with next-gen AMD EPYC "Verano" CPUs.

The new MI500 Scale Up Mega Pod will feature 256 physical GPU packages, reports SemiAnalysis on X, versus just 144 physical GPU packages for NVIDIA's next-gen Kyber VR300 NVL576 AI server. Each of the outer racks would house 32 compute trays per rack, with 18 switch trays in the middle, for a total of 64 compute trays per Mega Pod.

NVIDIA's competing Kyber VR300 NVL576 AI servers will pack new Vera Rubin AI GPUs inside. Instinct MI500 UAL256 will be AMD's second-gen rack-scale AI system, following the upcoming Instinct MI450X IF128, known as "Helios," which launches in 2H 2026.

Continue reading: AMD Instinct MI500 UAL256 Mega Pod should scale up to 256 GPUs, 64 EPYC 'Verano' CPUs (full post)

SK hynix assembles industry's first High-NA EUV machine for its M16 fab plant in South Korea

Anthony Garreffa | Sep 3, 2025 11:44 PM CDT

SK hynix has just announced that it has assembled the industry's first High-NA EUV lithography machine, ready for mass production at its M16 fabrication plant in South Korea.

High-NA EUV machines are next-generation lithography systems that achieve better resolution by using a larger numerical aperture (NA) than earlier EUV systems, enabling the industry's finest patterns and helping shrink features and improve density.

The TWINSCAN EXE:5200B, the first volume-production model in ASML's High-NA EUV line, raises the NA from 0.33 to 0.55, for roughly 40% finer resolution, enabling printing of transistors 1.7 times smaller and transistor densities 2.9 times higher than the existing EUV systems.
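Those figures can be sanity-checked with the Rayleigh resolution criterion, CD = k1 * wavelength / NA. This is a back-of-the-envelope sketch, with the EUV wavelength and the k1 factor as standard assumed values rather than ASML's exact model:

```python
# Back-of-the-envelope check of the High-NA EUV claims using the
# Rayleigh criterion: critical dimension CD = k1 * wavelength / NA.
WAVELENGTH_NM = 13.5  # EUV light source wavelength
K1 = 0.33             # process-dependent factor (assumed constant here)

def critical_dimension(na: float) -> float:
    """Smallest printable feature size (nm) at a given numerical aperture."""
    return K1 * WAVELENGTH_NM / na

cd_low = critical_dimension(0.33)   # existing EUV (NA 0.33)
cd_high = critical_dimension(0.55)  # High-NA EUV (NA 0.55)

linear_shrink = cd_low / cd_high        # ~1.67x smaller features
density_gain = linear_shrink ** 2       # ~2.78x more transistors per area
resolution_gain = 1 - cd_high / cd_low  # ~40% finer resolution

print(f"Linear shrink: {linear_shrink:.2f}x")
print(f"Density gain:  {density_gain:.2f}x")
print(f"Resolution:    {resolution_gain:.0%} finer")
```

The results land close to the quoted 1.7x smaller transistors and 2.9x higher density, which is simply the linear shrink squared.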

Continue reading: SK hynix assembles industry's first High-NA EUV machine for its M16 fab plant in South Korea (full post)

Acer's new Veriton GN100: AI Mini-PC powered by the new NVIDIA GB10 Superchip, starts at $3999

Anthony Garreffa | Sep 3, 2025 7:07 PM CDT

Acer has just unveiled its new Veriton GN100 AI Mini-PC workstation system powered by NVIDIA's new GB10 Superchip, starting at $3999 and get this... you can combine two of them together, almost like SLI AI Mini-PCs!

The Acer Veriton GN100 is an ultra-compact AI workstation built around the NVIDIA GB10 Grace Blackwell Superchip, with 128GB of unified memory, 4TB of storage, and 1 PFLOPS of FP4 AI performance. Inside, the GB10 Superchip pairs a 20-core Arm CPU with a Blackwell GPU packing 6144 cores.

There's 128GB of LPDDR5X memory that gets shared between the CPU and GPU, the aforementioned 4TB of SSD storage, and a boatload of I/O that includes 4 x USB 3.2 Type-C ports, HDMI 2.1b, a LAN port, Wi-Fi 7, Bluetooth 5.1, and NVIDIA ConnectX-7 NIC.

Continue reading: Acer's new Veriton GN100: AI Mini-PC powered by the new NVIDIA GB10 Superchip, starts at $3999 (full post)

NVIDIA is teaching AI human common sense by getting it to make toast

Jak Connor | Sep 3, 2025 7:33 AM CDT

NVIDIA has detailed in a recent press release that it intends to teach AI what seems obvious to humans: common sense. The company says that visual AI models currently lack this understanding, and if physical AI is ever to come to the real world, it will need to have a grasp on what humans deem common sense.

Common sense, or the basic understanding that humans develop through real-world experiences, can't be organically learned by AI; the models have to be specifically taught it. In order to teach AI models common sense, a series of tests was developed to coach them on the limitations of the physical world.

For example, NVIDIA's Cosmos Reason model, an open reasoning vision language model (VLM) that is used for physical AI applications such as robotics, autonomous vehicles, and smart spaces, is currently leading when it comes to the physical reasoning (common sense) leaderboard.

Continue reading: NVIDIA is teaching AI human common sense by getting it to make toast (full post)

Microsoft's VibeVoice uses AI to create 90-minute podcasts with multiple speakers

Kosta Andreadis | Sep 2, 2025 9:32 PM CDT

Microsoft's new open-source text-to-voice generative AI tool, VibeVoice, is an interesting one, as it can generate audio of up to 90 minutes in length with four distinct speakers. Naturally, with a script, VibeVoice becomes a viable tool for creating an audio podcast or other "expressive, long-form, multi-speaker conversational audio."

With quite a few AI-powered Text-to-Speech (TTS) systems and tools already out there, what separates VibeVoice from the pack is its ability to maintain audio fidelity, speaker consistency, and "natural turn-taking" over an extended period.

"VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details," the official description reads. VibeVoice offers a live demo for you to check out, along with the option to download it.

Continue reading: Microsoft's VibeVoice uses AI to create 90-minute podcasts with multiple speakers (full post)

NVIDIA GTC 2026 confirmed for March 16-19: expect Blackwell, Rubin, Feynman GPUs but no GeForce

Anthony Garreffa | Aug 31, 2025 2:55 AM CDT

NVIDIA will host its annual GPU Technology Conference (GTC) once again next year, from March 16-19, 2026, in San Jose, California. We should expect the big launch of its next-gen Rubin AI GPU architecture to take the stage.

NVIDIA GTC is all about GPU technology, with data center being NVIDIA's biggest revenue driver by far, with developers, researchers, and other professionals descending upon San Jose for the event. Last year at GTC 2025, NVIDIA unveiled its next steps in GPU architectures with the company confirming its Blackwell Ultra (NVL72), and Vera Rubin architecture for 2026, scaling to huge NVL144 systems for hyperscale deployments.

The company even teased its Rubin Ultra (NVL576) for 2027, and its next-next-gen Feynman architecture that will roll out in 2028. NVL72 = 72 AI GPUs per server, NVL144 = 144 AI GPUs per server, but with Rubin Ultra coming in 2027... NVL576 = an insane 576 AI GPUs per server, which is truly mind-blowing.

Continue reading: NVIDIA GTC 2026 confirmed for March 16-19: expect Blackwell, Rubin, Feynman GPUs but no GeForce (full post)

TSMC's first 1.4nm chip facility ahead of schedule, initial investment could be close to $50B

Anthony Garreffa | Aug 28, 2025 9:36 PM CDT

TSMC is reportedly ahead of schedule with its next-gen 1.4nm process node, where it will break ground on the new semiconductor fab with an initial investment that could reach $49 billion.

In a new report from Economic Daily News, we're hearing that TSMC suppliers have been informed of the changed plans, in case things need to be expedited to kick off 1.4nm production. TSMC's new "Fab 25" semiconductor facility will be built at the Central Taiwan Science Park near Taichung City, comprising four plants, with the first undergoing trial production toward the tail end of 2027.

TSMC could move into full-scale production of its next-gen 1.4nm process node (A14) in the second half of 2028, with the new node promising a 15% improvement in performance and a larger 30% gain in power efficiency. TSMC's other three plants will also work on 1.4nm wafer production, with the report adding that TSMC is eyeing an even more advanced 1nm node, which we've previously reported would be unleashed in 2029.

Continue reading: TSMC's first 1.4nm chip facility ahead of schedule, initial investment could be close to $50B (full post)

NVIDIA confirms next-gen Rubin AI GPUs with HBM4 are in the fab, volume production in 2H 2026

Anthony Garreffa | Aug 28, 2025 8:08 PM CDT

NVIDIA CEO Jensen Huang has confirmed that its next-generation Rubin AI GPUs are already in the fabs, and are aiming for volume production in the second half of 2026, after it posted another quarter of record revenue hitting $46.7 billion.

During its recent Q2 2026 earnings call, Jensen confirmed that Rubin and its five companion chips -- the Vera CPU, CX9 SuperNIC, Spectrum-X switch, silicon photonics processor, and NVLink 144 switch -- are in the fabs at TSMC right now, and will launch in 2026, with all of them ready for volume production in 2H 2026.

Jensen said: "The chips of the Rubin platform are in fab, the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale up switch, Spectrum-X scale out and scale across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack scale AI supercomputer with a mature and full-scale supply chain. This keeps us on track with our pace of an annual product cadence and continuous innovation across compute, networking, systems and software".

Continue reading: NVIDIA confirms next-gen Rubin AI GPUs with HBM4 are in the fab, volume production in 2H 2026 (full post)

NVIDIA details Blackwell Ultra GB300: dual-die design, 208B transistors, up to 288GB HBM3E

Anthony Garreffa | Aug 26, 2025 11:22 PM CDT

NVIDIA had quite a lot to detail and announce at Hot Chips 2025, one item being more details on its new Blackwell Ultra GB300 GPU, the fastest AI chip the company has ever made, at 50% faster than GB200.

The latest entry in the Blackwell AI GPU family before next-gen Rubin AI chips debut in 2026, Blackwell Ultra GB300 features two reticle-sized Blackwell GPU dies connected through NVIDIA's in-house NV-HBI high-bandwidth interface, making them appear as a single GPU.

The Blackwell Ultra GPU is made on the TSMC N4P process node (which is an optimized 5nm node for NVIDIA) with 208 billion transistors in total, beating out the 185 billion transistors in AMD's new flagship Instinct MI355X AI accelerator. The NV-HBI interface on Blackwell Ultra GB300 has 10TB/sec of bandwidth for the two GPU dies, while functioning as a single chip.

Continue reading: NVIDIA details Blackwell Ultra GB300: dual-die design, 208B transistors, up to 288GB HBM3E (full post)

NVIDIA's new Spectrum-X Ethernet: silicon photonics enters the chat, a game changer for AI

Anthony Garreffa | Aug 26, 2025 10:47 PM CDT

NVIDIA has just unveiled more details on its new Spectrum-X Ethernet Photonics interconnect, which uses next-gen silicon photonics to replace traditional optical interconnects, and it's a "game changer" for AI.

During the recent Hot Chips 2025 event, NVIDIA presented its next-gen Spectrum-X Ethernet Photonics interconnect, showing some rather huge improvements in scaling AI factories, and positioning the new interconnect as an effective and powerful replacement for the traditional optical interconnect.

NVIDIA went into detail on the need for co-packaged photonics and how massively it can scale AI factories, noting that AI factories use around 17x more optics power than a traditional cloud data center, mostly because of ever-larger GPU clusters that need dozens of optical transceivers to talk to other GPUs.

Continue reading: NVIDIA's new Spectrum-X Ethernet: silicon photonics enters the chat, a game changer for AI (full post)

AMD details Instinct MI350: 3D chiplet, 185B transistors, 288GB HBM3E, TSMC N3P node

Anthony Garreffa | Aug 26, 2025 10:18 PM CDT

AMD launched its new Instinct MI350 series AI accelerators two months ago, but the company has now detailed the MI350 chip at Hot Chips 2025, all fabbed on TSMC's bleeding-edge N3P process node.

AMD's new Instinct MI350 series AI accelerators feature the CDNA 4 architecture, bringing improved performance and efficiency for AI workloads, support for larger VRAM capacities at higher speeds, faster AI training and inference on large models thanks to boosted link speeds, and improved power efficiency.

The new flagship Instinct MI355X AI accelerator is liquid-cooled with up to 1400W of power, with its GPU running at 2400MHz, with up to 288GB of HBM3E memory.

Continue reading: AMD details Instinct MI350: 3D chiplet, 185B transistors, 288GB HBM3E, TSMC N3P node (full post)
