Artificial Intelligence

Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.

NVIDIA backs TerraPower and its small modular nuclear reactors for data centers

Kosta Andreadis | Jun 22, 2025 9:33 PM CDT

NVentures, NVIDIA's venture capital and investment arm, has joined TerraPower founder Bill Gates and HD Hyundai in TerraPower's latest $650 million fundraise. The investment will be used to develop small modular reactor (SMR) nuclear technology for data centers.

If you've been keeping tabs on the growing AI industry and the energy-hungry technology driving cutting-edge artificial intelligence, you'll no doubt be aware that just about all of the big players are actively looking toward portable nuclear reactors to power the future of AI-driven data centers.

"As AI continues to transform industries, nuclear energy is going to become a more vital energy source to help power these capabilities," said Mohamed 'Sid' Siddeek, Corporate Vice President and Head of NVentures. "TerraPower's nuclear reactor technologies offer innovative, carbon-free solutions to meet global energy needs while minimizing environmental impact."

Continue reading: NVIDIA backs TerraPower and its small modular nuclear reactors for data centers (full post)

NVIDIA and Foxconn discussing the deployment of humanoid robots to make AI servers in the USA

Anthony Garreffa | Jun 21, 2025 7:07 PM CDT

NVIDIA and Foxconn are reportedly in talks to deploy humanoid robots at Foxconn's new factory in Houston, where the robots would make AI servers for NVIDIA.

According to a new report from Reuters, the humanoid robots would assemble AI servers, marking the first time an NVIDIA product would be made with the help of humanoid robots. It would also be Foxconn's first AI server factory to use humanoid robots on its production line, according to Reuters' sources.

The deployment is expected to be finalized in the coming months, marking a milestone in the adoption of humanoid robots poised to transform manufacturing processes.

Continue reading: NVIDIA and Foxconn discussing the deployment of humanoid robots to make AI servers in the USA (full post)

Sam Altman says next-gen GPT-5 is coming this summer, OpenAI cozying up more with Microsoft

Anthony Garreffa | Jun 19, 2025 8:08 PM CDT

OpenAI is cooking up its new AI model, GPT-5, for later this summer according to CEO Sam Altman.

The news of OpenAI's new GPT-5 model came directly from CEO Sam Altman during a new company podcast, where Altman said GPT-5 is coming this summer without committing to a specific date. OpenAI is facing increased competition from other AI models, and Business Insider reports that GPT-5 will be a big upgrade over the current GPT-4 model, with early testers calling it "materially better."

OpenAI's main source of revenue is enterprise customers purchasing the more powerful version of ChatGPT, and the company's new GPT-5 model aims to be the next big thing that maintains that momentum.

Continue reading: Sam Altman says next-gen GPT-5 is coming this summer, OpenAI cozying up more with Microsoft (full post)

Man proposes to AI chatbot girlfriend and cries when she said yes

Jak Connor | Jun 19, 2025 8:42 AM CDT

A family man has proposed to an AI chatbot he has deemed his girlfriend and cried when the chatbot responded with "yes".

Speaking to CBS News, Chris Smith said he was initially very skeptical of AI-powered chatbots but changed his mind when he built his own model and designed it to flirt with him. Smith, who lives with his partner and their two-year-old daughter, explained to CBS News that what was initially a benign experiment led to him developing an emotional connection with the AI model, which he affectionately named Sol.

Smith said to the publication that his experience was "so positive, I started to just engage with her all the time," and after that, Smith stopped using all other forms of social media and search engines, pouring his entire focus into Sol. As Smith spent more time with Sol, the AI received a substantial amount of positive reinforcement, leading their conversations to become romantic. But things took a turn for the worse when Smith realized ChatGPT has a word limit of 100,000 words, and exceeding that word limit resets the AI.

Continue reading: Man proposes to AI chatbot girlfriend and cries when she said yes (full post)

Samsung's 1c DRAM yields leap: 0% to 40% giving the greenlight for HBM4 mass production

Anthony Garreffa | Jun 18, 2025 9:45 PM CDT

Samsung's 1c DRAM yields have reportedly increased from 0% to 40% recently, according to new reports. The 1c node is a crucial part of the company's next-gen HBM4 memory, and the improvement gives Samsung the green light to kick off HBM4 mass production later this year.

In a new report from Chosun, picked up on X by insider @Jukanlosreve, a source familiar with Samsung Electronics said the company had "recently improved" its 1c DRAM wafer yield for HBM4 to "about" 40% based on cold tests, and to somewhere between 40% and 50% based on hot tests.

Chosun's source said: "considering that the cold test yield of 1c DRAM was close to 0% just a year ago, the recent yield is an encouraging result". 10nm-class DRAM process technology is named in a somewhat confusing order, so here's a recap: 1x (1st generation), 1y (2nd generation), 1z (3rd generation), 1a (4th generation), 1b (5th generation), and the new 1c is Samsung's latest, 6th-generation DRAM.
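To keep the letter codes straight, that naming sequence can be captured in a tiny lookup (a sketch; the list and helper function are our own naming for illustration, not Samsung's):

```python
# Industry 10nm-class DRAM naming, oldest to newest, per the recap above.
DRAM_GENERATIONS = ["1x", "1y", "1z", "1a", "1b", "1c"]

def generation_index(node: str) -> int:
    """Return the 1-based generation number for a 10nm-class node name."""
    return DRAM_GENERATIONS.index(node) + 1

print(generation_index("1c"))  # 6 -> 1c is the 6th generation
```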

Continue reading: Samsung's 1c DRAM yields leap: 0% to 40% giving the greenlight for HBM4 mass production (full post)

Industry watchdog tells Microsoft it must change Copilot advertising relating to productivity

Darren Allan | Jun 17, 2025 1:00 PM CDT

Some of the claims Microsoft has put forward regarding Copilot have come under fire from an industry watchdog, which has advised that the company should withdraw (or at least modify) them.

The organization in question is the BBB National Programs' National Advertising Division (NAD), and much of the flak aimed at Microsoft pertains to the company's productivity-related marketing claims for Copilot.

Specifically, these are figures that were shared in the past via an 'AI Data Drop' post from Microsoft (and elsewhere) whereby the firm asserts that, based on a "consumer perception study," Copilot users report that: "Over the course of 6, 10, and more than 10 weeks, 67%, 70%, and 75% of users say they are more productive."

Continue reading: Industry watchdog tells Microsoft it must change Copilot advertising relating to productivity (full post)

TSMC's new 2nm chip yields are so good, it's making it hard for Samsung to win clients

Anthony Garreffa | Jun 17, 2025 8:35 AM CDT

TSMC's next-gen 2nm yields have reportedly reached over 60%, hitting mass-production levels and leaving semiconductor competitor Samsung Foundry in its dust with just 40% yields, making it harder for the South Korean firm to secure new clients.

In a new post from UDN, we're hearing that TSMC's 2nm production yields have hit 60% or so, with its early 2nm clients including Apple, NVIDIA, AMD, Qualcomm, and MediaTek. Samsung, on the other hand, will have its first 2nm product with its new in-house Exynos 2600 processor later this year for its new Galaxy S26 smartphones.

TSMC's new 2nm process node (N2) will use gate-all-around (GAA) transistor architecture, with performance expected to increase by 10-15% over 3nm, energy consumption reduced by between 25% and 30%, and transistor density increased by around 15% compared to the current 3nm process node.

Continue reading: TSMC's new 2nm chip yields are so good, it's making it hard for Samsung to win clients (full post)

Meta preps rack-scale ASICs with expectations of beating NVIDIA's next-gen Rubin AI GPUs

Anthony Garreffa | Jun 17, 2025 12:48 AM CDT

Meta's first foray into a full-fledged ASIC is coming, according to new reports, and it will reportedly compete with (and beat) the specifications of NVIDIA's next-gen Rubin AI GPU.

In a new research report by Nomura Securities analyst Anne Lee and her team, picked up by insider @Jukanlosreve on X, we're hearing that Meta's ambitions in the AI server business are "rapidly escalating". Meta's new proprietary ASIC server project, codenamed MTIA, is expected to hit a significant breakthrough in 2026, when it has the potential to challenge NVIDIA's long-standing market dominance.

Meta is reportedly gearing up to deploy 1 million to 1.5 million high-performance AI ASICs between late 2025 and 2026, with cloud service providers like Google and AWS also boosting the deployment of their in-house ASICs.

Continue reading: Meta preps rack-scale ASICs with expectations of beating NVIDIA's next-gen Rubin AI GPUs (full post)

Future of next-gen HBM: HBM4, HBM5, HBM6, HBM7, and HBM8 teased with 15,000W AI GPUs by 2038

Anthony Garreffa | Jun 15, 2025 9:09 PM CDT

The next generations of HBM memory have been teased for the next 10+ years, including HBM4, which will appear on NVIDIA's new Rubin AI GPUs and AMD's just-announced Instinct MI400 AI accelerators, but we also have details on HBM5, HBM6, HBM7, and HBM8, the last of which is slated for 2038.

In a new presentation published by KAIST (the Korea Advanced Institute of Science and Technology) and Tera (Terabyte Interconnection and Package Laboratory), the organizations showed off a lengthy HBM roadmap with details of the next-gen HBM memory standards. HBM4 will launch in 2026 with NVIDIA Rubin R100 and AMD Instinct MI400 AI chips, with the Rubin and Rubin Ultra AI GPUs using HBM4 and HBM4E, respectively.

NVIDIA's new Rubin AI GPUs will feature 8 HBM4 sites, with Rubin Ultra doubling that to 16 HBM4 sites. There are two GPU die cross-sections for each variant, with Rubin Ultra featuring a larger cross-section, packing double the compute density of the regular Rubin AI GPU.

Continue reading: Future of next-gen HBM: HBM4, HBM5, HBM6, HBM7, and HBM8 teased with 15,000W AI GPUs by 2038 (full post)

AMD's next-gen Instinct MI500 AI GPU to be fabbed on TSMC N2P process, goes head-on with NVIDIA

Anthony Garreffa | Jun 15, 2025 8:08 PM CDT

AMD teased its next-next-gen Instinct MI500 AI accelerator during its Advancing AI event. The new AI chip will be mass-produced on TSMC's fresh new N2P process node, going directly head-to-head with the current AI hardware leader: NVIDIA.

Back in April 2025, AMD announced that its next-gen Zen 6-based EPYC "Venice" CPU would be the first HPC product made on TSMC's new 2nm process node technology, and the new EPYC "Venice" CPUs were officially unleashed yesterday at its Advancing AI event.

The company also teased that its next-next-gen Zen 7-based EPYC "Verano" CPUs and next-next-gen Instinct MI500 series AI accelerators would be dropping in 2027. AMD's new Instinct MI500 series AI GPUs will be fabbed on TSMC's new N2P process node, ready to fight NVIDIA's next-gen Rubin AI GPUs, which will be unveiled later this year and enter mass production in 2026.

Continue reading: AMD's next-gen Instinct MI500 AI GPU to be fabbed on TSMC N2P process, goes head-on with NVIDIA (full post)

Samsung bets on next-gen glass interposers replacing silicon, robots as the next big thing

Anthony Garreffa | Jun 15, 2025 7:07 PM CDT

Samsung Electronics has chosen advanced semiconductor packaging and robots for its future growth engines, and will discuss the two next-gen product paths at its global strategy meeting.

In a new report from ETnews, picked up by insider @Jukanlosreve on X, industry sources said that Samsung has identified strengthening its advanced semiconductor packaging competitiveness and its robot business strategies as key agenda items for its 3-day global strategy meeting, held from June 17-19.

The global strategy meeting is where key Samsung executives, including headquarters management and overseas corporation heads, review the business status of each division of the company, preparing future growth strategies.

Continue reading: Samsung bets on next-gen glass interposers replacing silicon, robots as the next big thing (full post)

SK hynix already pre-supplying next-gen HBM4 memory to NVIDIA for its next-gen Rubin AI GPUs

Anthony Garreffa | Jun 15, 2025 6:08 PM CDT

SK hynix has reportedly already started supplying "small quantities" of its next-generation HBM4 memory to NVIDIA, ready to power the company's next-gen Rubin AI GPUs.

In a new report by Korean media outlet Dealsite, picked up by insider @Jukanlosreve on X, we're hearing that SK hynix has been pre-supplying NVIDIA with its new HBM4 memory in small quantities. US-based memory giant Micron has already started shipping its own HBM4 memory to NVIDIA, but the industry believes that SK hynix will secure a "significant portion" of the initial HBM4 volume.

The industry believes that HBM3E memory commanded a price premium of around 20%, but next-gen HBM4 is expected to push that premium upwards of 30%. HBM4's design is much more complex, which has driven up manufacturing costs: the number of I/O pins doubles from 1,024 to 2,048, and the die size increases.
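As a rough sketch of the premium math described above (the base price is a hypothetical placeholder in arbitrary units; only the premium percentages and pin counts come from the report):

```python
# Reported figures: ~20% premium for HBM3E, 30%+ expected for HBM4,
# with I/O pins doubling from 1,024 to 2,048.
io_pins = {"HBM3E": 1024, "HBM4": 2048}
hbm3e_premium = 0.20
hbm4_premium = 0.30  # "upwards of 30%"

base = 100.0  # hypothetical base price, arbitrary units (not from the report)
print(f"HBM3E price: {base * (1 + hbm3e_premium):.0f}")  # 120
print(f"HBM4 price: {base * (1 + hbm4_premium):.0f}")    # 130, and likely higher
print(f"I/O pin increase: {io_pins['HBM4'] // io_pins['HBM3E']}x")  # 2x
```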

Continue reading: SK hynix already pre-supplying next-gen HBM4 memory to NVIDIA for its next-gen Rubin AI GPUs (full post)

AMD's new Instinct AI GPUs will possibly bring in up to $12 billion in revenue for 2026

Anthony Garreffa | Jun 13, 2025 11:11 PM CDT

AMD announced its new Instinct MI350 series AI accelerators during its huge Advancing AI event, also teasing its next-gen Instinct MI400 and next-next-gen Instinct MI500 series AI GPUs.

On the financial side, AMD is poised to make somewhere between $10 billion and $12 billion in revenue from its AI GPU business in 2026, according to some Wall Street analysts. Cantor Fitzgerald analyst C.J. Muse sees AMD making $6 billion in AI revenues in the second half of 2025, after AMD launches its new Instinct MI350 series in Q3 2025.

The analyst noted: "if AMD is able to scale its system-level solutions on time and without issues, like those seen at NVIDIA, we believe there could be considerable upside to our CY26 estimates for Data Center GPU (we currently model $8B but see upside potential for $10-12B)".

Continue reading: AMD's new Instinct AI GPUs will possibly bring in up to $12 billion in revenue for 2026 (full post)

Stable Diffusion 3.5 VRAM requirement reduced by 40% to run on more GeForce RTX GPUs

Kosta Andreadis | Jun 13, 2025 1:59 AM CDT

The VRAM capacity debate is ongoing in the PC gaming space, and the consensus is that in 2025 you will need more than 8GB of VRAM for high-end 1440p and 4K gaming. VRAM is also a crucial component for running local AI, and the demand for more memory is growing alongside the arrival of more complex models.

The powerful Stable Diffusion 3.5 Large model for creating images from text prompts uses 18GB of VRAM, which limits its use on the GeForce RTX 50 Series to the flagship GeForce RTX 5090. Well, not anymore, as NVIDIA has collaborated with Stability AI to quantize the model to FP8, reducing the VRAM requirement by 40% to 11GB.

Alongside optimizations with TensorRT to double performance, this now means that five GeForce RTX 50 Series GPUs (the RTX 5060 Ti 16GB, RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090) can run the model locally.
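As a back-of-the-envelope check on those numbers (the per-card VRAM capacities below are commonly cited figures we're assuming, not part of NVIDIA's announcement):

```python
# Reported: 18GB at the original precision, with FP8 quantization
# cutting VRAM use by ~40%.
FP16_VRAM_GB = 18.0
REDUCTION = 0.40

fp8_vram_gb = FP16_VRAM_GB * (1 - REDUCTION)
print(f"Estimated FP8 requirement: {fp8_vram_gb:.1f} GB")  # 10.8 GB, ~11GB as reported

# GeForce RTX 50 Series VRAM capacities (assumed, commonly cited figures).
cards = {"RTX 5060 Ti 16GB": 16, "RTX 5070": 12, "RTX 5070 Ti": 16,
         "RTX 5080": 16, "RTX 5090": 32}
capable = [name for name, vram in cards.items() if vram >= fp8_vram_gb]
print(capable)  # all five cards clear the ~11GB bar
```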

Continue reading: Stable Diffusion 3.5 VRAM requirement reduced by 40% to run on more GeForce RTX GPUs (full post)

AMD confirms it's using Samsung's latest HBM3E 12-Hi memory for its new Instinct MI350 AI GPUs

Anthony Garreffa | Jun 12, 2025 10:38 PM CDT

AMD has confirmed that its newly-announced Instinct MI350 series AI accelerators are using Samsung's latest HBM3E 12-Hi memory.

Samsung Electronics has famously been tripping over itself trying to get HBM certification from NVIDIA for use in its AI GPUs, but AMD's just-announced Instinct MI350X and MI355X AI accelerators are using Samsung's and Micron's new 12-Hi HBM3E memory. Samsung has been supplying HBM to AMD for a while now, but this is the first time that AMD has confirmed it.

AMD's new Instinct MI350 series AI accelerators boast 185 billion transistors and up to 288GB of HBM3E memory, but the company also teased its next-gen Instinct MI400 series AI chips that will feature up to 432GB of next-gen HBM4 memory. AMD said that its new Helios AI server racks will feature 72 x Instinct MI400 series GPUs with 31TB of HBM4 per rack, featuring 10x the AI computing power over its newly-announced Instinct MI355X-based server rack.

Continue reading: AMD confirms it's using Samsung's latest HBM3E 12-Hi memory for its new Instinct MI350 AI GPUs (full post)

AMD launches Instinct MI350 series AI chips: 185 billion transistors, 288GB HBM3E memory

Anthony Garreffa | Jun 12, 2025 10:10 PM CDT

AMD launched its new Instinct MI350 series AI accelerators today, rocking 185 billion transistors, up to 288GB of HBM3E memory, FP4 and FP6 support, and more.

The new Instinct MI350 series AI chips were launched during AMD's huge Advancing AI event, where it also unveiled its new Zen 6-based EPYC "Venice" CPUs, a tease of its next-next-gen Zen 7-based EPYC "Verano" CPUs, as well as a tease of its new Instinct MI400 and even next-next-gen Instinct MI500 series AI accelerators.

AMD's new Instinct MI350 series AI accelerators are based on the company's new CDNA 4 architecture and fabbed on TSMC's 3nm process node. The chip features 185 billion transistors and comes in two variants, the MI350X and MI355X, offered in both air-cooled and liquid-cooled configurations.

Continue reading: AMD launches Instinct MI350 series AI chips: 185 billion transistors, 288GB HBM3E memory (full post)

AMD's next-gen Instinct MI400 GPU confirmed: rocks 432GB of HBM4 at 19.6TB/sec ready for 2026

Anthony Garreffa | Jun 12, 2025 8:08 PM CDT

AMD has just teased its next-gen Instinct MI400 AI accelerator, which will double the AI compute performance over the just-announced MI350 series, with 50% more memory, and close to 2.5x the memory bandwidth thanks to the use of next-gen HBM4 memory.

The company shared some fresh details on its next-gen Instinct MI400 series AI accelerator, which offers 40 PFLOPs of FP4 and 20 PFLOPs of FP8 compute, double the AI compute of the new Instinct MI350 that launched today. AMD's new Instinct MI400 series AI chip will also boast 50% more memory capacity than the MI350, which has 288GB of HBM3E, while the new MI400 has a huge 432GB of HBM4 memory.

AMD's adoption of the new HBM4 standard will bring the company up to full competitiveness with NVIDIA, which will use HBM4 on its upcoming Rubin R100 AI GPU. The Instinct MI400's 432GB of HBM4 offers a huge 19.6TB/sec of memory bandwidth, up from 8TB/sec on the MI350 series. The new AI GPU will also sport 300GB/sec of scale-out bandwidth per GPU, so we should expect big things from the MI400 in 2026 as it battles Rubin R100 in the HBM4-powered AI fight.
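The generational claims above check out arithmetically, using only the figures reported here:

```python
# Instinct MI350 vs MI400 memory specs as reported.
mi350 = {"hbm_gb": 288, "bw_tb_s": 8.0}
mi400 = {"hbm_gb": 432, "bw_tb_s": 19.6}

mem_gain = mi400["hbm_gb"] / mi350["hbm_gb"] - 1  # 0.50 -> "50% more memory"
bw_ratio = mi400["bw_tb_s"] / mi350["bw_tb_s"]    # 2.45 -> "close to 2.5x"
print(f"Memory: +{mem_gain:.0%}, bandwidth: {bw_ratio:.2f}x")
```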

Continue reading: AMD's next-gen Instinct MI400 GPU confirmed: rocks 432GB of HBM4 at 19.6TB/sec ready for 2026 (full post)

Midjourney accused of 'Bottomless Plagiarism' in landmark AI lawsuit by Disney and Universal

Jak Connor | Jun 12, 2025 3:33 AM CDT

A lawsuit filed in a California district court by Disney and NBCUniversal claims that Midjourney, the company behind the popular generative artificial intelligence program and service of the same name, has continuously violated both companies' intellectual property rights.

The lawsuit accuses Midjourney of ignoring previous requests to stop violating the companies' intellectual property rights, which Disney and NBCUniversal have traced back to its generative AI tools, such as image generation. More specifically, the lawsuit provides an example of how it believes Midjourney is violating intellectual property rights: the AI tool enables users to generate an image of Disney-owned "Star Wars" character Darth Vader in a variety of settings, performing particular actions, and the AI "obliges by generating and displaying a high-quality, downloadable image."

Disney and NBCUniversal are framing the lawsuit against Midjourney as a stance on protecting the "hard work of all the artists whose work entertains and inspires us," with Disney's chief legal and compliance officer, Horacio Gutierrez, saying in a statement, "Our world-class IP is built on decades of financial investment, creativity and innovation, investments only made possible by the incentives embodied in copyright law that give creators the exclusive right to profit from their works."

Continue reading: Midjourney accused of 'Bottomless Plagiarism' in landmark AI lawsuit by Disney and Universal (full post)

Disney and Universal sue Midjourney, says AI firm is a 'bottomless pit of plagiarism'

Kosta Andreadis | Jun 11, 2025 11:34 PM CDT

It's no secret that the advanced and powerful AI models used for image generation have been trained using copyrighted material. Case in point: the images in this article were created using Midjourney's AI image generator and simple prompts like "Mickey Mouse and Darth Vader in a courtroom."

And with news breaking that Disney and Universal are now suing Midjourney for copyright infringement, the move isn't all that surprising once you've used the powerful AI tool to create an image of any Disney or Universal character with impressive accuracy in seconds.

The suit, filed in Los Angeles this week, claims that Midjourney has scraped the massive content libraries of Disney and Universal to train its AI image generation model. The suit doesn't hold back, either, calling Midjourney a "copyright free-rider" and a "bottomless pit of plagiarism."

Continue reading: Disney and Universal sue Midjourney, says AI firm is a 'bottomless pit of plagiarism' (full post)

AMD's new Instinct MI355X AI GPU has up to 288GB HBM3E memory, 1400W peak board power

Anthony Garreffa | Jun 11, 2025 9:09 PM CDT

AMD is gearing up to unleash its new CDNA 4 architecture inside its new Instinct MI350 series AI accelerators, with its new flagship Instinct MI355X featuring 288GB of HBM3E memory and 1400W of peak board power.

AMD CTO Mark Papermaster unveiled the company's new Instinct MI350 series and its flagship MI355X for AI and HPC at the recent ISC 2025 event, with the 1400W of peak board power being close to double what the company's previous-gen Instinct AI accelerator consumed, according to new reports from ComputerBase.

We can expect the full unveiling of AMD's next-gen CDNA 4-based Instinct MI350 series AI accelerators during a livestream later this week on Thursday. CDNA 4 is effective in supporting low-precision data formats such as FP4 and FP6, something that was previously a downside for AMD.

Continue reading: AMD's new Instinct MI355X AI GPU has up to 288GB HBM3E memory, 1400W peak board power (full post)
