Artificial Intelligence News - Page 5

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, impressive AI demos & plenty more - Page 5.


NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023

Anthony Garreffa | Mar 13, 2024 11:33 PM CDT

NVIDIA has had an absolutely record-breaking last 12 months or so, and that momentum isn't slowing down... it's only ramping up, toward a huge predicted $130 billion in revenue once we get to 2026.


A new report from Bloomberg predicts NVIDIA's revenue will swell to a huge $130 billion in 2026, a gargantuan $100 billion increase from 2021. The crazy numbers are fueled by insatiable AI GPU demand, which NVIDIA is absolutely dominating... and that's just with its current-gen H100 AI GPU, let alone the soon-to-be-released H200 and next-gen Blackwell B100 AI GPUs, both right around the corner.

We already heard last year that NVIDIA was expected to generate $300 billion in AI-powered sales by 2027, so the leap from $130 billion to $300 billion in a single year -- from 2026 to 2027 -- is absolutely mammoth. Market researchers like Omdia predict NVIDIA will make $87 billion this year from its data center GPUs, and with next-gen AI GPUs right around the corner... well, NVIDIA is really just getting started.

Continue reading: NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023 (full post)

Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs

Kosta Andreadis | Mar 13, 2024 10:31 PM CDT

We know that AI is big business, and that is why companies like Microsoft, Meta, Google, and Amazon are investing mind-boggling amounts of money in creating new infrastructure and AI-focused data centers. As per Meta's latest post regarding its "GenAI Infrastructure," the company has announced two "24,576 GPU data center scale clusters" to support current and next-gen AI models, research, and development.


That's over 24,000 NVIDIA H100 Tensor Core GPUs, with Meta adding that its AI infrastructure and data centers will house 350,000 NVIDIA H100 GPUs by the end of 2024. There's only one response to seeing that many GPUs: a comically long and cartoonish whistle, or a Neo-style "Whoa." Meta is going all in on AI, a market in which it wants to be the leader.

"To lead in developing AI means leading investments in hardware infrastructure," the pot writes. "Meta's long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from."

Continue reading: Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs (full post)

Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products

Anthony Garreffa | Mar 13, 2024 9:00 PM CDT

Samsung is reportedly using MUF technology for its next-gen HBM chip production, with the South Korean giant reportedly issuing purchasing orders for MUF tools.


However, Samsung has denied the report, telling Reuters that the "rumors" it will use MUF technology are "not true." HBM makers SK hynix, Micron, and Samsung are all fighting for the future of HBM technology and future-gen AI GPUs, and right now it seems Samsung has its tail between its legs.

One reason Samsung is falling behind is that it has stuck with its existing chip-making technology, non-conductive film (NCF), which has caused production issues. Meanwhile, HBM competitor and South Korean rival SK hynix has switched to mass reflow molded underfill (MR-MUF) to work around NCF's weaknesses, "according to analysts and industry watchers," reports Reuters.

Continue reading: Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products (full post)

JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed

Anthony Garreffa | Mar 13, 2024 7:02 PM CDT

HBM3E memory is about to be unleashed with NVIDIA's upcoming beefed-up H200 AI GPU, but now JEDEC has reportedly relaxed the rules for HBM4 memory configurations.


JEDEC has reportedly relaxed the package thickness limit for HBM4 to 775 micrometers for both 12-layer and 16-layer HBM4 stacks, as manufacturing gets more complex at tighter thicknesses. The change makes life easier for HBM makers as they face insatiable demand for AI GPUs, both now and into the future with HBM4-powered chips.

HBM manufacturers, including SK hynix, Micron, and Samsung, were poised to use hybrid bonding -- a newer packaging technology that directly bonds the chip and wafer -- to reduce the package thickness of HBM4. However, hybrid bonding is still a new technology, and using it would increase pricing, making future HBM4-powered AI GPUs even more expensive; the relaxed thickness limit means 16-Hi stacks can stick with current bonding techniques instead.

Continue reading: JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed (full post)

Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4

Anthony Garreffa | Mar 13, 2024 6:36 PM CDT

Cerebras Systems just unveiled its new WSE-3 AI chip with 4 trillion transistors and 900,000 AI-optimized cores... as well as its new CS-3 AI supercomputer.


The new CS-3 AI supercomputer has enough power to train models 10x larger than GPT-4 and Gemini, thanks to its gigantic memory pool. Cerebras Systems designed the CS-3 for enterprise and hyperscale users, delivering huge performance efficiency gains over current AI GPUs.

The new Condor Galaxy 3 supercomputer features 64 CS-3 AI systems, packing 8 exaflops of AI compute performance -- double the performance of the previous system, at the same power... and the same cost.

Continue reading: Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4 (full post)

Cerebras WSE-3 wafer-scale AI chip: 57x bigger than largest GPU with 4 trillion transistors

Anthony Garreffa | Mar 13, 2024 6:03 PM CDT

Cerebras Systems has just revealed its third-generation wafer-scale engine (WSE) chip, WSE-3, which packs 4 trillion transistors and 900,000 AI-optimized cores.


The company hasn't stopped on its journey of AI processor releases, and the new WSE-3 chip packs some truly crazy specifications: 4 trillion transistors, 900,000 AI-optimized cores, 125 petaflops of peak AI performance, and 44GB of on-chip SRAM, all made on TSMC's 5nm process node.

WSE-3 also features either 1.5TB, 12TB, or 1.2PB of external memory -- yeah, 1.2 petabytes of memory -- capable of training AI models with up to 24 trillion parameters. Cerebras says its new WSE-3 has a die size of 46,225mm2, which is an insane 57x larger than NVIDIA's current H100 AI GPU, which measures 826mm2.

Continue reading: Cerebras WSE-3 wafer-scale AI chip: 57x bigger than largest GPU with 4 trillion transistors (full post)

One of Copilot Pro's best features has arrived in the free version of the AI, to our surprise

Darren Allan | Mar 13, 2024 11:00 AM CDT

Microsoft's Copilot just got a lot better for users of the free version of the AI with the introduction of the GPT-4 Turbo model.


Previously, GPT-4 Turbo was only available to paying subscribers - those on Copilot Pro. However, as Mikhail Parakhin, head of Advertising and Web Services at Microsoft, made clear on X (formerly Twitter), the more advanced model is now available to those on the freebie version of Copilot.

Free users were on vanilla GPT-4 up until now, with the Turbo version - which provides faster and more accurate responses - effectively paywalled.

Continue reading: One of Copilot Pro's best features has arrived in the free version of the AI, to our surprise (full post)

Researchers break into OpenAI's and Google's AI models revealing hidden secrets

Jak Connor | Mar 13, 2024 6:16 AM CDT

A new paper penned by researchers from Google DeepMind, ETH Zurich, University of Washington, OpenAI, and McGill University reveals that OpenAI and Google's AI models have been cracked open.


The new paper reveals that thirteen computer scientists from the aforementioned institutions were able to launch an attack on OpenAI's and Google's closed AI services, revealing a significant hidden portion of the underlying transformer models. More specifically, the attack recovered the embedding projection layer of a transformer model through API queries. Notably, the attack technique was originally proposed back in 2016 and has since been built upon to crack OpenAI's and Google's AI models.

The team made Google and OpenAI aware of the attack, and both companies responded by implementing mitigations for that specific type of attack. Furthermore, the team decided not to publish some of its findings, such as the exact size of OpenAI's GPT-3.5-turbo model, as this information was deemed harmful to the product: bad actors could learn aspects of the model, such as its total parameter count and weights.

Continue reading: Researchers break into OpenAI's and Google's AI models revealing hidden secrets (full post)

Micron HBM3E for NVIDIA's beefed-up H200 AI GPU has shocked HBM competitors like SK hynix

Anthony Garreffa | Mar 12, 2024 11:09 PM CDT

Micron was the first to announce mass production of its new ultra-fast HBM3E memory in February 2024, putting the company ahead of HBM rivals SK hynix and Samsung... and leaving its competitors shocked.


The US memory company announced it will provide HBM3E memory chips for NVIDIA's upcoming beefed-up H200 AI GPU, unlike the H200's predecessor, the H100 AI GPU, which featured HBM3 memory.

Micron will make its new HBM3E memory chips on its 1b-nanometer DRAM node, comparable to a 12nm node and the same class of technology that HBM leader SK hynix uses for its HBM. According to Korea JoongAng Daily, Micron is "technically ahead" of HBM competitor Samsung, which is still using 1a-nanometer technology, the equivalent of 14nm technology.

Continue reading: Micron HBM3E for NVIDIA's beefed-up H200 AI GPU has shocked HBM competitors like SK hynix (full post)

AI company responds to outrage of male humanoid robot 'groping' female reporter

Jak Connor | Mar 12, 2024 5:01 AM CDT

Saudi Arabia robotics company QSS unveiled its humanoid robot called Mohammad at a premiere described as the "meeting place for the global artificial intelligence ecosystem." During the unveiling, Mohammad appeared to reach out and try to grab a female reporter's backside.


The DeepFest event in Riyadh was held last week, and during the event, Rawya Kassem, a female reporter for Al Arabiya, was standing in front of the humanoid robot while addressing the audience. Video of the incident shows the robot reaching out its hand in what appears to be an attempt to touch Kassem's backside. The reporter quickly moves back away from Mohammad, raising her palm towards it, before continuing to address the crowd.

It wasn't long before users on X began to accuse the humanoid robot of attempting to grope the reporter, but QSS has responded to the claims, telling Metro that Mohammad is built to help out in hazardous situations and may have been attempting to encourage Kassem to step further back on the stage to prevent her from falling off its edge. Additionally, QSS stated it conducted a thorough review of the footage and the circumstances surrounding the incident and found there were "no deviations from expected behavior."

Continue reading: AI company responds to outrage of male humanoid robot 'groping' female reporter (full post)