
Artificial Intelligence - Page 69

Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.



Sony rolls out PS5 update that gives your controller better sound and AI

Jak Connor | Mar 15, 2024 12:02 AM CDT

Sony has announced that PS5 system software version 24.02-09.00.00 contains controller improvements.


PlayStation 5 owners received a notification on Wednesday to download the latest version of their console's software, which makes the DualSense and DualSense Edge controller speakers louder when producing in-game sounds and voice chat. Additionally, Sony's website detailing update version 24.02-09.00.00 states the update will also enhance the microphone with a "new AI machine-learning mode" that reduces the background noise caused by button presses, improving the voice chat experience.

The update has also added some more customizability to the PS5 power indicator, which can be found in Settings > System > Beep and Light > Brightness. From there, users can select between dim, medium, and bright, which is the default setting. Other new features within the update are pointers and emoji reactions to the Share Screen feature, which viewers of your shared screen can send to the host. Notably, emojis and pointers can be switched off by the host within the Share Screen settings.

Continue reading: Sony rolls out PS5 update that gives your controller better sound and AI (full post)

SK hynix was the initial exclusive supplier of HBM3 to NVIDIA, Samsung and Micron catching up

Anthony Garreffa | Mar 14, 2024 6:30 PM CDT

NVIDIA and AMD's best AI GPUs use HBM3 memory, but with the introduction of the H200 AI GPU, NVIDIA will be the first to market with an HBM3E-based AI GPU.


HBM3E will be found inside NVIDIA's upcoming H200 and next-gen B100 AI GPUs, with TrendForce noting that the supply bottleneck in advanced CoWoS packaging technology, combined with HBM's long production cycle, extends the timeline from wafer start to final product past six months.

NVIDIA's current H100 AI GPU uses HBM3 memory primarily supplied by SK hynix, which has caused worldwide stock issues due to the crazy-high demand for AI GPUs. Samsung entered NVIDIA's supply chain with its new HBM3 memory in late 2023, and while the volumes were "initially minor," the deal "signifies its breakthrough in this segment," reports TrendForce.

Continue reading: SK hynix was the initial exclusive supplier of HBM3 to NVIDIA, Samsung and Micron catching up (full post)

US government warns AI may be an 'extinction-level threat' to humans

Jak Connor | Mar 14, 2024 12:16 AM CDT

A new report commissioned by the US State Department warns the exponential development of artificial intelligence may pose a significant risk to national security and even humanity.


The new report, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends the US government move "quickly and decisively" to implement measures that slow the development of artificial intelligence-powered systems, even to the point of potentially limiting the compute power used to train such models. The report goes on to say that if these measures aren't implemented, there is a chance of AI or Artificial General Intelligence (AGI) becoming an "extinction-level threat to the human species."

The US State Department report involved more than 200 experts in the field, including officials from big players in the AI game, such as OpenAI, Meta, Google, and Google DeepMind, as well as government workers. The report goes on to recommend the US government limit how much compute power any given party developing AI can have at one time, while also requiring AI companies to request permission from the US government before training any new AI model.

Continue reading: US government warns AI may be an 'extinction-level threat' to humans (full post)

OpenAI reveals its new text-to-video generator Sora will release 'later this year'

Jak Connor | Mar 14, 2024 12:01 AM CDT

It was only last month that OpenAI revealed its upcoming text-to-video generator platform named Sora, and the general reaction to the new AI-powered tool was a mix of awe and concern.


The upcoming AI-powered tool works much like OpenAI's extremely popular ChatGPT, but instead of responding to user prompts with text, it's capable of producing high-quality video, even to the point of photorealism. OpenAI took to its YouTube channel to share a video showcasing Sora's capabilities, and at first glance, some of the examples appear to have been shot with a real-life camera.

However, upon closer inspection of the examples, tell-tale signs of AI-generated content begin to stand out in physics-based movements such as people walking, hand gestures, and more. OpenAI is currently "red-teaming" Sora to iron out these issues before it's released to the public, meaning testers are pushing the AI model to its brink to bring vulnerabilities to light so they can be fixed.

Continue reading: OpenAI reveals its new text-to-video generator Sora will release 'later this year' (full post)

NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023

Anthony Garreffa | Mar 13, 2024 11:33 PM CDT

NVIDIA has had an absolute record-breaking last 12 months or so, but that momentum isn't slowing down... it's only ramping up... to a huge predicted $130 billion in revenue once we get to 2026.


A new report from Bloomberg predicts NVIDIA's revenue will swell to a huge $130 billion in 2026, a gargantuan $100 billion increase from 2021. The crazy numbers are fueled by insatiable AI GPU demand, which NVIDIA is absolutely dominating... and that's just with current-gen H100 AI GPU offerings, with its soon-to-be-released H200 AI GPU and next-gen Blackwell B100 AI GPU both right around the corner.

We already heard last year that NVIDIA was expected to generate $300 billion in AI-powered sales by 2027, so the leap from $130 billion to $300 billion in a single year -- 2026 to 2027 -- is absolutely mammoth. We've got market researchers like Omdia predicting NVIDIA will make $87 billion this year from its data center GPUs, and with next-gen AI GPUs right around the corner... well, NVIDIA is really just getting started.
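The projections above imply a few baselines worth spelling out. A minimal sketch using only the figures quoted in this article (all values are the article's projections, not confirmed financials):

```python
# All figures in billions of USD, taken from the article's quoted projections.
projected_2026 = 130
increase_from_2021 = 100
implied_2021 = projected_2026 - increase_from_2021   # baseline implied by the "$100B increase"

# The headline says 2026 is 5x higher than 2023.
implied_2023 = projected_2026 / 5

# The jump from the 2026 projection to the $300B-by-2027 forecast.
jump_2026_to_2027 = 300 / projected_2026             # roughly 2.3x in a single year

print(implied_2021, implied_2023, round(jump_2026_to_2027, 2))
```

So the article's own numbers put NVIDIA's implied 2021 baseline at $30 billion, its implied 2023 figure at $26 billion, and the 2026-to-2027 leap at roughly 2.3x.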

Continue reading: NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023 (full post)

Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs

Kosta Andreadis | Mar 13, 2024 10:31 PM CDT

We know that AI is big business, and that is why companies like Microsoft, Meta, Google, and Amazon are investing mind-boggling amounts of money in creating new infrastructure and AI-focused data centers. As per Meta's latest post regarding its "GenAI Infrastructure," the company has announced two "24,576 GPU data center scale clusters" to support current and next-gen AI models, research, and development.


That's over 24,000 NVIDIA Tensor Core H100 GPUs, with Meta adding that its AI infrastructure and data centers will house 350,000 NVIDIA H100 GPUs by the end of 2024. There's only one response to seeing that many GPUs: a comically long and cartoonish whistle or a Neo-style "Woah." Meta is going all in on AI, a market in which it wants to be the leader.
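The cluster math is easy to verify from the figures Meta quotes: two clusters of 24,576 GPUs each, against an end-of-2024 target of 350,000 H100s. A quick sketch using only those numbers:

```python
# Figures quoted from Meta's GenAI Infrastructure post.
gpus_per_cluster = 24_576
clusters = 2
total_now = clusters * gpus_per_cluster        # GPUs across both new clusters

target_2024 = 350_000                          # H100s planned by end of 2024
share_of_target = total_now / target_2024      # fraction of the year-end goal

print(total_now, round(share_of_target * 100, 1))
```

That works out to 49,152 GPUs across the two clusters, or roughly 14% of the 350,000-GPU year-end goal.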

"To lead in developing AI means leading investments in hardware infrastructure," the post reads. "Meta's long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from."

Continue reading: Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs (full post)

Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products

Anthony Garreffa | Mar 13, 2024 9:00 PM CDT

Samsung is reportedly using MUF technology for its next-gen HBM chip production, with the South Korean giant reportedly issuing purchasing orders for MUF tools.


The company says the "rumors" that it will use MUF technology are "not true," according to a Reuters report. HBM makers like SK hynix, Micron, and Samsung are all fighting for the future of HBM technology and future-gen AI GPUs, and it seems Samsung has its tail between its legs now.

One reason Samsung is falling behind is that it has stuck with its chip-making technology, non-conductive film (NCF), which has caused production issues. Meanwhile, HBM competitor and South Korean rival SK hynix has switched to mass reflow molded underfill (MR-MUF) to work around NCF's weaknesses, "according to analysts and industry watchers," reports Reuters.

Continue reading: Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products (full post)

JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed

Anthony Garreffa | Mar 13, 2024 7:02 PM CDT

HBM3E memory is about to be unleashed with NVIDIA's upcoming beefed-up H200 AI GPU, but now JEDEC has reportedly relaxed the rules for HBM4 memory configurations.


JEDEC has reportedly relaxed the package thickness limit for HBM4 to 775 micrometers for both 12-layer and 16-layer stacks, as manufacturing gets more complex at greater stack heights. The change makes things easier for HBM makers as they face insatiable demand for AI GPUs (now, and into the future with HBM4-powered chips).

HBM manufacturers, including SK hynix, Micron, and Samsung, had been poised to adopt hybrid bonding, a newer packaging technology that directly bonds the chip and wafer, to reduce HBM4's package thickness. However, with HBM4 being a new technology, hybrid bonding would increase pricing, making future HBM4-powered AI GPUs even more expensive.

Continue reading: JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed (full post)

Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4

Anthony Garreffa | Mar 13, 2024 6:36 PM CDT

Cerebras Systems just unveiled its new WSE-3 AI chip with 4 trillion transistors and 900,000 AI-optimized cores... as well as its new CS-3 AI supercomputer.


The new CS-3 AI supercomputer has enough power to train models that are 10x larger than GPT-4 and Gemini, thanks to its gigantic memory pool. Cerebras Systems' new CS-3 AI supercomputer has been designed for enterprise and hyperscale users, delivering huge performance efficiency gains over current AI GPUs.

The new Condor Galaxy 3 supercomputer features 64 x CS-3 AI systems, packing 8 Exaflops of AI compute performance, which is double the performance of the previous system, but at the same power... and the same cost.

Continue reading: Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4 (full post)

Cerebras WSE-3 wafer-scale AI chip: 57x bigger than largest GPU with 4 trillion transistors

Anthony Garreffa | Mar 13, 2024 6:03 PM CDT

Cerebras Systems has just revealed its third-generation wafer-scale engine (WSE) chip, WSE-3, which packs 4 trillion transistors and 900,000 AI-optimized cores.


The company hasn't stopped on its journey of AI processor releases, and the specifications for Cerebras' new WSE-3 chip are truly crazy: 4 trillion transistors, 900,000 AI-optimized cores, 125 petaflops of peak AI performance, and 44GB of on-chip SRAM, all made on TSMC's 5nm process node.

WSE-3 also features either 1.5TB, 12TB, or 1.2PB of external memory -- yeah, 1.2 petabytes of memory -- capable of training AI models with up to 24 trillion parameters. Cerebras says its new WSE-3 has a die size of 46,225mm2, which is an insane 57x larger than NVIDIA's current H100 AI GPU, which measures 826mm2.
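The die-size comparison can be sanity-checked from the two areas quoted above; with these exact figures the ratio lands at roughly 56x, close to the 57x headline number (the small gap presumably comes from rounding or a slightly different H100 die-area figure on Cerebras' side):

```python
# Die areas as quoted in this article, in square millimeters.
wse3_area_mm2 = 46_225   # Cerebras WSE-3
h100_area_mm2 = 826      # NVIDIA H100

ratio = wse3_area_mm2 / h100_area_mm2
print(round(ratio))      # roughly 56x using the quoted figures
```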

Continue reading: Cerebras WSE-3 wafer-scale AI chip: 57x bigger than largest GPU with 4 trillion transistors (full post)
