Artificial Intelligence - Page 69
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
Sony rolls out PS5 update that gives your controller better sound and AI
Sony has announced that PS5 system software version 24.02-09.00.00 contains controller improvements.
PlayStation 5 owners received a notification on Wednesday to download the latest system software, which makes the DualSense and DualSense Edge controller speakers louder when used to produce in-game sounds and voice chat. Additionally, on the page detailing update version 24.02-09.00.00, Sony writes that the update will also enhance the microphone through a "new AI machine-learning mode" that reduces background noise caused by button presses, thus improving the voice chat experience.
The update also adds more customizability to the PS5 power indicator, which can be found in Settings > System > Beep and Light > Brightness. From there, users can select between dim, medium, and bright, the last of which is the default setting. Other new features in the update include pointers and emoji reactions for the Share Screen feature, which viewers of a shared screen can send to the host. Notably, emojis and pointers can be switched off by the host within the Share Screen settings.
SK hynix was the initial exclusive supplier of HBM3 to NVIDIA, with Samsung and Micron catching up
NVIDIA and AMD's best AI GPUs use HBM3 memory, but with the introduction of the H200 AI GPU, NVIDIA will be the first to market with an HBM3E-based AI GPU.
HBM3E will be found inside NVIDIA's upcoming H200 and next-gen B100 AI GPUs, with TrendForce noting that the supply bottleneck in advanced CoWoS packaging technology and the long production cycle of HBM stretch the timeline from wafer start to final product past six months.
NVIDIA's current H100 AI GPU uses HBM3 memory primarily supplied by SK hynix, which has caused worldwide stock issues due to the crazy-high demand for AI GPUs. Samsung's entry into NVIDIA's supply chain with its new HBM3 memory in late 2023, while "initially minor, signifies its breakthrough in this segment," reports TrendForce.
US government warns AI may be an 'extinction-level threat' to humans
A new report commissioned by the US State Department warns the exponential development of artificial intelligence may pose a significant risk to national security and even humanity.
The new report, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends the US government move "quickly and decisively" to implement measures that rein in the development of artificial intelligence-powered systems, even to the point of potentially limiting the compute power used to train such models. The report goes on to say that if these measures aren't implemented, there is a chance of AI or Artificial General Intelligence (AGI) becoming an "extinction-level threat to the human species."
The US State Department report involved more than 200 experts in the field, including officials from big players in the AI game, such as OpenAI, Meta, Google, and Google DeepMind, as well as government workers. The report goes on to recommend the US government limit how much compute power any given party developing AI can use at one time, while also requiring AI companies to request permission from the US government to train any new AI model.
OpenAI reveals its new text-to-video generator Sora will release 'later this year'
It was only last month that OpenAI revealed its upcoming text-to-video generator platform, named Sora, and the general reaction was that the new AI-powered tool is as impressive as it is concerning.
The upcoming AI-powered tool works the same way as OpenAI's extremely popular ChatGPT, but instead of the chatbot responding to user prompts with text, it's capable of producing high-quality video, even to the point of photorealism. OpenAI took to its YouTube channel to share a video showcasing Sora's capabilities, and at first glance it appears some of the examples shown were shot with a real-life camera.
However, upon closer inspection of the examples, tell-tale signs of AI-generated content begin to stand out, such as unnatural physics in movements like people walking, hand motions, and more. OpenAI is currently "red-teaming" Sora to iron out these issues before it's released to the public, which means people are pushing the AI model to its brink to bring these vulnerabilities to light so they can be fixed.
NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023
NVIDIA has had an absolutely record-breaking last 12 months or so, but that momentum isn't slowing down... it's only ramping up... to a huge predicted $130 billion in revenue once we get to 2026.
A new report from Bloomberg predicts NVIDIA's revenue will swell to a huge $130 billion in 2026, a gargantuan $100 billion increase from 2021. The crazy numbers are fueled by the insatiable AI GPU demand, which NVIDIA is absolutely dominating... and that's just with its current-gen H100 AI GPU offerings, let alone its soon-to-be-released H200 AI GPU and next-gen Blackwell B100 AI GPU, both right around the corner.
We already heard last year that NVIDIA was expected to generate $300 billion in AI-powered sales by 2027, so the leap from $130 billion to $300 billion in a single year, from 2026 to 2027, is absolutely mammoth. We've got market researchers like Omdia predicting NVIDIA will make $87 billion this year from its data center GPUs, and with next-gen AI GPUs right around the corner... well, NVIDIA is really just getting started.
Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs
We know that AI is big business, and that is why companies like Microsoft, Meta, Google, and Amazon are investing mind-boggling amounts of money in creating new infrastructure and AI-focused data centers. As per Meta's latest post regarding its "GenAI Infrastructure," the company has announced two "24,576 GPU data center scale clusters" to support current and next-gen AI models, research, and development.
That's over 24,000 NVIDIA H100 Tensor Core GPUs per cluster, or more than 49,000 combined, with Meta adding that its AI infrastructure and data centers will house 350,000 NVIDIA H100 GPUs by the end of 2024. There's only one response to seeing that many GPUs: a comically long and cartoonish whistle or a Neo-style "Woah." Meta is going all in on AI, a market in which it wants to be the leader.
"To lead in developing AI means leading investments in hardware infrastructure," the pot writes. "Meta's long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from."
Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products
Samsung is reportedly using MUF technology for its next-gen HBM chip production, with the South Korean giant said to be issuing purchase orders for MUF tools.
The company, however, told Reuters, which reported the news, that the "rumors" it will use MUF technology are "not true." HBM makers SK hynix, Micron, and Samsung are all fighting for the future of HBM technology and future-gen AI GPUs, and it seems Samsung has its tail between its legs right now.
One reason Samsung is falling behind is that it has stuck with its chip-making technology, non-conductive film (NCF), which has caused production issues. Meanwhile, HBM competitor and South Korean rival SK hynix has switched to mass reflow molded underfill (MR-MUF) to work around NCF's weaknesses, "according to analysts and industry watchers," reports Reuters.
JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed
HBM3E memory is about to be unleashed with NVIDIA's upcoming beefed-up H200 AI GPU, but now JEDEC has reportedly relaxed the rules for HBM4 memory configurations.
JEDEC has reportedly relaxed the package thickness requirement for HBM4, allowing 775 micrometers for both 12-layer and 16-layer HBM4 stacks, as manufacturing gets more complex at tighter thickness limits; loosening the spec makes production easier, especially as HBM makers contend with insatiable demand for AI GPUs (now, and into the future with HBM4-powered chips).
HBM manufacturers, including SK hynix, Micron, and Samsung, were poised to use hybrid bonding, a newer packaging technology that directly bonds the chip and wafer, to reduce the package thickness of HBM4. However, hybrid bonding is a new technology that would increase pricing, making future HBM4-powered AI GPUs even more expensive, so the relaxed thickness limit lets 16-Hi stacks be built with current bonding technology instead.
Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4
Cerebras Systems just unveiled its new WSE-3 AI chip with 4 trillion transistors and 900,000 AI-optimized cores... as well as its new CS-3 AI supercomputer.
The new CS-3 AI supercomputer has enough power to train models that are 10x larger than GPT-4 and Gemini, which is thanks to its gigantic memory pool. Cerebras Systems' new CS-3 AI supercomputer has been designed for enterprise and hyperscale users, delivering huge performance efficiency gains over current AI GPUs.
The new Condor Galaxy 3 supercomputer features 64 x CS-3 AI systems, packing 8 Exaflops of AI compute performance, which is double the performance of the previous system, but at the same power... and the same cost.
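As a quick sanity check on those figures, using only the numbers quoted in this article (the 125 petaflops per-system figure comes from Cerebras' WSE-3 specs, detailed in the next story), a few lines of Python show the math lines up:

    # Consistency check on the Condor Galaxy 3 figures quoted above.
    petaflops_per_cs3 = 125   # peak AI performance of one CS-3 system (per the WSE-3 specs)
    systems = 64              # CS-3 AI systems in Condor Galaxy 3
    total_exaflops = systems * petaflops_per_cs3 / 1000
    print(total_exaflops)     # 8.0 -- matches the quoted 8 Exaflops of AI compute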
Cerebras WSE-3 wafer-scale AI chip: 57x bigger than largest GPU with 4 trillion transistors
Cerebras Systems has just revealed its third-generation wafer-scale engine (WSE) chip, WSE-3, which packs 4 trillion transistors and 900,000 AI-optimized cores.
The company hasn't stopped on its journey of AI processor releases, and its new WSE-3 chip has some truly crazy specifications: 4 trillion transistors, 900,000 AI-optimized cores, 125 petaflops of peak AI performance, and 44GB of on-chip SRAM, all built on TSMC's 5nm process node.
WSE-3 also features either 1.5TB, 12TB, or 1.2PB of external memory -- yeah, 1.2 petabytes of memory -- capable of training AI models with up to 24 trillion parameters. Cerebras says its new WSE-3 has a die size of 46,225mm2, which is an insane 57x larger than NVIDIA's current H100 AI GPU, which measures 826mm2.
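For the die-size comparison, a quick back-of-the-envelope check using the figures quoted above gives roughly 56x; Cerebras' own 57x claim appears to rest on a slightly smaller H100 die-size figure:

    # Rough check of the WSE-3 vs. H100 die-size comparison, using the article's numbers.
    wse3_area_mm2 = 46225   # Cerebras WSE-3 die size
    h100_area_mm2 = 826     # NVIDIA H100 die size as quoted above
    print(round(wse3_area_mm2 / h100_area_mm2))   # ~56; Cerebras rounds its claim to 57x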