Artificial Intelligence - Page 47

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

NVIDIA Audio2Face uses AI to generate lip synching and facial animation, showcased in two games

Kosta Andreadis | Mar 19, 2024 12:04 AM CDT

Localization is a big thing in gaming, where a game's dialogue and presentation are recorded in various languages to reach a global audience. Like how Netflix dubs all of its original programming into multiple languages, localization for games is a complex process that requires accurately translating dialogue and text, maintaining dramatic or comedic tones, and then re-recording dialogue with voice actors from different countries.

Where games differ from movies or television drama is that all performances are digital and completely malleable, which is why you've got a situation where Sony's PlayStation-exclusive Ghost of Tsushima (which is coming soon to PC) offers both English and native Japanese language options with full lip-syncing and correct facial animation.

Although it's possible to do this for dozens of languages, Ghost of Tsushima limits full lip-syncing to two language options due to the complexity of the task and the animation involved. This is where generative AI, specifically NVIDIA's Audio2Face technology, steps in.

Continue reading: NVIDIA Audio2Face uses AI to generate lip synching and facial animation, showcased in two games (full post)

NVIDIA's Covert Protocol tech demo has you become a detective investigating AI Digital Humans

Kosta Andreadis | Mar 18, 2024 11:31 PM CDT

At GDC 2024, NVIDIA presented a new tech demo for a theoretical detective game, Covert Protocol, created in collaboration with Inworld. If you recall NVIDIA's recent AI collabs, which created cyberpunk-style tech demos featuring AI avatars you can interact with, this takes it all one step further. It presents an old-school adventure game where you're talking to characters to solve a mystery in a brand-new way.

In Covert Protocol, you play a private detective and explore a realistic environment, speaking to 'digital humans' as you piece together critical and key information. The game is powered by the Inworld AI Engine, which fully uses NVIDIA ACE (the technology we've been following over the past year) services to ensure that no two playthroughs are the same.

"This level of AI-driven interactivity and player agency opens up new possibilities for emergent gameplay," NVIDIA writes. "Players must think on their feet and adapt their strategies in real-time to navigate the intricacies of the game world."

Continue reading: NVIDIA's Covert Protocol tech demo has you become a detective investigating AI Digital Humans (full post)

NVIDIA creates Earth-2 digital twin: generative AI to simulate, visualize weather and climate

Anthony Garreffa | Mar 18, 2024 11:09 PM CDT

NVIDIA isn't just shaking up the GPU and AI game with its Blackwell AI GPU chips; it also announced Earth-2 today during the GPU Technology Conference (GTC).

NVIDIA announced its new Earth-2 climate digital twin cloud platform, which lets users simulate and visualize weather and climate at scales never seen before. Earth-2's new cloud APIs are available on NVIDIA DGX Cloud, allowing virtually anyone to create AI-powered emulations and interactive, high-resolution simulations ranging from the global atmosphere to localized cloud cover, all the way through to typhoons and mega-storms.

The new Earth-2 APIs offer AI models built on CorrDiff, NVIDIA's new generative AI technology that uses state-of-the-art diffusion modeling. CorrDiff generates images with 12.5x higher resolution than current numerical models while being 1000x faster and 3000x more energy efficient.

Continue reading: NVIDIA creates Earth-2 digital twin: generative AI to simulate, visualize weather and climate (full post)

GIGABYTE teases DGX, Superchips, PCIe cards based on NVIDIA's new Blackwell B200 AI GPUs

Anthony Garreffa | Mar 18, 2024 9:00 PM CDT

GIGABYTE is showing off its next-gen compact GPU cluster scalable unit: a new rack with GIGABYTE G593-SD2 servers, which have NVIDIA HGX H100 8-GPU designs and Intel 5th Gen Xeon Scalable processors inside.

The company has said it will support NVIDIA's new Blackwell GPU that succeeds Hopper, with enterprise servers "ready for the market according to NVIDIA's production schedule". The new NVIDIA B200 Tensor Core GPU for generative AI and accelerated computing will have "significant benefits," says GIGABYTE, especially in LLM inference workloads.

GIGABYTE will have products for HGX baseboards, Superchips, and PCIe cards with more details to be provided "later this year," adds the company.

Continue reading: GIGABYTE teases DGX, Superchips, PCIe cards based on NVIDIA's new Blackwell B200 AI GPUs (full post)

NVIDIA's new Blackwell-based DGX SuperPOD: ready for trillion-parameter scale for generative AI

Anthony Garreffa | Mar 18, 2024 5:22 PM CDT

NVIDIA has just revealed its new Blackwell B200 GPU, with its new DGX B200 systems ready for the future of AI supercomputing platforms for AI model training, fine-tuning, and inference.

The new NVIDIA DGX B200 is a sixth-generation system that's air-cooled in a traditional rack-mounted DGX design used worldwide. Inside, the new Blackwell GPU architecture powers the DGX B200 system using 8 x NVIDIA Blackwell GPUs and 2 x Intel 5th Gen Xeon CPUs.

Each DGX B200 system features up to 144 petaFLOPS of AI performance, an insane 1.4TB of GPU memory (HBM3E) with a bonkers 64TB/sec of memory bandwidth, driving 15x faster real-time inference for trillion-parameter models over the previous-gen Hopper GPU architecture.
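As a sanity check, the per-GPU numbers implied by those system totals can be worked out with simple division, assuming the totals are split evenly across the eight Blackwell GPUs:

```python
# Back-of-envelope per-GPU figures for the DGX B200, derived only from
# the system totals quoted above and its 8 Blackwell GPUs.
NUM_GPUS = 8
total_pflops = 144     # petaFLOPS of AI performance per system
total_hbm_gb = 1400    # 1.4TB of HBM3E per system, in GB
total_bw_tb_s = 64     # TB/sec of aggregate memory bandwidth

per_gpu_pflops = total_pflops / NUM_GPUS   # 18 petaFLOPS per GPU
per_gpu_hbm_gb = total_hbm_gb / NUM_GPUS   # 175 GB per GPU
per_gpu_bw_tb_s = total_bw_tb_s / NUM_GPUS  # 8 TB/sec per GPU

print(per_gpu_pflops, per_gpu_hbm_gb, per_gpu_bw_tb_s)
```

The 8TB/sec per-GPU figure lines up with the memory bandwidth quoted elsewhere on this page for a single B200.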

Continue reading: NVIDIA's new Blackwell-based DGX SuperPOD: ready for trillion-parameter scale for generative AI (full post)

NVIDIA GB200 Grace Blackwell Superchip: 864GB HBM3E memory, 16TB/sec memory bandwidth

Anthony Garreffa | Mar 18, 2024 4:48 PM CDT

NVIDIA has finally announced its new Blackwell GPU, DGX systems, and Superchip platforms, all powered by the Blackwell B200 AI GPU and Grace CPU.

The new NVIDIA GB200 Grace Blackwell Superchip is a processor for trillion-parameter-scale generative AI, with 40 petaFLOPS of AI performance and a whopping 864GB of ultra-fast HBM3E memory backed by an even more incredible 16TB/sec of memory bandwidth.

Each of the new GB200 Grace Blackwell Superchips features 2 x B200 AI GPUs and a single Grace CPU with 72 Arm-based Neoverse V2 cores. Alongside the 864GB HBM3E memory pool and 16TB/sec of memory bandwidth is a super-fast 3.6TB/sec NVLink connection.

Continue reading: NVIDIA GB200 Grace Blackwell Superchip: 864GB HBM3E memory, 16TB/sec memory bandwidth (full post)

NVIDIA's next-gen Blackwell AI GPU: multi-chip GPU die, 208 billion transistors, 192GB HBM3E

Anthony Garreffa | Mar 18, 2024 3:48 PM CDT

NVIDIA has just revealed its next-gen Blackwell GPU with a few new announcements: the B100, B200, and GB200 Superchip, and they're all mega-exciting.

The new NVIDIA B200 AI GPU features a whopping 208 billion transistors made on TSMC's new 4NP process node. It also has 192GB of ultra-fast HBM3E memory with 8TB/sec of memory bandwidth. NVIDIA isn't using a single GPU die here but a multi-die GPU, with only a thin line separating the two dies, a first for NVIDIA.

The two B100 GPU dies act as a single chip, with 10TB/sec of bandwidth between them, no memory locality issues, and no cache issues... the part simply thinks it's one GPU and does its (AI) thing at blistering speeds, thanks to NV-HBI (NVIDIA High Bandwidth Interface).

Continue reading: NVIDIA's next-gen Blackwell AI GPU: multi-chip GPU die, 208 billion transistors, 192GB HBM3E (full post)

ZOTAC's new HPC server can take 2 x Intel Xeon CPUs, 10 x GPUs and 12,000W of power

Anthony Garreffa | Mar 17, 2024 11:59 PM CDT

ZOTAC has just announced its expanded GPU Server Series systems. The first of the series is the Enterprise lineup, which offers companies affordable, high-performance computing solutions for countless applications, including AI.

These new ZOTAC systems are aimed at core-to-core inferencing, data visualization, model training, HPC modeling, simulation, and AI workloads. ZOTAC's new family of GPU servers comes in varying form factors and configurations, with the Tower Workstation and Rack Mount Servers both offered with either AMD EPYC or Intel Xeon CPUs.

There's support for a huge 10 x GPUs, and a modular design makes it easier to get in and configure the system. With a high space-to-performance ratio and industry-standard features like redundant power supplies and various cooling options, ZOTAC has your back with its new GPU Server Series.

Continue reading: ZOTAC's new HPC server can take 2 x Intel Xeon CPUs, 10 x GPUs and 12,000W of power (full post)

NVIDIA's new B100 AI GPU rumor: 2 x dies, 192GB of HBM3E memory, while B200 has 288GB HBM3E

Anthony Garreffa | Mar 17, 2024 4:55 PM CDT

NVIDIA will unveil its next-generation Blackwell GPU architecture at GTC 2024... tomorrow, if you can believe it, detailing its new B100 AI GPU and giving us a tease at the beefed-up B200 AI GPU expected in 2025.

In a new post on X by "XpeaGPU," we hear that NVIDIA's new B100 is truly a monster: 2 x GPU dies on the latest TSMC CoWoS-L (Chip-on-Wafer-on-Substrate-L) 2.5D packaging technology, which allows companies to design and manufacture larger chips. NVIDIA's next-gen B100 will have up to 192GB of ultra-fast HBM3E memory on 8-Hi stacks, while the beefed-up B200 AI GPU will feature a huge 288GB of HBM3E memory.

NVIDIA's current H100 AI GPU ships with 80GB of HBM3 memory (141GB of HBM3E for the upcoming H200), while its competitor, the AMD Instinct MI300X, ships with 192GB of HBM3 memory. The release of the B100 AI GPU will see NVIDIA match AMD on HBM capacity, but NVIDIA's new B100 will use the new ultra-fast HBM3E memory and will be the first GPU with HBM3E to market.
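For quick comparison, the capacities quoted here stack up as follows; this just restates the article's own numbers, with the Blackwell figures still unconfirmed rumors at this point:

```python
# HBM capacities quoted above; B100/B200 figures are unconfirmed rumors.
hbm_gb = {
    "NVIDIA H100 (HBM3)": 80,
    "AMD Instinct MI300X (HBM3)": 192,
    "NVIDIA B100 (HBM3E, rumored)": 192,
    "NVIDIA B200 (HBM3E, rumored)": 288,
}
for gpu, capacity in hbm_gb.items():
    print(f"{gpu}: {capacity} GB ({capacity / 80:.1f}x the 80GB H100)")
```

The rumored B200 would carry 3.6x the H100's 80GB, and 1.5x the MI300X.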

Continue reading: NVIDIA's new B100 AI GPU rumor: 2 x dies, 192GB of HBM3E memory, while B200 has 288GB HBM3E (full post)

Engineers release terrifying and impressive video of a robot talking like a human

Jak Connor | Mar 15, 2024 3:45 AM CDT

NVIDIA- and Microsoft-backed robotics company Figure has published a new video of what it calls "Figure 01", a humanoid robot powered by OpenAI technology and designed to simulate human speech.

Figure, which raised $675 million in Series B funding at a $2.6 billion valuation, received funding from the likes of Microsoft, OpenAI's Startup Fund, NVIDIA, Amazon founder Jeff Bezos, and more. The company's goal is to develop "next-generation AI models for humanoid robots," and judging by its latest video, it is well on the way to achieving that.

The new video shows an engineer chatting with Figure 01, with the engineer asking the humanoid robot, "Can I have something to eat?" to which the robot responded, "Sure thing," and then proceeded to hand over a red apple. Figure 01 was then asked why it "did what it just did" while it was picking up trash from a table. The robot explained that it gave the engineer the red apple as it was the "only edible item I could provide you with from the table."

Continue reading: Engineers release terrifying and impressive video of a robot talking like a human (full post)

AMD CEO Lisa Su says AI is the 'most important technology' to arrive in the last 50 years

Kosta Andreadis | Mar 15, 2024 1:02 AM CDT

Dr. Lisa Su, AMD's CEO, delivered the keynote at SXSW the other day, with her speech focused on the future of AI. "AI is the most important technology to come on the scene in the last 50 years," Su said, adding, "Companies that learn how to leverage AI are going to win over companies that are not."

With that, AMD is all in on AI, leveraging AI to help design better chips and software. Lisa Su added that AI is a productivity tool within AMD's walls. With 2024 ushering in the era of the AI PC, there are already plenty of options out there (both mobile and desktop) powered by AMD Ryzen CPUs with inbuilt NPU hardware and Radeon RX GPUs with dedicated AI hardware.

Like other big players, Lisa Su and AMD advocate for an open-source ecosystem for AI because "no one company has all the answers" when it comes to building the AI future; "it takes a village."

Continue reading: AMD CEO Lisa Su says AI is the 'most important technology' to arrive in the last 50 years (full post)

Sony rolls out PS5 update that gives your controller better sound and AI

Jak Connor | Mar 15, 2024 12:02 AM CDT

Sony has announced that PS5 system software version 24.02-09.00.00 contains controller improvements.

PlayStation 5 owners received a notification on Wednesday to download the console's latest system software, which makes the DualSense and DualSense Edge controller speakers louder when used for in-game sounds and voice chat. Additionally, Sony's website detailing update version 24.02-09.00.00 says it will also enhance the microphone with a "new AI machine-learning mode" that reduces background noise caused by button presses, improving the voice chat experience.

The update has also added some more customizability to the PS5 power indicator, which can be found in Settings > System > Beep and Light > Brightness. From there, users can select between dim, medium, and bright, which is the default setting. Other new features within the update are pointers and emoji reactions to the Share Screen feature, which viewers of your shared screen can send to the host. Notably, emojis and pointers can be switched off by the host within the Share Screen settings.

Continue reading: Sony rolls out PS5 update that gives your controller better sound and AI (full post)

SK hynix was the initial exclusive supplier of HBM3 to NVIDIA, Samsung and Micron catching up

Anthony Garreffa | Mar 14, 2024 6:30 PM CDT

NVIDIA and AMD's best AI GPUs use HBM3 memory, but with the introduction of the H200 AI GPU, NVIDIA will be the first to market with an HBM3E-based AI GPU.

HBM3E will be found inside NVIDIA's upcoming H200 and next-gen B100 AI GPUs, with TrendForce noting that the supply bottleneck from advanced CoWoS packaging technology and the long production cycle of HBM stretch the timeline from wafer start to final production to more than six months.

NVIDIA's current H100 AI GPU uses HBM3 memory primarily supplied by SK hynix, which has caused worldwide stock issues due to the crazy-high demand for AI GPUs. Samsung's entry into NVIDIA's supply chain with its HBM3 memory in late 2023, while "initially minor, signifies its breakthrough in this segment," reports TrendForce.

Continue reading: SK hynix was the initial exclusive supplier of HBM3 to NVIDIA, Samsung and Micron catching up (full post)

US government warns AI may be an 'extinction-level threat' to humans

Jak Connor | Mar 14, 2024 12:16 AM CDT

A new report commissioned by the US State Department warns the exponential development of artificial intelligence may pose a significant risk to national security and even humanity.

The new report, titled "An Action Plan to Increase the Safety and Security of Advanced AI," recommends the US government move "quickly and decisively" to implement measures that slow the development of advanced artificial intelligence systems, even to the point of potentially limiting the compute power used to train such models. The report goes on to say that without these measures, there is a chance of AI or Artificial General Intelligence (AGI) becoming an "extinction-level threat to the human species."

The US State Department report involved more than 200 experts in the field, including officials from big players in the AI game such as OpenAI, Meta, Google, and Google DeepMind, as well as government workers. It recommends the US government limit how much compute power any given party developing AI can use at one time, while also requiring AI companies to request permission from the US government to train any new AI model.

Continue reading: US government warns AI may be an 'extinction-level threat' to humans (full post)

OpenAI reveals its new text-to-video generator Sora will release 'later this year'

Jak Connor | Mar 14, 2024 12:01 AM CDT

It was only last month that OpenAI revealed its upcoming text-to-video generator platform named Sora, and the general reaction to the new AI-powered tool was that it's impressive yet concerning.

The upcoming AI-powered tool works much like OpenAI's extremely popular ChatGPT, but instead of responding to user prompts with text, it's capable of producing high-quality video, even to the point of photorealism. OpenAI took to its YouTube channel to share a video showcasing Sora's capabilities, and at first glance, some of the examples appear to be shot with a real-life camera.

However, upon closer inspection of the examples, tell-tale signs of AI-generated content begin to stand out in physics-based movements such as people walking, hand movements, and more. OpenAI is currently "red-teaming" Sora to iron out these issues before it's released to the public, which means people are pushing the AI model to its limits to bring vulnerabilities to light so they can be fixed.

Continue reading: OpenAI reveals its new text-to-video generator Sora will release 'later this year' (full post)

NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023

Anthony Garreffa | Mar 13, 2024 11:33 PM CDT

NVIDIA has had an absolute record-breaking last 12 months or so, but that momentum isn't slowing down... it's only ramping up... to a huge predicted $130 billion in revenue once we get to 2026.

A new report from Bloomberg predicts NVIDIA's revenue will swell to a huge $130 billion in 2026, a gargantuan $100 billion increase from 2021. The crazy numbers are fueled by insatiable AI GPU demand, which NVIDIA is absolutely dominating... and that's just with current-gen H100 AI GPU offerings, let alone the soon-to-be-released H200 AI GPU and next-gen Blackwell B100 AI GPU, both right around the corner.

We already heard last year that NVIDIA was expected to generate $300 billion in AI-powered sales by 2027, so the leap from $130 billion to $300 billion in a single year -- 2026 to 2027 -- is absolutely mammoth. Market researchers like Omdia predict NVIDIA will make $87 billion this year from its data center GPUs, and with next-gen AI GPUs right around the corner... well, NVIDIA is really just getting started.
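The headline multiples check out with trivial arithmetic on the figures quoted above (all in billions of dollars):

```python
# Sanity-checking the quoted projections (all figures in $ billions).
rev_2026 = 130   # Bloomberg projection for 2026
rev_2027 = 300   # earlier forecast for 2027

implied_2023 = rev_2026 / 5              # "5x higher than 2023" implies ~$26B
jump_2026_to_2027 = rev_2027 / rev_2026  # ~2.3x growth in a single year

print(implied_2023, round(jump_2026_to_2027, 1))
```

In other words, the "5x" framing implies roughly $26 billion of AI GPU revenue in 2023, and the 2027 forecast would require revenue to more than double again in a single year.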

Continue reading: NVIDIA projected to make $130 billion from AI GPUs in 2026, which is 5x higher than 2023 (full post)

Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs

Kosta Andreadis | Mar 13, 2024 10:31 PM CDT

We know that AI is big business, and that is why companies like Microsoft, Meta, Google, and Amazon are investing mind-boggling amounts of money in creating new infrastructure and AI-focused data centers. As per Meta's latest post regarding its "GenAI Infrastructure," the company has announced two "24,576 GPU data center scale clusters" to support current and next-gen AI models, research, and development.

That's over 24,000 NVIDIA Tensor Core H100 GPUs, with Meta adding that its AI infrastructure and data centers will house 350,000 NVIDIA H100 GPUs by the end of 2024. There's only one response to seeing that many GPUs: a comically long and cartoonish whistle or a Neo-style "Woah." Meta is going all in on AI, a market in which it wants to be the leader.
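Putting Meta's numbers together: two clusters of 24,576 H100s each, measured against the stated 350,000-GPU year-end target:

```python
# Simple totals from the figures Meta quotes in its GenAI Infrastructure post.
gpus_per_cluster = 24_576
clusters = 2
total_today = gpus_per_cluster * clusters   # 49,152 GPUs across both clusters

year_end_target = 350_000
share_of_target = total_today / year_end_target  # ~14% of the planned fleet

print(total_today, f"{share_of_target:.1%}")
```

So the two announced clusters account for roughly 14% of the H100 fleet Meta says it will have by the end of 2024.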

"To lead in developing AI means leading investments in hardware infrastructure," the pot writes. "Meta's long-term vision is to build artificial general intelligence (AGI) that is open and built responsibly so that it can be widely available for everyone to benefit from."

Continue reading: Meta has two new AI data centers equipped with over 24,000 NVIDIA H100 GPUs (full post)

Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products

Anthony Garreffa | Mar 13, 2024 9:00 PM CDT

Samsung is reportedly using MUF technology for its next-gen HBM chip production, with the South Korean giant reportedly issuing purchasing orders for MUF tools.

The company says the "rumors" that it will use MUF technology are "not true," according to Reuters, which broke the news. HBM makers SK hynix, Micron, and Samsung are all fighting for the future of HBM technology and future-gen AI GPUs, and it seems Samsung has its tail between its legs now.

One reason Samsung is falling behind is that it has stuck with its existing chip-making technology, non-conductive film (NCF), which has caused production issues. Meanwhile, HBM competitor and South Korean rival SK hynix has switched to mass reflow molded underfill (MR-MUF) to work around NCF's weaknesses, "according to analysts and industry watchers," reports Reuters.

Continue reading: Samsung to use MR-MUF technology, like SK hynix, for its future-gen HBM products (full post)

JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed

Anthony Garreffa | Mar 13, 2024 7:02 PM CDT

HBM3E memory is about to be unleashed with NVIDIA's upcoming beefed-up H200 AI GPU, but now JEDEC has reportedly relaxed the rules for HBM4 memory configurations.

JEDEC has reportedly relaxed the package thickness limit for HBM4 to 775 micrometers for both 12-layer and 16-layer HBM4 stacks, since manufacturing gets more complex at tighter thickness limits. The move makes life easier for HBM makers as they face insatiable demand for AI GPUs (now, and into the future with HBM4-powered chips).
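To see why a fixed package height squeezes taller stacks, divide the 775-micrometer budget by the layer count; this is a rough illustration only, ignoring the base logic die, bonding gaps, and underfill:

```python
# Rough per-layer height budget inside a 775 um HBM4 package.
# Illustrative only: ignores the base logic die, bond gaps, and fill.
PACKAGE_HEIGHT_UM = 775
for layers in (12, 16):
    per_layer = PACKAGE_HEIGHT_UM / layers
    print(f"{layers}-Hi stack: ~{per_layer:.1f} um per DRAM layer")
```

Going from 12-Hi to 16-Hi in the same package cuts the per-layer budget from roughly 65 to 48 micrometers, which is why thinner dies or denser bonding were assumed to be needed.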

HBM manufacturers, including SK hynix, Micron, and Samsung, were expected to need hybrid bonding, a newer packaging technology that directly bonds the chip and wafer, to squeeze HBM4 stacks into a thinner package. However, with HBM4 being a new technology, hybrid bonding would increase pricing, making future HBM4-powered AI GPUs even more expensive; the relaxed spec means 16-Hi stacks can be built with current bonding technology instead.

Continue reading: JEDEC chills on next-gen HBM4 thickness: 16-Hi stacks with current bonding tech allowed (full post)

Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4

Anthony Garreffa | Mar 13, 2024 6:36 PM CDT

Cerebras Systems just unveiled its new WSE-3 AI chip with 4 trillion transistors and 900,000 AI-optimized cores... as well as its new CS-3 AI supercomputer.

The new CS-3 AI supercomputer has enough power to train models that are 10x larger than GPT-4 and Gemini, which is thanks to its gigantic memory pool. Cerebras Systems' new CS-3 AI supercomputer has been designed for enterprise and hyperscale users, delivering huge performance efficiency gains over current AI GPUs.

The new Condor Galaxy 3 supercomputer features 64 x CS-3 AI systems, packing 8 Exaflops of AI compute performance, which is double the performance of the previous system, but at the same power... and the same cost.

Continue reading: Cerebras Systems unveils CS-3 AI supercomputer: can train models that are 10x bigger than GPT-4 (full post)
