Artificial Intelligence - Page 68
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
NVIDIA's Covert Protocol tech demo has you become a detective investigating AI Digital Humans
At GDC 2024, NVIDIA presented Covert Protocol, a new tech demo for a theoretical detective game created in collaboration with Inworld. If you recall NVIDIA's recent AI collaborations, which produced cyberpunk-style tech demos featuring AI avatars you can interact with, this takes it all one step further: an old-school adventure game where you talk to characters to solve a mystery in a brand-new way.
In Covert Protocol, you play a private detective exploring a realistic environment, speaking to 'digital humans' as you piece together critical information. The game is powered by the Inworld AI Engine, which makes full use of NVIDIA ACE services (the technology we've been following over the past year) to ensure that no two playthroughs are the same.
"This level of AI-driven interactivity and player agency opens up new possibilities for emergent gameplay," NVIDIA writes. "Players must think on their feet and adapt their strategies in real-time to navigate the intricacies of the game world."
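Mechanically, demos like this pair speech-to-text and text-to-speech with a language model conditioned on a character persona and the clues the player has already uncovered. The sketch below illustrates only the clue-gating idea; the class, names, and keyword-based responses are hypothetical stand-ins for the Inworld AI Engine and NVIDIA ACE services, whose actual APIs are not shown here.

```python
# Conceptual sketch of an AI-NPC dialogue loop as seen in Covert Protocol.
# The rule-based `respond` below is a hypothetical stand-in for the
# Inworld / NVIDIA ACE stack, which handles speech and generative dialogue.

class DetectiveNPC:
    def __init__(self, name, persona, secrets):
        self.name = name
        self.persona = persona      # character background fed to the model
        self.secrets = secrets      # clue -> prerequisite clue (or None)
        self.revealed = set()

    def respond(self, player_line, known_clues):
        """Reveal a clue only if the player already holds its prerequisite."""
        for clue, prerequisite in self.secrets.items():
            mentioned = clue in player_line.lower()
            unlocked = prerequisite is None or prerequisite in known_clues
            if mentioned and unlocked and clue not in self.revealed:
                self.revealed.add(clue)
                return f"{self.name}: Alright... about the {clue}: here's what I know."
        return f"{self.name}: I'm not sure what you're getting at, detective."

concierge = DetectiveNPC(
    name="Concierge",
    persona="Nervous hotel concierge who saw something on the night shift.",
    secrets={"ledger": None, "penthouse": "ledger"},
)

known = set()
print(concierge.respond("Tell me about the penthouse.", known))  # locked: no ledger yet
print(concierge.respond("Show me the ledger.", known))           # unlocks the ledger clue
known.add("ledger")
print(concierge.respond("Now, the penthouse?", known))           # prerequisite met
```

The real demo replaces the keyword matching with generative dialogue and voice, which is what makes every playthrough different.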
NVIDIA creates Earth-2 digital twin: generative AI to simulate, visualize weather and climate
NVIDIA isn't just changing up the GPU and AI GPU game with its Blackwell chips; it also announced Earth-2 today during the GPU Technology Conference (GTC).
NVIDIA announced its new Earth-2 climate digital twin cloud platform, which lets users simulate and visualize weather and climate at scales never seen before. Earth-2's new cloud APIs are available on NVIDIA DGX Cloud, allowing virtually anyone to create AI-powered emulations for interactive, high-resolution simulations ranging from the global atmosphere to localized cloud cover, all the way through to typhoons and mega-storms.
The new Earth-2 APIs offer AI models built on CorrDiff, NVIDIA's new generative AI technology that uses state-of-the-art diffusion modeling to generate images at 12.5x higher resolution than current numerical models, 1,000x faster and 3,000x more energy efficiently.
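For context on the 12.5x figure: NVIDIA's demonstrated CorrDiff case downscales roughly 25km global forecast grids to roughly 2km regional output, and because the weather fields are 2D, each linear refinement squares the number of grid cells. A back-of-envelope sketch, using the grid spacings from the announcement rather than anything measured here:

```python
# Back-of-envelope check on the CorrDiff resolution claim.
# Grid spacings are the ones NVIDIA quoted for its demonstrated case.

coarse_km = 25.0   # input numerical-model grid spacing
fine_km = 2.0      # CorrDiff output grid spacing

linear_ratio = coarse_km / fine_km
cells_ratio = linear_ratio ** 2   # 2D field: cell count grows quadratically

print(f"linear resolution gain: {linear_ratio:.1f}x")   # 12.5x
print(f"grid cells per field:   {cells_ratio:.2f}x")    # 156.25x
```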
GIGABYTE teases DGX, Superchips, PCIe cards based on NVIDIA's new Blackwell B200 AI GPUs
GIGABYTE is showing off its next-gen compact GPU cluster scalable unit: a new rack with GIGABYTE G593-SD2 servers, which have NVIDIA HGX H100 8-GPU designs and Intel 5th Gen Xeon Scalable processors inside.
The company has said it will support NVIDIA's new Blackwell GPU that succeeds Hopper, with enterprise servers "ready for the market according to NVIDIA's production schedule". The new NVIDIA B200 Tensor Core GPU for generative AI and accelerated computing will have "significant benefits," says GIGABYTE, especially in LLM inference workloads.
GIGABYTE will have products for HGX baseboards, Superchips, and PCIe cards with more details to be provided "later this year," adds the company.
NVIDIA's new Blackwell-based DGX SuperPOD: ready for trillion-parameter scale for generative AI
NVIDIA has just revealed its new Blackwell B200 GPU, along with new DGX B200 systems ready for the future of AI supercomputing: AI model training, fine-tuning, and inference.
The new NVIDIA DGX B200 is a sixth-generation, air-cooled system in the traditional rack-mounted DGX design used worldwide. Inside, the new Blackwell GPU architecture powers the system with 8 x NVIDIA Blackwell GPUs and 2 x Intel 5th Gen Xeon CPUs.
Each DGX B200 system features up to 144 petaFLOPS of AI performance and an insane 1.4TB of GPU memory (HBM3E) with a bonkers 64TB/sec of memory bandwidth, driving 15x faster real-time inference for trillion-parameter models over the previous-gen Hopper GPU architecture.
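Those system-level numbers follow from the per-GPU B200 specs: eight GPUs at 8TB/sec of HBM3E bandwidth each gives the 64TB/sec aggregate, while the 1.4TB total capacity works out to somewhat under the 192GB per-GPU maximum. A quick sanity check, treating the announced figures as given:

```python
# Sanity-check the DGX B200 aggregate figures against per-GPU B200 specs.
gpus = 8
per_gpu_bandwidth_tbs = 8.0   # B200 HBM3E bandwidth, TB/s (per NVIDIA)
total_memory_tb = 1.4         # announced total GPU memory (rounded figure)

aggregate_bandwidth_tbs = gpus * per_gpu_bandwidth_tbs
per_gpu_memory_gb = total_memory_tb * 1000 / gpus

print(f"{aggregate_bandwidth_tbs:.0f} TB/s aggregate bandwidth")  # 64 TB/s
print(f"~{per_gpu_memory_gb:.0f} GB per GPU")                     # ~175 GB
```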
NVIDIA GB200 Grace Blackwell Superchip: 864GB of memory, 16TB/sec memory bandwidth
NVIDIA has finally announced its new Blackwell GPU, DGX system, and Superchip platforms, all powered by the Blackwell B200 AI GPU and Grace CPU.
The new NVIDIA GB200 Grace Blackwell Superchip is a processor for trillion-parameter-scale generative AI, with 40 petaFLOPS of AI performance and a whopping 864GB of memory (the GPUs' ultra-fast HBM3E plus the Grace CPU's LPDDR5X), backed by an even more incredible 16TB/sec of HBM3E memory bandwidth.
Each new GB200 Grace Blackwell Superchip features 2 x B200 AI GPUs and a single Grace CPU with 72 Arm-based Neoverse V2 cores. Alongside the 864GB memory pool and 16TB/sec of memory bandwidth, there's a super-fast 3.6TB/sec NVLink connection.
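It's worth seeing how the 864GB pool is assembled: two B200s at 192GB of HBM3E each, plus the Grace CPU's own LPDDR5X, assumed here to be the 480GB configuration from Grace's published spec. The 16TB/sec figure is simply the two GPUs' HBM3E bandwidth combined:

```python
# Decompose the GB200 Grace Blackwell Superchip memory figures.
b200_hbm3e_gb = 192        # per B200 GPU (per NVIDIA)
grace_lpddr5x_gb = 480     # Grace CPU memory; assumed from Grace's spec
num_gpus = 2

total_gb = num_gpus * b200_hbm3e_gb + grace_lpddr5x_gb
print(f"total Superchip memory: {total_gb} GB")          # 864 GB

per_gpu_bw_tbs = 8.0       # B200 HBM3E bandwidth, TB/s
total_bw_tbs = num_gpus * per_gpu_bw_tbs
print(f"combined HBM3E bandwidth: {total_bw_tbs:.0f} TB/s")  # 16 TB/s
```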
NVIDIA's next-gen Blackwell AI GPU: multi-die GPU, 208 billion transistors, 192GB HBM3E
NVIDIA has just revealed its next-gen Blackwell GPU with a few new announcements: the B100 and B200 AI GPUs and the GB200 Superchip, and they're all mega-exciting.
The new NVIDIA B200 AI GPU features a whopping 208 billion transistors made on TSMC's 4NP process node, plus 192GB of ultra-fast HBM3E memory with 8TB/sec of memory bandwidth. NVIDIA is not using a single GPU die here but a multi-die GPU, with a thin line between the two dies, a first for NVIDIA.
Thanks to NV-HBI (the NVIDIA High-Bandwidth Interface) and its 10TB/sec of bandwidth between the GPU dies, the two dies behave as a single chip: no memory-locality issues, no cache issues... software just sees one GPU doing its (AI) thing at blistering speeds.
ZOTAC's new HPC server can take 2 x Intel Xeon CPUs, 10 x GPUs and 12,000W of power
ZOTAC has just announced its expanded GPU Server Series systems. The first of the series is the Enterprise lineup, which offers companies affordable, high-performance computing solutions for countless applications, including AI.
These new ZOTAC systems are aimed at core-to-core inferencing, data visualization, model training, HPC modeling, simulation, and AI workloads. ZOTAC's new family of GPU servers comes in varying form factors and configurations, with the Tower Workstation and Rack Mount Servers both offered with either AMD EPYC or Intel Xeon CPUs.
There's support for up to 10 x GPUs in a modular design that makes the system easier to access and configure, along with a high space-to-performance ratio and industry-standard features like redundant power supplies and various cooling options. ZOTAC has your back with its new GPU Server Series.
NVIDIA's new B100 AI GPU rumor: 2 x dies, 192GB of HBM3E memory, while B200 has 288GB HBM3E
NVIDIA will unveil its next-generation Blackwell GPU architecture at GTC 2024... tomorrow, if you can believe it, detailing its new B100 AI GPU and teasing the beefed-up B200 AI GPU expected in 2025.
In a new post on X by "XpeaGPU," we hear that NVIDIA's new B100 is truly a monster: 2 x GPU dies on the latest TSMC CoWoS-L (Chip-on-Wafer-on-Substrate-L) 2.5D packaging technology, which allows companies to design and manufacture larger chips. NVIDIA's next-gen B100 will have up to 192GB of ultra-fast HBM3E memory on 8-Hi stacks, while the beefed-up B200 AI GPU will feature a huge 288GB of HBM3E memory.
NVIDIA's current H100 AI GPU ships with 80GB or 141GB of HBM3 memory, while its competitor, the AMD Instinct MI300X, ships with 192GB of HBM3 memory. The release of the B100 AI GPU will see NVIDIA match AMD on HBM capacity, but NVIDIA's new B100 will use the new ultra-fast HBM3E memory and will be the first GPU with HBM3E to market.
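The rumored capacities line up neatly with HBM3E stack math: a 24Gb (3GB) DRAM die stacked 8-Hi gives a 24GB stack, so eight stack sites yield 192GB, while 12-Hi stacks on the same eight sites would give 288GB. Note the stack-site count is an inference from the rumor, not an NVIDIA-confirmed layout:

```python
# HBM3E capacity arithmetic behind the rumored B100/B200 figures.
die_gb = 3     # one 24Gb HBM3E DRAM die = 3GB
sites = 8      # HBM stack placements assumed around the GPU (inferred)

eight_hi = sites * (8 * die_gb)     # 8-Hi stacks, as rumored for B100
twelve_hi = sites * (12 * die_gb)   # 12-Hi stacks, matching the B200 rumor

print(eight_hi, twelve_hi)  # 192 288
```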
Engineers release terrifying and impressive video of a robot talking like a human
Figure, the NVIDIA- and Microsoft-backed humanoid robotics company, has published a new video of what it calls "Figure 01", a humanoid robot powered by OpenAI technology that can hold a spoken conversation.
Figure, which raised $675 million in Series B funding at a $2.6 billion valuation, received funding from Microsoft, the OpenAI Startup Fund, NVIDIA, Amazon founder Jeff Bezos, and others. The company's goal is to develop "next-generation AI models for humanoid robots", and judging by its latest video, it is well on its way to achieving that.
The new video shows an engineer chatting with Figure 01, with the engineer asking the humanoid robot, "Can I have something to eat?" to which the robot responded, "Sure thing," and then proceeded to hand over a red apple. Figure 01 was then asked why it "did what it just did" while it was picking up trash from a table. The robot explained that it gave the engineer the red apple as it was the "only edible item I could provide you with from the table."
AMD CEO Lisa Su says AI is the 'most important technology' to arrive in the last 50 years
Dr. Lisa Su, AMD's CEO, delivered a keynote at SXSW the other day focused on the future of AI. "AI is the most important technology to come on the scene in the last 50 years," Lisa Su said, adding, "Companies that learn how to leverage AI are going to win over companies that are not."
With that, AMD is all in on AI, leveraging AI to help design better chips and software. Lisa Su added that AI is a productivity tool within AMD's walls. With 2024 ushering in the era of the AI PC, there are already plenty of options out there (both mobile and desktop) powered by AMD Ryzen CPUs with inbuilt NPU hardware and Radeon RX GPUs with dedicated AI hardware.
Like other big players, Lisa Su and AMD advocate for an open-source ecosystem for AI because "no one company has all the answers" when it comes to building the AI future; "it takes a village."