
Artificial Intelligence News

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more.

MSI details exclusive 'AI Boost' feature that overclocks the NPU for a performance boost

Jak Connor | Oct 10, 2024 10:33 AM CDT

MSI has officially rolled out a new feature on its new range of X870 and Z890 motherboards designed for Intel and AMD's new generation of CPUs, and the new feature enables more performance to be squeezed out of onboard NPUs.


The new motherboards for AMD's Ryzen 9000 series and Intel's Core Ultra (Series 2) come with brand-new chipsets that bring a range of hardware improvements. To accompany them, MSI has overhauled its BIOS interface into what it's calling Click BIOS X, and during a recent tour of MSI's motherboard factory in Shenzhen, China, we spent some time with setups featuring Intel's new Arrow Lake CPUs and the new BIOS interface.

All of MSI's new motherboards ship with the new BIOS interface, and one of its built-in features is AI Boost. This was a particularly impressive feature: the setting enables overclocking of the NPU, which MSI claims improves AI performance and efficiency by up to 5%. According to MSI, enabling the feature provides "faster data processing, enhanced AI performance, improved efficiency in AI tasks, better multi-tasking capabilities, and maximized hardware utilization."

Continue reading: MSI details exclusive 'AI Boost' feature that overclocks the NPU for a performance boost (full post)

OpenAI gets one of the first engineering builds of NVIDIA's new Blackwell DGX B200 AI system

Anthony Garreffa | Oct 9, 2024 7:07 PM CDT

OpenAI has just received one of the first engineering builds of the NVIDIA DGX B200 AI server, posting a picture of its new delivery on X:


Inside, the NVIDIA DGX B200 is a unified AI platform for training, fine-tuning, and inference built around NVIDIA's new Blackwell B200 AI GPUs. Each DGX B200 system packs 8 x B200 AI GPUs with up to 1.4TB of HBM3E memory and up to 64TB/sec of memory bandwidth. NVIDIA's new DGX B200 AI server can pump out 72 petaFLOPS of training performance and 144 petaFLOPS of inference performance.
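A quick back-of-the-envelope split of those system-level figures across the eight GPUs (using the article's rounded numbers, so treat the results as approximations rather than official per-GPU specs):

```python
# Per-GPU figures derived from the article's system-level DGX B200 numbers.
# These are rounded marketing figures, so the results are approximations,
# not official per-GPU specifications.
NUM_GPUS = 8
SYSTEM_HBM_GB = 1400      # "up to 1.4TB" of HBM across the system
SYSTEM_BW_TBPS = 64       # "up to 64TB/sec" of memory bandwidth
INFER_PFLOPS = 144        # system-level inference performance

hbm_per_gpu_gb = SYSTEM_HBM_GB / NUM_GPUS        # 175 GB per GPU
bw_per_gpu_tbps = SYSTEM_BW_TBPS / NUM_GPUS      # 8 TB/sec per GPU
infer_pflops_per_gpu = INFER_PFLOPS / NUM_GPUS   # 18 petaFLOPS per GPU

print(hbm_per_gpu_gb, bw_per_gpu_tbps, infer_pflops_per_gpu)
```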

OpenAI CEO Sam Altman is well aware of the advancements of NVIDIA's new Blackwell GPU architecture, recently saying: "Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We're excited to continue working with NVIDIA to enhance AI compute".

Continue reading: OpenAI gets one of the first engineering builds of NVIDIA's new Blackwell DGX B200 AI system (full post)

AI just won a Nobel Prize for its ability to predict protein structures

Jak Connor | Oct 9, 2024 8:15 AM CDT

Artificial intelligence systems have become so sophisticated that they are now being awarded Nobel Prizes for their academic contributions, and AI has just picked up its second Nobel Prize of the week, this time for protein structure prediction.


Geoffrey Hinton, a computer scientist whose work on deep learning underpins today's AI models, was awarded the Nobel Prize in physics alongside Princeton University professor John Hopfield. Both researchers were recognized for their foundational contributions to deep learning, the technology we now broadly call AI.

Now, AI has done it again, with the Nobel Prize in chemistry going to Demis Hassabis, the cofounder and CEO of Google DeepMind, and John M. Jumper, a director at DeepMind, for the creation of an AI capable of accurately predicting the structures of proteins. Half of the prize goes to Hassabis and Jumper, while the other half goes to David Baker, a professor of biochemistry at the University of Washington, recognized for his work on computational protein design. The three laureates share a prize pot of roughly $1 million.

Continue reading: AI just won a Nobel Prize for its ability to predict protein structures (full post)

NVIDIA, Foxconn to build Taiwan's fastest supercomputer: with Blackwell GB200 NVL72 AI servers

Anthony Garreffa | Oct 8, 2024 11:11 AM CDT

We knew it was coming, but now it's official: NVIDIA is teaming with Foxconn to build Taiwan's most powerful supercomputer powered by its new Blackwell AI GPU architecture.


NVIDIA and Foxconn announced the new Hon Hai Kaohsiung Super Computing Center at Foxconn's recent Hon Hai Tech Day; it will be built around NVIDIA's groundbreaking new Blackwell GPU architecture. The new AI supercomputer will feature GB200 NVL72 AI servers, with a total of 64 racks and 4608 Tensor Core GPUs.

Foxconn expects the system to deliver over 90 exaflops of AI performance, which would make it the fastest supercomputer in Taiwan. Once it's operational, Foxconn plans to use it to power breakthroughs in cancer research, large language model development, and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.
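The rack and GPU counts above line up neatly with the server's name, as a quick sanity check shows:

```python
# Sanity check on the article's figures: 4608 Tensor Core GPUs spread
# across 64 racks works out to exactly 72 GPUs per rack, matching the
# "NVL72" in the server's name (one GB200 NVL72 rack hosts 72 GPUs).
TOTAL_GPUS = 4608
RACKS = 64

gpus_per_rack = TOTAL_GPUS // RACKS
print(gpus_per_rack)  # 72
```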

Continue reading: NVIDIA, Foxconn to build Taiwan's fastest supercomputer: with Blackwell GB200 NVL72 AI servers (full post)

AMD should be TSMC's next huge customer for Arizona: HPC AI chips made in the USA in 2025

Anthony Garreffa | Oct 8, 2024 1:44 AM CDT

AMD is reportedly set to make next-gen, high-performance HPC AI chips at TSMC's new fab in Arizona, becoming the second major company to manufacture next-gen chips there... the first is Apple.


In a new post, insider Tim Culpan reports that AMD is "lined up to produce high-performance computing chips from TSMC Arizona, making the American fabless chip designer another client for the new US facility," according to his sources.

Culpan explains that production is already in the planning phase, with tape-out and manufacturing of AMD's next-gen HPC chips expected to kick off on TSMC's 5nm process node in 2025. Apple is the first customer of TSMC's new Arizona fab, which will produce some of the A16 processors found inside the iPhone 15 family of handsets.

Continue reading: AMD should be TSMC's next huge customer for Arizona: HPC AI chips made in the USA in 2025 (full post)

Former Google CEO says AI will solve the climate issue, 'we're not organized to do it'

Kosta Andreadis | Oct 7, 2024 11:01 PM CDT

"We're not going to hit the climate goals anyway because we're not organized to do it." That's former Google CEO Eric Schmidt responding to a question about the rise in energy consumption due to the AI boom at SCSP's inaugural AI+Energy Summit.


AI is putting a strain on energy grids everywhere due to the sheer amount of power required to run complex generative AI systems, making it a pressing issue.

Eric Schmidt's response is somewhat cynical but indicative of the debate surrounding how governments, corporations, and people everywhere should be dealing with climate change and its potentially devastating impacts. His response wasn't simply a shoulder shrug, as Schmidt confirmed that energy concerns surrounding AI "will be a problem."

Continue reading: Former Google CEO says AI will solve the climate issue, 'we're not organized to do it' (full post)

Meta smart glasses can be used to secretly identify people's faces

Jak Connor | Oct 4, 2024 8:02 AM CDT

Meta's smart glasses have been converted into a facial recognition device that enables the user to identify random people in real time.


The conversion of Meta's smart glasses into a facial recognition device is the work of two Harvard students, who call their project I-XRAY. Here's how it works: the students took advantage of the glasses' ability to livestream directly to Instagram, combining it with an AI program that monitors the live stream and identifies any faces in the video. Images of those faces are then captured and run through public databases, which return phone numbers, names, addresses, and other personal information to the wearer of the glasses through a phone app.

AnhPhu Nguyen posted a video detailing the process of creating the glasses, in which they are used to identify classmates and, perhaps more shockingly, strangers in public, whom the students pretended to know based on the information the device obtained about them. For those concerned about the potential impact of releasing such a product to market, fear not: the students behind the project say the glasses were created to raise awareness of potential privacy issues with smart glasses.

Continue reading: Meta smart glasses can be used to secretly identify people's faces (full post)

NVIDIA CEO: Blackwell is in full production, as planned, and demand for Blackwell is 'insane'

Anthony Garreffa | Oct 3, 2024 1:22 AM CDT

NVIDIA CEO Jensen Huang spoke with CNBC earlier today, saying that the company's Blackwell AI GPUs are in full production as planned, and that "demand is insane, everyone wants to be first, everyone wants to have the most".


Jensen told CNBC: "the thing that we have done with Blackwell, and what we have announced, is this new AI infrastructure generation every single year, and so we're going to update our platform every single year, and the reason for that is if we can increase the performance, as we've done with Hopper to Blackwell, by 2-3x each year, we're effectively increasing the revenues or the throughput of our customers on these infrastructures by a couple (2-3x) each year, decreasing, or how you can think about it, decreasing cost every 2 or 3 years".

Jensen continued: "reducing energy [consumption] every single year, and so at a time when the technology is moving so fast, it gives us an opportunity to triple down and to really drive the innovation cycle, so that we can increase capabilities, increase our throughput, decrease our cost, decrease our energy consumption, and so we're on a path to do that, and everything's on track".

Continue reading: NVIDIA CEO: Blackwell is in full production, as planned, and demand for Blackwell is 'insane' (full post)

Rambus details HBM4 memory controller: up to 10Gb/s, 2.56TB/sec bandwidth, 64GB per stack

Anthony Garreffa | Oct 2, 2024 3:27 AM CDT

Rambus has provided more details on its upcoming HBM4 memory controller, which offers some huge upgrades over current HBM3 and HBM3E memory controllers.


JEDEC is still finalizing the HBM4 memory specifications, with Rambus teasing its next-gen HBM4 memory controller that will be prepared for next-gen AI and data center markets, continuing to expand the capabilities of existing HBM DRAM designs.

Rambus' new HBM4 controller starts at 6.4Gb/s per pin, matching first-gen HBM3 speeds, yet delivers more bandwidth than even faster HBM3E memory thanks to HBM4's wider per-stack interface, while keeping the same 16-Hi stack and 64GB max capacity design. HBM4's starting bandwidth is 1638GB/sec (1.64TB/sec), 33% more than HBM3E and double that of HBM3, and the controller scales up to 10Gb/s per pin for the headline 2.56TB/sec figure.
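The quoted bandwidth figures follow from simple per-pin math, assuming HBM4's 2048-bit per-stack interface (double HBM3's 1024-bit) — an assumption based on the draft JEDEC direction, not something the announcement spells out:

```python
# Rough bandwidth math behind the quoted HBM figures.
# Assumes a 2048-bit per-stack interface for HBM4 (vs 1024-bit for
# HBM3) - an assumption, since JEDEC is still finalizing the spec.
def stack_bandwidth_gbps(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Per-stack bandwidth in GB/sec: per-pin rate * pins / 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

hbm4_start = stack_bandwidth_gbps(6.4, 2048)   # 1638.4 GB/sec starting point
hbm4_max   = stack_bandwidth_gbps(10.0, 2048)  # 2560 GB/sec, the 2.56TB/sec headline
hbm3       = stack_bandwidth_gbps(6.4, 1024)   # 819.2 GB/sec, half of HBM4's start

print(hbm4_start, hbm4_max, hbm3)
```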

Continue reading: Rambus details HBM4 memory controller: up to 10Gb/s, 2.56TB/sec bandwidth, 64GB per stack (full post)

NVIDIA's new GB200 NVL72 AI server: 'highest-power-consuming server in HISTORY' with 132kW TDP

Anthony Garreffa | Oct 2, 2024 3:10 AM CDT

NVIDIA's upcoming GB200 NVL72 AI server faces some big development challenges, mostly stemming from its insane 132kW TDP requirement, which makes it the highest-power-consuming server in HISTORY.


In a new post on Medium, analyst and insider Ming-Chi Kuo said that NVIDIA has halted development of its GB200 NVL36x2 AI server (the dual-rack, 72-GPU version), which you can read more about in the links below. As for NVL72, Kuo says the biggest development challenges stem from the 132kW thermal design power (TDP), with NVIDIA and its supply chain requiring more time to solve "unprecedented technology issues".

Kuo points out that the TDP "refers to average power consumption during continuous operation. If poor design leads to peak power consumption (electrical design point (EDP) as NVIDIA calls it) exceeding TDP, two or more sidecars may be required. This would not only increase cooling design complexity and production difficulties but also negate NVL72's data center space-saving advantage".

Continue reading: NVIDIA's new GB200 NVL72 AI server: 'highest-power-consuming server in HISTORY' with 132kW TDP (full post)
