Artificial Intelligence News - Page 2

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, impressive AI demos & plenty more - Page 2.

NVIDIA H100 AI GPU lead times improve: 4-month wait is now 2-3 month wait

Anthony Garreffa | Apr 11, 2024 5:18 AM CDT

NVIDIA's shortage of Hopper H100 AI GPUs is improving, with the previous 4-month wait now turning into 8-12 weeks.

It was just a few months ago that we reported NVIDIA AI GPU shipments had been "greatly accelerated," according to analysts, with waiting times for AI GPU deliveries reduced from 8-11 months to just 3-4 months. Now that 4-month wait is a 2-3 month wait.

In a new report from TrendForce, Dell is reportedly capitalizing on AI, with Dell Taiwan's General Manager saying on April 9 that the company is experiencing stronger server orders and demand in the Taiwanese market. This surge is thanks to AI needs within Taiwan's own corporate sector.

Continue reading: NVIDIA H100 AI GPU lead times improve: 4-month wait is now 2-3 month wait (full post)

Next-gen AI with 'human-level cognition' is on the brink of being released

Jak Connor | Apr 11, 2024 12:17 AM CDT

The next wave of powerful AI-powered chatbots is just around the corner, as Meta and OpenAI prepare to release Llama 3 and GPT-5, the next generations of the large language models behind popular AI tools such as ChatGPT.

The underpinning technology powering popular AI tools such as ChatGPT and DALL-E will soon be getting an upgrade, according to recent reports citing progress updates from Meta and OpenAI, the two tech giants leading the charge when it comes to AI development.

Meta's president of global affairs Nick Clegg said the company is currently preparing to release Llama 3 to the public "within the next month, actually less," and that this next generation of Llama will arrive with a suite of new features that Meta promises will be much more impressive than the current model.

Continue reading: Next-gen AI with 'human-level cognition' is on the brink of being released (full post)

Google announces Arm-based CPU for AI called Axion, 50% more performance than current-gen x86

Kosta Andreadis | Apr 10, 2024 11:34 PM CDT

With all the big tech companies investing billions in AI data centers, research, and the creation of generative AI models and tools, many are looking to create their own hardware as an alternative to NVIDIA's chips - while competing with AMD, Intel, and new AI-chip players like Microsoft.

Google is entering the race with its own Arm-based processor designed for the AI market. Like Google's tensor processing units (TPUs), which developers can access only via Google Cloud, the Arm-based CPU, called Axion, will apparently deliver "superior performance to x86 chips."

How much extra performance? According to Google, Axion offers 30% better performance than "general purpose Arm chips" and 50% better performance than "current generation x86 chips" as produced by Intel and AMD.

Continue reading: Google announces Arm-based CPU for AI called Axion, 50% more performance than current-gen x86 (full post)

Meta's next-gen in-house AI chip is made on TSMC's 5nm process, with LPDDR5 RAM, not HBM

Anthony Garreffa | Apr 10, 2024 10:32 PM CDT

Meta has just teased its next-gen AI chip -- MTIA -- which is an upgrade over its current MTIA v1 chip. The new MTIA chip is made on TSMC's newer 5nm process node, with the original MTIA chip made on 7nm.

The new Meta Training and Inference Accelerator (MTIA) chip is "fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity" for Meta's unique requirements. We've seen the best AI GPUs on the planet using HBM memory -- with HBM3 used on NVIDIA's Hopper H100 and AMD's Instinct MI300 series AI chips -- but Meta is using low-power LPDDR5 DRAM instead of HBM or server DRAM.

The social networking giant's original MTIA chip was its first-generation AI inference accelerator, designed in-house with Meta's AI workloads in mind. The company says its deep learning recommendation models are "improving a variety of experiences across our products".

Continue reading: Meta's next-gen in-house AI chip is made on TSMC's 5nm process, with LPDDR5 RAM, not HBM (full post)

AMD's upgraded Instinct MI350 with newer 4nm node, HBM3E rumored for later this year

Anthony Garreffa | Apr 10, 2024 10:08 PM CDT

AMD has already confirmed it will have refreshed variants of its Instinct MI300 series AI and HPC processors in the second half of this year, with a tweaked Instinct MI350X featuring ultra-fast HBM3E memory.

AI GPU competitor NVIDIA has its current Hopper H100 AI GPU with HBM3 memory, while its newly announced H200 AI GPU features ultra-fast HBM3E memory -- the world's first AI GPU with HBM3E memory. The next-gen Blackwell B200 AI GPU ships with ultra-fast HBM3E memory as standard.

Market research firm TrendForce recently teased AMD's new Instinct MI350X. The firm says the new Instinct MI350X will feature chiplets made on TSMC's newer 4nm process node, which is an enhanced version of TSMC's 5nm-class process node. The new TSMC N4 process node will allow AMD to choose between increasing performance or lowering power consumption on its tweaked Instinct MI350X over the MI300 series AI GPU.

Continue reading: AMD's upgraded Instinct MI350 with newer 4nm node, HBM3E rumored for later this year (full post)

Intel CEO suggests AI will create the first 'one-person, billion-dollar company'

Jak Connor | Apr 10, 2024 11:33 AM CDT

Intel recently held its Vision keynote, where the company's CEO Pat Gelsinger touched on the current state of the technology industry and how AI will be implemented in businesses and companies around the world.

Gelsinger took to the stage and opened his discussion by describing this as the age of AI, explaining that he believes in the not-so-distant future AI tools will begin interacting with other AI tools to complete tasks, resulting in entire departments becoming automated by AI bots. Gelsinger added that expanding this idea of AI-automated departments may even produce the very first one-person, billion-dollar company, a so-called "unicorn".

As you can probably imagine, Gelsinger touted Intel's role in powering the mass adoption of AI throughout businesses and even at home, going so far as to call this the age where every company becomes an AI company, which he says will drive the semiconductor TAM (total addressable market) from approximately $600 billion to more than $1 trillion by the end of the decade.

Continue reading: Intel CEO suggests AI will create the first 'one-person, billion-dollar company' (full post)

Regulatory pressure mounts on AI firms to disclose copyrighted sources

Jak Connor | Apr 10, 2024 9:50 AM CDT

US Congressman Adam Schiff is attempting to force AI companies to outline any copyrighted data used to train AI models.

On April 9, Schiff introduced the Generative AI Copyright Disclosure Act that, if passed, would require AI companies to file all relevant data used to train their AI tools with the Register of Copyrights at least 30 days before the tool is introduced to the public.

The bill would also be retroactive, meaning any AI tool that is currently available to the public would fall under the same new requirement. If the company doesn't comply with the new laws, it will face a financial penalty from the Copyright Office proportionate to the company's size and violations.

Continue reading: Regulatory pressure mounts on AI firms to disclose copyrighted sources (full post)

OpenAI reportedly trained its best AI model on a million hours of YouTube data

Jak Connor | Apr 10, 2024 4:04 AM CDT

It was only a few days ago that YouTube's CEO put out a warning directed at OpenAI, reminding the company that using any data acquired from its video platform would be a violation of its terms of use.

Now, The New York Times reports that OpenAI trained its most advanced AI model, GPT-4, on more than a million hours of transcribed YouTube videos, according to sources who spoke to the newspaper and said audio and video transcripts were fed into the company's latest AI model. These sources also said that Google, the owner of YouTube, has used audio and video transcripts to train its own AI models -- both clear violations of YouTube's terms of use.

A spokesperson for Google, Matt Bryant, told the NYT that any "unauthorized scraping or downloading of YouTube content" is prohibited. It should be noted that the NYT has filed a lawsuit against OpenAI and Microsoft for copyright infringement, alleging the company took the newspaper's content without permission.

Continue reading: OpenAI reportedly trained its best AI model on a million hours of YouTube data (full post)

Intel announces Gaudi 3 AI accelerator: 128GB HBM2e at up to 3.7TB/sec, up to 900W power

Anthony Garreffa | Apr 9, 2024 8:01 PM CDT

Intel has just unveiled its next-gen Gaudi 3 AI accelerator, which features two 5nm dies made by TSMC, featuring 64 Tensor Cores (5th Generation), 128GB of HBM2e memory, and up to 900W of power on air or water-cooling.

Each of the two dies has 32 Tensor Cores, for a total of 64, and 48MB of SRAM, for a total of 96MB per full package. The on-package SRAM delivers 12.8TB/sec of bandwidth, while the 128GB of HBM2e memory offers up to 3.7TB/sec of memory bandwidth.

The previous-gen Intel Gaudi 2 AI accelerator featured 96GB of HBM, so the new Gaudi 3 has a bigger 128GB HBM2e capacity with up to 3.7TB/sec of memory bandwidth compared to just 2.45TB/sec of memory bandwidth on Gaudi 2.

Continue reading: Intel announces Gaudi 3 AI accelerator: 128GB HBM2e at up to 3.7TB/sec, up to 900W power (full post)

Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest

Anthony Garreffa | Apr 9, 2024 1:47 AM CDT

Elon Musk has predicted that the development of artificial intelligence will get to the stage of being smarter than the smartest humans by 2025, and if not, by 2026.

In an explosive interview on X Spaces, the Tesla and SpaceX boss told Norway's sovereign wealth fund CEO Nicolai Tangen that AI was constrained by electricity supply and that the next-gen version of Grok, the AI chatbot from Musk's xAI startup, was expected to finish training by May, next month.

When discussing the timeline of developing AGI, or artificial general intelligence, Musk said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years". A monumental amount of AI GPU power will be pumped into training Musk's next-gen Grok 3, with 100,000 x NVIDIA H100 AI GPUs required for training.

Continue reading: Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest (full post)