Artificial Intelligence - Page 43

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 43.

Intel CEO suggests AI will create the first 'one-person, billion-dollar company'

Jak Connor | Apr 10, 2024 11:33 AM CDT

Intel recently held its Vision keynote, where CEO Pat Gelsinger touched on the current state of the technology industry and how AI will be implemented across businesses and companies around the world.

Gelsinger took to the stage and opened his discussion by describing this as the age of AI, arguing that in the not-so-distant future AI tools will begin interacting with other AI tools to complete tasks, resulting in entire departments being automated by AI bots. Gelsinger added that expanding this idea of AI-automated departments may even produce the very first one-person, billion-dollar company, otherwise known as a "unicorn".

As you can probably imagine, Gelsinger touted Intel's role in powering the mass adoption of AI throughout businesses and even at home, going so far as to call this the age where every company becomes an AI company, which he said will drive the semiconductor TAM [total addressable market] from approximately $600 billion to more than $1 trillion by the end of the decade.
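As a rough back-of-the-envelope on that TAM claim, the implied annual growth rate can be computed directly; this sketch assumes the $600 billion figure is for 2024 and the $1 trillion target is for 2030, which are our reading of "by the end of the decade" rather than dates Gelsinger gave:

```python
# Implied compound annual growth rate (CAGR) for the semiconductor TAM claim
# Assumption: ~$600B in 2024 growing to ~$1T by 2030
start_tam, end_tam = 600e9, 1e12
years = 2030 - 2024

cagr = (end_tam / start_tam) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 8.9% per year
```

Even hitting the lower bound of "more than $1 trillion" would need nearly 9% compound growth every year, well above the industry's long-run average.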

Continue reading: Intel CEO suggests AI will create the first 'one-person, billion-dollar company' (full post)

Regulatory pressure mounts on AI firms to disclose copyrighted sources

Jak Connor | Apr 10, 2024 9:50 AM CDT

US Congressman Adam Schiff is attempting to force AI companies to outline any copyrighted data used to train AI models.

On April 9, Schiff introduced the Generative AI Copyright Disclosure Act that, if passed, would require AI companies to file all relevant data used to train their AI tools with the Register of Copyrights at least 30 days before the tool is introduced to the public.

The bill would also be retroactive, meaning any AI tool currently available to the public would fall under the same requirement. If a company doesn't comply, it will face a financial penalty from the Copyright Office proportionate to the company's size and the scale of its violations.

Continue reading: Regulatory pressure mounts on AI firms to disclose copyrighted sources (full post)

OpenAI reportedly trained its best AI model on a million hours of YouTube data

Jak Connor | Apr 10, 2024 4:04 AM CDT

It was only a few days ago that YouTube's CEO put out a warning directed at OpenAI, reminding the company that using any data acquired from its video platform would be a violation of its terms of use.

Now, reports are surfacing from The New York Times that OpenAI trained its most advanced AI model, GPT-4, on more than a million hours of transcribed YouTube videos. Sources who spoke to the newspaper said audio and video transcripts were fed into the company's latest AI model. Moreover, these sources also said that Google, the owner of YouTube, has used audio and video transcripts to train its own AI models, both of which would be clear violations of YouTube's terms of use.

A spokesperson for Google, Matt Bryant, told the NYT that any "unauthorized scraping or downloading of YouTube content" is prohibited. It should be noted that the NYT has filed a lawsuit against OpenAI and Microsoft for copyright infringement, alleging the company took the newspaper's content without permission.

Continue reading: OpenAI reportedly trained its best AI model on a million hours of YouTube data (full post)

Intel announces Gaudi 3 AI accelerator: 128GB HBM2e at up to 3.7TB/sec, up to 900W power

Anthony Garreffa | Apr 9, 2024 8:01 PM CDT

Intel has just unveiled its next-gen Gaudi 3 AI accelerator, which features two 5nm dies made by TSMC, 64 Tensor Cores (5th Generation), 128GB of HBM2e memory, and up to 900W of power with air or water cooling.

Intel puts 32 Tensor Cores on each die, for a total of 64 Tensor Cores per package. Each die features 48MB of SRAM, for a total of 96MB of SRAM per full package, and that SRAM offers 12.8TB/sec of bandwidth, backed by the HBM memory on the Gaudi 3. The 128GB of HBM2e memory features up to 3.7TB/sec of memory bandwidth.

The previous-gen Intel Gaudi 2 AI accelerator featured 96GB of HBM, so the new Gaudi 3 offers a larger 128GB HBM2e capacity and up to 3.7TB/sec of memory bandwidth, compared to just 2.45TB/sec on Gaudi 2.
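The generational uplift in those figures works out as a simple ratio; a quick sketch using only the numbers quoted in this article:

```python
# Gaudi 2 vs Gaudi 3 memory figures as quoted above
gaudi2 = {"hbm_gb": 96, "bw_tbs": 2.45}
gaudi3 = {"hbm_gb": 128, "bw_tbs": 3.7}

capacity_uplift = gaudi3["hbm_gb"] / gaudi2["hbm_gb"]   # ~1.33x more HBM
bandwidth_uplift = gaudi3["bw_tbs"] / gaudi2["bw_tbs"]  # ~1.51x more bandwidth
print(f"capacity: {capacity_uplift:.2f}x, bandwidth: {bandwidth_uplift:.2f}x")
```

In other words, Gaudi 3's bandwidth grew faster than its capacity generation-over-generation, which matters for memory-bound AI workloads.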

Continue reading: Intel announces Gaudi 3 AI accelerator: 128GB HBM2e at up to 3.7TB/sec, up to 900W power (full post)

Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest

Anthony Garreffa | Apr 9, 2024 1:47 AM CDT

Elon Musk has predicted that the development of artificial intelligence will get to the stage of being smarter than the smartest humans by 2025, and if not, by 2026.

In an explosive interview on X Spaces, the Tesla and SpaceX boss told Norway's sovereign wealth fund CEO Nicolai Tangen that AI was constrained by electricity supply, and that the next-gen version of Grok, the AI chatbot from Musk's xAI startup, was expected to finish training by May, next month.

When discussing the timeline for developing AGI, or artificial general intelligence, Musk said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years". A monumental amount of AI GPU power will be pumped into training Musk's next-gen Grok 3, with 100,000 NVIDIA H100 AI GPUs required for training.

Continue reading: Elon Musk says AGI will be smarter than the smartest humans by 2025, 2026 at the latest (full post)

Elon Musk says training next-gen Grok 3 will require 100,000 NVIDIA H100 AI GPUs

Anthony Garreffa | Apr 9, 2024 12:05 AM CDT

Elon Musk has talked about training his next-gen Grok 3 AI chatbot, saying it will require an insane 100,000 NVIDIA H100 AI GPUs.

During a recent X Spaces chat, the SpaceX and Tesla boss said that Grok 2 used around 20,000 NVIDIA H100 AI GPUs for training, but the new Grok 3 will require a monster 100,000 separate NVIDIA H100 AI GPUs, a mammoth amount of AI compute power.

Musk said that the upcoming Grok model and beyond will require 100,000 NVIDIA H100 AI GPUs, so we can expect Grok 4 to require an unimaginable amount of AI GPU compute power. NVIDIA has also announced its new Blackwell B200 AI GPU, which I'm sure Elon has been eyeing; it will be pumped out later this year and flood the market in 2025 with brute-force AI GPU performance.
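The generation-over-generation jump in training hardware is easy to put in numbers; this sketch uses the GPU counts Musk quoted, and the Grok 4 figure is purely our hypothetical extrapolation of the same trend, not anything Musk or xAI has stated:

```python
# GPU counts quoted in the article for each Grok generation
grok2_h100s = 20_000
grok3_h100s = 100_000

scale_factor = grok3_h100s / grok2_h100s  # 5.0x jump from Grok 2 to Grok 3
# Hypothetical: if Grok 4 followed the same 5x trend (illustrative only)
grok4_guess = int(grok3_h100s * scale_factor)  # 500,000 GPUs
print(f"Grok 2 -> Grok 3: {scale_factor:.0f}x; same-trend Grok 4: {grok4_guess:,}")
```

A 5x jump per generation would quickly outstrip even NVIDIA's production capacity, which is part of why next-gen parts like Blackwell matter so much here.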

Continue reading: Elon Musk says training next-gen Grok 3 will require 100,000 NVIDIA H100 AI GPUs (full post)

Samsung announces it has manufactured a sample of 16-stack HBM for AI GPUs

Anthony Garreffa | Apr 8, 2024 11:33 PM CDT

Samsung has announced that it has manufactured a sample of its brand-new 16-stack HBM. The company manufactured the chip with hybrid bonding last week.

Samsung vice president Kim Dae-woo said the South Korean giant made its new 16-layer HBM with HBM3 but plans to use HBM4 to improve productivity. Due to alignment issues, Samsung was expected to use hybrid bonding for just one or two stacks of the HBM chip but applied the hybrid bonding technique to all 16 stacks.

The 16-stack HBM memory sample was made using equipment from Samsung's fab equipment subsidiary, Semes. Kim noted that Samsung has considered using either hybrid bonding or thermal compression non-conductive film for HBM4, which the company plans to sample in 2025 before mass-producing in 2026.

Continue reading: Samsung announces it has manufactured a sample of 16-stack HBM for AI GPUs (full post)

Elon Musk has between 30,000 and 350,000 x NVIDIA H100 AI GPUs training Tesla and xAI

Anthony Garreffa | Apr 8, 2024 7:33 PM CDT

We know SpaceX and Tesla boss Elon Musk loves hardware as much as he loves AI, so it's no surprise that he's posting on X that Tesla has the second-highest H100 AI GPU count in the world.

@thetechbrother posted on X that Meta has 350,000+ NVIDIA H100 AI GPUs, alongside a list of how many H100 AI GPUs a bunch of other companies -- including Tesla -- have so far. Elon replied to that, saying "this is not accurate. Tesla would be second highest and X/xAI would be third if measured correctly".

Meta has 350,000+ NVIDIA H100 AI GPUs right now, so if Tesla has the second-highest count, the electric vehicle giant would have somewhere between 30,000 and 350,000 H100 AI GPUs. Lambda in the US is second-highest on these charts, with 30,000 H100 AI GPUs in operation, but now we know from the horse's mouth himself -- Elon Musk -- that Tesla takes that second spot, with somewhere between 30K and 350K H100 AI GPUs. That's a lot of AI compute power.

Continue reading: Elon Musk has between 30,000 and 350,000 x NVIDIA H100 AI GPUs training Tesla and xAI (full post)

Samsung wins advanced chip packaging order from NVIDIA for AI GPUs, TSMC isn't enough

Anthony Garreffa | Apr 7, 2024 10:33 PM CDT

Samsung has reportedly won a contract with NVIDIA to provide the AI GPU giant with advanced 2.5D packaging.

The news comes from TheElec, whose sources say Samsung's Advanced Package (AVP) team will provide an interposer and I-Cube -- its 2.5D package -- to NVIDIA. Other companies will produce the High-Bandwidth Memory (HBM) and GPU wafers, with the 2.5D packaging housing the chip dies (CPU, GPU, I/O, HBM, and others) placed horizontally onto the interposer.

Samsung calls its 2.5D packaging technology I-Cube, while TSMC calls its 2.5D packaging CoWoS (Chip-on-Wafer-on-Substrate). NVIDIA's entire fleet of A100 and H200 series AI GPUs uses 2.5D packaging, and more importantly, the monster new 208 billion transistor Blackwell B200 AI GPU uses the same advanced packaging.

Continue reading: Samsung wins advanced chip packaging order from NVIDIA for AI GPUs, TSMC isn't enough (full post)

OpenAI's Sam Altman and Jony Ive are teaming up on a new personal AI device, but they need cash

Oliver Haslam | Apr 6, 2024 3:30 PM CDT

Jony Ive, Apple's former head of design, is reportedly working with OpenAI CEO Sam Altman on a new AI-powered personal device, with the pair now seeking funding for the project.

The news, shared by The Information, means that the pair have teamed up on what could be a new device similar to the Humane AI Pin, or something along those lines. Notably, Altman is also a major investor in Humane, so there are clear links there.

Details on exactly what the product will be are hard to come by right now, but it apparently won't be like a smartphone, something that will surely be music to the ears of Apple's executives. This also isn't the first time we've heard that the pair are working together, after information surfaced last fall. However, things now seem to have progressed somewhat, with the two thought to be seeking funding to the tune of $1 billion.

Continue reading: OpenAI's Sam Altman and Jony Ive are teaming up on a new personal AI device, but they need cash (full post)
