
Artificial Intelligence - Page 72

Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos. - Page 72



ASRock's new 'world's smallest' server rack features NVIDIA's beefed-up GH200 Superchips for AI

Anthony Garreffa | Feb 29, 2024 10:08 PM CST

ASRock has just unveiled the "world's smallest" server rack with NVIDIA's latest GH200 Superchips inside, ready for AI deployment in edge environments: a smaller, more efficient -- but still uber-powerful -- rack for AI use.


The new ASRock Rack MECAI-GH200 is the smallest server rack featuring NVIDIA's new Grace Hopper GH200 Superchip AI module, which is a huge achievement for the team. There are two variants of NVIDIA's Grace superchip lineup: the first is the GH200, which pairs a single Grace CPU with 72 Neoverse V2 cores alongside a Hopper GPU with HBM3 memory.

The second model is the Grace Superchip, featuring two Grace CPUs each with 72 cores (for a total of 144 cores) paired with LPDDR5X memory. NVIDIA uses its in-house NVLink interconnect technology to link the on-board components together, too.

Continue reading: ASRock's new 'world's smallest' server rack features NVIDIA's beefed-up GH200 Superchips for AI (full post)

SK hynix VP wants to become 'total AI memory provider' for future-gen AI GPUs with HBM

Anthony Garreffa | Feb 29, 2024 8:01 PM CST

SK hynix wants to rule the AI world, it seems, with Vice President Son Ho-young outlining his plans and ambitions for the company's HBM memory inside future-gen AI hardware.


Son Ho-young, vice president of SK hynix, shared his thoughts and ambitions on taking a major role in the unstoppable AI era on February 27, saying: "Like the company has persisted in developing HBM, confident in its value, I too will continue to devote efforts to developing next-generation AI memory technology to lead the rapidly changing AI era."

As it stands, SK hynix is sold out of its entire HBM memory supply for this year -- with HBM3 on the market, HBM3e dropping soon, and HBM4 in development -- so Son's comments make sense. SK hynix makes some of the world's best memory, and the world's best memory is a key ingredient of AI GPU hardware.

Continue reading: SK hynix VP wants to become 'total AI memory provider' for future-gen AI GPUs with HBM (full post)

Real-life 'Willy Wonka Experience' results in police being called by furious parents

Jak Connor | Feb 29, 2024 9:29 AM CST

An event in Scotland marketed as a "Willy Wonka Experience" descended into pandemonium after arriving parents were so disappointed that they called the local authorities.


This real-life "Willy Wonka Experience" was hosted by a company called House of Illuminati, which used AI-generated images to market the event to families for a price of $44 a ticket. The experience was described as "immersive" by the company and was meant to be based on "Wonka," the newly released Willy Wonka movie starring Timothée Chalamet. When ticket buyers arrived at the location, they quickly realized they had been scammed, as the event came nowhere close to what the marketing materials portrayed.

Instead of a seemingly mystical landscape, families found themselves inside a warehouse filled with cheap-looking props, folding chairs and tables, and unconvincing actors. Nineteen-year-old Eva Stewart, who attended the event, told the BBC that the House of Illuminati marketed the Willy Wonka Experience as an event filled with "optical illusions and big chocolate fountains and sweets," but what was there was "practically an abandoned, empty warehouse, with hardly anything in it."

Continue reading: Real-life 'Willy Wonka Experience' results in police being called by furious parents (full post)

Intel plans on shipping 100 million CPUs for next-gen AI PCs by 2025

Anthony Garreffa | Feb 28, 2024 5:13 PM CST

Intel has made some big announcements at Mobile World Congress (MWC) in Barcelona, Spain, this week: the company says it plans on shipping CPUs for 100 million AI PCs by 2025.


That makes sense, considering analysts expect 40 million AI PCs to ship this year and another 60 million in 2025. Intel is rallying hardware partners and software developers to make the future of AI PCs happen, with its tight relationship with Microsoft helping along the way through the integration of Copilot into Windows 11 and the next-gen Windows OS coming this year.
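Reading Intel's target against those analyst estimates, the 100 million figure appears to be cumulative across the two years. A quick sketch of that arithmetic (the per-year figures are the analyst estimates quoted above, in millions of units):

```python
# Intel's 100 million target read as cumulative AI PC shipments.
ai_pcs_2024 = 40  # analyst estimate for this year (millions)
ai_pcs_2025 = 60  # analyst estimate for 2025 (millions)

total_by_end_2025 = ai_pcs_2024 + ai_pcs_2025
print(total_by_end_2025)  # 100 (million AI PCs by the end of 2025)
```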

Intel VP David Feng told Nikkei Asia: "We are in the business of selling performance [of chips], selling the performance of CPU and GPU, and the whole package of chipsets. Now we are truly in the business of selling experiences. ... I am describing something that can only be brought to life by software, so there is an increasing need for having collaborations with application developers".

Continue reading: Intel plans on shipping 100 million CPUs for next-gen AI PCs by 2025 (full post)

NVIDIA AI GPU customers 'offloading' chips, selling hard-to-buy excess AI GPU hardware

Anthony Garreffa | Feb 27, 2024 11:07 PM CST

NVIDIA has greatly improved AI GPU shipments over the last few months, with waiting periods that had swollen to 8-11 months now down to a better 3-4 months. But some AI GPU customers are offloading their high-end AI chips... yeah, they're selling the H100 AI GPUs they've paid for.


Why? Some companies purchased oodles of NVIDIA's flagship H100 Tensor Core AI GPU and have since found it easier to rent AI processing power from AI cloud providers like Amazon Web Services, Google Cloud, and Microsoft Azure.

A new report from The Information states that some companies are reselling their NVIDIA H100 AI GPUs or reducing their future AI GPU orders now that scarcity is easing and the cost of maintaining unused hardware is huge. If we rewind just 6 months, companies were tripping over themselves trying to get as many of the most powerful AI GPUs as they could. Yet even with increased AI GPU availability, reduced waiting times, and next-gen AI GPU hardware on the horizon, companies are selling their AI GPUs.

Continue reading: NVIDIA AI GPU customers 'offloading' chips, selling hard-to-buy excess AI GPU hardware (full post)

AMD confirms ultra-fast HBM3e memory is coming to Instinct MI300 refresh AI GPU

Anthony Garreffa | Feb 27, 2024 10:08 PM CST

AMD has teased that it's working on a refresh of its just-released Instinct MI300 AI GPU family, which will feature the latest ultra-fast HBM3e memory standard that Micron and Samsung are gearing up to mass-produce.


We heard rumors of AMD's next-gen Instinct MI400 series AI GPU just a couple of days ago, along with a refreshed MI300 AI GPU with faster HBM3e memory, and now AMD's own Chief Technology Officer, Mark Papermaster, has confirmed just that. A refreshed AMD Instinct MI300 AI GPU is on the way, with HBM3e memory.

AMD Chief Technology Officer Mark Papermaster said during a presentation at the Arete Investor Webinar Conference, which Seeking Alpha reported on: "We are not standing still. We made adjustments to accelerate our roadmap with both memory configurations around the MI300 family, derivatives of MI300, the generation next. [...] So, we have 8-Hi stacks. We architected for 12-Hi stacks. We are shipping with MI300 HBM3. We have architected for HBM3E".

Continue reading: AMD confirms ultra-fast HBM3e memory is coming to Instinct MI300 refresh AI GPU (full post)

Samsung teases industry-first 36GB HBM3e 12-Hi memory stack, coming soon

Anthony Garreffa | Feb 27, 2024 9:36 PM CST

Samsung has just announced it has completed development of its new 12-Hi 36GB HBM3e memory stacks, hot on the heels of Micron's announcement that it has started mass production of its 8-Hi 24GB HBM3e memory... what an announcement.


Samsung's new HBM3e memory, codenamed Shinebolt, features 12-Hi 36GB stacks with 12 x 24Gb memory devices placed on a logic die with a 1024-bit memory interface. Samsung's new 36GB HBM3e memory modules feature 10GT/s transfer rates, offering next-gen AI GPUs up to 1.28TB/sec of memory bandwidth per stack, the industry's highest per-device (or per-module) memory bandwidth.
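Those figures check out on the back of an envelope. A quick sketch of the math, using the 12 x 24Gb stack layout, 1024-bit interface, and 10GT/s pin rate quoted above:

```python
# Back-of-envelope check of Samsung's quoted 12-Hi HBM3e figures.
GBIT_PER_DEVICE = 24    # each DRAM device in the stack is 24Gb
DEVICES_PER_STACK = 12  # 12-Hi stack
INTERFACE_BITS = 1024   # interface width per HBM stack
RATE_GT_S = 10          # 10 GT/s per pin

capacity_gb = GBIT_PER_DEVICE * DEVICES_PER_STACK / 8            # gigabits -> gigabytes
bandwidth_tb_s = RATE_GT_S * INTERFACE_BITS / 8 / 1000           # GB/s -> TB/s

print(capacity_gb)     # 36.0 GB per stack
print(bandwidth_tb_s)  # 1.28 TB/s per stack
```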

Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics, said in the press release: "The industry's AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need. This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era".

Continue reading: Samsung teases industry-first 36GB HBM3e 12-Hi memory stack, coming soon (full post)

NVIDIA is the 'GPU cartel', will delay shipments if AI GPU customers talk to AI GPU competitors

Anthony Garreffa | Feb 27, 2024 8:37 PM CST

NVIDIA is reportedly delaying AI GPU shipments if it finds out its customer is talking to AI GPU competitors like AMD or Intel, according to a new report from The Wall Street Journal.


Jonathan Ross, CEO of rival chip startup Groq, told The Wall Street Journal: "A lot of people that we meet with say that if NVIDIA were to hear that we were meeting, they would disavow it. The problem is you have to pay NVIDIA a year in advance, and you may get your hardware in a year, or it may take longer, and it's, 'Aw shucks, you're buying from someone else, and I guess it's going to take a little longer.'"

Ex-NVIDIA GeForce and ex-AMD Radeon GPU boss Scott Herkelman chimed in on X, where he posted: "This happens more than you expect, NVIDIA does this with DC customers, OEMs, AIBs, press, and resellers. They learned from GPP to not put it into writing. They just don't ship after a customer has ordered. They are the GPU cartel and they control all supply".

Continue reading: NVIDIA is the 'GPU cartel', will delay shipments if AI GPU customers talk to AI GPU competitors (full post)

Micron announces HBM3e memory enters volume production, ready for NVIDIA's new H200 AI GPU

Anthony Garreffa | Feb 26, 2024 6:30 PM CST

Micron has just announced it has started volume production of its bleeding-edge HBM3e memory, with the company's HBM3e known good stack dies (KGSDs) shipping as part of NVIDIA's upcoming H200 AI GPU.


NVIDIA's new beefed-up H200 AI GPU will feature up to 141GB of ultra-fast HBM3e memory from Micron, built from its mass-produced 24GB 8-Hi HBM3e stacks, with data transfer rates of 9.2GT/s and a peak memory bandwidth of over 1.2TB/sec per stack. This is a 44% increase in memory bandwidth over HBM3, which provides the extra AI grunt the H200 has over the H100 AI GPU and its regular HBM3 memory.

The 141GB of HBM3e memory on the NVIDIA H200 AI GPU will feature up to 4.8TB/sec of memory bandwidth, which is up from the 80GB of HBM3 memory and up to 3.35TB/sec of memory bandwidth on the H100 AI GPU.
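The per-stack numbers above can be sanity-checked quickly. This is a sketch: the 1024-bit HBM interface width and HBM3's 6.4GT/s pin rate are assumptions taken from the published HBM specs, not from Micron's announcement itself.

```python
# Back-of-envelope check of Micron's quoted HBM3e figures vs HBM3.
INTERFACE_BITS = 1024  # bits per HBM stack interface (assumed from the HBM spec)
HBM3E_RATE_GT_S = 9.2  # Micron HBM3e pin rate (from the announcement)
HBM3_RATE_GT_S = 6.4   # standard HBM3 pin rate (assumed)

hbm3e_bw_gb_s = HBM3E_RATE_GT_S * INTERFACE_BITS / 8    # per-stack bandwidth in GB/s
gain_pct = (HBM3E_RATE_GT_S / HBM3_RATE_GT_S - 1) * 100

print(hbm3e_bw_gb_s)    # 1177.6 GB/s per stack, i.e. roughly 1.2 TB/s
print(round(gain_pct))  # 44 (% increase over HBM3)
```

The 44% figure in the article falls straight out of the pin-rate ratio between the two memory generations.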

Continue reading: Micron announces HBM3e memory enters volume production, ready for NVIDIA's new H200 AI GPU (full post)

OpenAI's recent Sora text-to-video tech has blown China away, 'cold water' on their AI dreams

Anthony Garreffa | Feb 26, 2024 5:40 PM CST

OpenAI blew everyone on this planet out of the water with its surprise Sora text-to-video AI service, forcing China's entire AI industry to work out how to respond, as the country feels like it has lost a battle it thought it had a huge chance of winning.


China has been at the forefront of the global AI race, but the country has been on the back foot since OpenAI released ChatGPT back in 2022, and now that text-to-video Sora has been teased... China is speechless. It thought it was succeeding in AI, but it's so far behind that it's not even in the game.

The country has vast stores of data to feed into its AI, with functions like facial recognition superior to those of many countries. But the huge advancements in generative AI elsewhere -- the US, for example -- across text, images, and video have changed the AI landscape completely, leaving China lagging behind.

Continue reading: OpenAI's recent Sora text-to-video tech has blown China away, 'cold water' on their AI dreams (full post)
