Artificial Intelligence - Page 29

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

SK hynix starts mass production of 12-layer HBM3E memory: 36GB capacity per module @ 9.6Gbps

Anthony Garreffa | Sep 26, 2024 6:22 AM CDT

SK hynix has announced volume production of its new 12-layer HBM3E memory, with up to 36GB capacities and speeds of 9.6Gbps.

The South Korean memory leader has started mass production of the world's first 12-layer HBM3E memory at 36GB, the largest capacity of any HBM to date. SK hynix plans to supply mass-produced 12-layer HBM3E chips to customers (including NVIDIA) within the next 12 months, just 6 months after it became the first in the industry to ship 8-layer HBM3E to customers in March 2024.

SK hynix is key to the world of AI chips: NVIDIA uses its HBM3 and HBM3E memory inside the Hopper H100 and H200 AI GPUs, and HBM3E also features in the new Blackwell AI GPUs. SK hynix has been leading the industry in HBM, with its new 12-layer HBM3E chips running at up to 9.6Gbps per pin, the highest memory speed on the market.
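
Those headline figures can be sanity-checked with some quick arithmetic, assuming the standard 1024-bit HBM3E stack interface and 24Gb (3GB) DRAM dies - details the announcement itself doesn't spell out:

```python
# Rough sanity check of SK hynix's 12-layer HBM3E figures.
# Assumes a 1024-bit stack interface and 24Gbit (3GB) DRAM dies.
layers = 12
die_capacity_gb = 3           # 24Gbit per DRAM layer
pin_rate_gbps = 9.6           # per-pin data rate
bus_width_bits = 1024         # HBM3E interface width per stack

capacity_gb = layers * die_capacity_gb            # total stack capacity
peak_bw_gbs = pin_rate_gbps * bus_width_bits / 8  # theoretical peak, GB/s

print(capacity_gb)   # 36 (GB per stack)
print(peak_bw_gbs)   # 1228.8 (GB/s, ~1.2 TB/s theoretical peak per stack)
```

That puts a single 12-layer stack at roughly 1.2TB/sec of peak bandwidth, consistent with the per-module numbers SK hynix is quoting.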

Continue reading: SK hynix starts mass production of 12-layer HBM3E memory: 36GB capacity per module @ 9.6Gbps (full post)

Avatar director James Cameron joins the board of Stability AI, will use AI in future filmmaking

Anthony Garreffa | Sep 25, 2024 9:11 AM CDT

Stability AI has just announced that legendary filmmaker, technology innovator, and visual effects pioneer James Cameron has joined its Board of Directors.

Stability AI is the team behind the well-known Stable Diffusion AI model, and in its press release the AI startup described Cameron as joining the board as a driving force in cutting-edge technology paired with visionary storytelling.

Cameron joining the Stability AI team "represents a significant step forward in Stability AI's mission to transform visual media. Both Cameron and Stability AI operate at the intersection of emerging technology and creativity. Cameron's artist-centric perspective, paired with his business and technical acumen, will support Stability AI in continuing to unlock new opportunities to empower creators to tell stories in ways once unimaginable".

Continue reading: Avatar director James Cameron joins the board of Stability AI, will use AI in future filmmaking (full post)

Intel's new Gaudi 3 AI accelerator launched: cheaper than NVIDIA H100 AI GPU, but also slower

Anthony Garreffa | Sep 25, 2024 8:17 AM CDT

Intel has officially launched its Gaudi 3 AI accelerator. The new AI chip comes in slower than NVIDIA's dominant H100 and its new HBM3E-fueled H200 AI GPUs, so Intel is positioning Gaudi 3 as the cheaper option, with a lower total cost of ownership (TCO).

Inside, the new Intel Gaudi 3 AI accelerator features two chiplets with 64 tensor processor cores (TPCs, 256-bit wide vector processors), eight matrix multiplication engines (MMEs, 256x256 MAC structures with FP32 accumulators), and 96MB of on-die SRAM cache with 19.2TB/s of bandwidth.

Gaudi 3 also features 24 x 200GbE networking interfaces and 14 media engines, with the media engines capable of handling H.265, H.264, and VP9 to support vision processing. Intel's new Gaudi 3 AI accelerator features 128GB of HBM2E memory with up to 3.67TB/sec of memory bandwidth.
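
The quoted 3.67TB/sec lines up with a simple back-of-envelope calculation, assuming the 128GB comes from eight 16GB HBM2E stacks, each with the standard 1024-bit interface (the stack count is Intel's published configuration; the derived per-pin rate is my arithmetic, not an Intel spec sheet number):

```python
# Deriving the per-pin data rate implied by Gaudi 3's quoted memory bandwidth,
# assuming 8 x 16GB HBM2E stacks with 1024-bit interfaces each.
stacks = 8
stack_width_bits = 1024
total_bw_gbs = 3670.0                            # 3.67 TB/s quoted

per_stack_gbs = total_bw_gbs / stacks            # ~459 GB/s per stack
pin_rate_gbps = per_stack_gbs * 8 / stack_width_bits

print(round(pin_rate_gbps, 2))  # ~3.58 Gbps per pin, within HBM2E's rated range
```

A per-pin rate just under 3.6Gbps sits comfortably inside what HBM2E supports, so the quoted bandwidth is plausible for that configuration.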

Continue reading: Intel's new Gaudi 3 AI accelerator launched: cheaper than NVIDIA H100 AI GPU, but also slower (full post)

Analyst: Apple has been a 'little disingenuous with its marketing' for AI features on iPhone 16

Anthony Garreffa | Sep 22, 2024 9:11 PM CDT

Apple has released its new iPhone 16 family of handsets, with the iPhone 16, iPhone 16 Pro, and iPhone 16 Pro Max available across the planet right now... but where are all of those AI features the company promised with Apple Intelligence?

The on-device AI functionality has even been advertised on TV, where Apple has Bella Ramsey from The Last of Us looking lost without her iPhone and Apple Intelligence. The Hollywood actor plays someone who hasn't read the script given to her, so she gets a summary of it from Apple Intelligence... as if that's meant to make you buy a new multi-thousand-dollar iPhone 16.

Anyway, analyst Mark Gurman is now talking about the underwhelming AI features in Apple's new iPhone 16 handsets, noting that some of them won't be fully baked until 2025 and some are radically behind competing AI on the market. Gurman said that Apple has been a "little disingenuous with its marketing", as Apple claimed the iPhone 16 was the first model "built from the ground up for Apple Intelligence".

Continue reading: Analyst: Apple has been a 'little disingenuous with its marketing' for AI features on iPhone 16 (full post)

Modder hacks ChatGPT onto a TI-84 calculator: calls it the 'Ultimate Cheating Device'

Anthony Garreffa | Sep 22, 2024 6:10 PM CDT

A modder has created what he calls "The Ultimate Cheating Device": a regular TI-84 calculator hacked to run ChatGPT, perfect for students who want to ninja an AI-powered calculator into class.

YouTuber ChromaLock uploaded a video detailing the mod, which combines hardware modifications with open-source software he wrote for the TI-84, allowing the calculator to run ChatGPT. The modder has uploaded the required software to GitHub under the TI-32 repository, which is described as "a mod for the TI-84 Plus Silver Edition and TI-84 Plus C Silver Edition calculators to give them Internet access and add other features, like test mode breakout and camera support".

Fitting a microcontroller and all of its components inside the TI-84's shell is the hardest step; after that, the software modification is applied. Normally, extra TI-84 software features require the link port to connect to bulky external devices, which would be far too obvious in class... but hardware mods + ChatGPT installed internally? Game changer.

Continue reading: Modder hacks ChatGPT onto a TI-84 calculator: calls it the 'Ultimate Cheating Device' (full post)

Microsoft signs deal with owner of Three Mile Island: nuclear power for its AI data centers

Anthony Garreffa | Sep 20, 2024 5:35 PM CDT

Microsoft has just signed a 20-year deal with Constellation Energy, the owner of the Three Mile Island nuclear plant in Pennsylvania. Once the plant is revived, it will provide clean energy for Microsoft's AI data center and cloud computing needs for 20 years.

Three Mile Island has two nuclear reactors: the first, with a capacity of 906 MW, was shut down back in 1979 after the "Three Mile Island nuclear incident" that I'm sure you've heard of (if not, you should read up on it). The other has a capacity of 819 MW and was closed in 2019 over economic issues, but will now be restarted thanks to the deal with Microsoft.

Constellation Energy will invest $1.6 billion in restarting the Three Mile Island nuclear reactor, a process that had been in development since early 2023 when the company looked at the feasibility of bringing the nuclear reactor back online. After it decided to go ahead and restart the nuclear reactor, it began talking with potential buyers... with Microsoft showing immediate interest, and now the deal is inked (and for 20 years).

Continue reading: Microsoft signs deal with owner of Three Mile Island: nuclear power for its AI data centers (full post)

TikTok parent company ByteDance to have 2 custom AI chips made on TSMC 5nm process in 2026

Anthony Garreffa | Sep 19, 2024 9:33 PM CDT

TikTok parent company ByteDance is developing not one but two new AI GPUs, which are reportedly being made on TSMC's 5nm process node and will enter mass production in 2026.

The report comes from The Information, whose sources say ByteDance wants to reduce its reliance on NVIDIA for AI hardware while staying within the lines of US export regulations. ByteDance's new AI GPUs are in the design phase, with one aimed at AI training and the other at AI inference.

ByteDance's new AI GPUs are said to be made on TSMC's 4N/5N process nodes, similar to the 4NP process node that TSMC uses to make NVIDIA's new Blackwell AI GPUs. The TikTok parent company reportedly spent over $2 billion buying more than 200,000 NVIDIA H20 AI GPUs (at around $10,000 per H20) this year alone, with many of those GPUs not yet delivered by NVIDIA.
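
Those two reported figures are at least internally consistent:

```python
# Cross-checking the reported H20 spend against the reported unit price.
units = 200_000
unit_price_usd = 10_000

total_usd = units * unit_price_usd
print(total_usd)  # 2000000000 -> $2.0 billion, matching "over $2 billion"
```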

Continue reading: TikTok parent company ByteDance to have 2 custom AI chips made on TSMC 5nm process in 2026 (full post)

NVIDIA CEO Jensen Huang says 'We can't do computer graphics anymore' without AI

Kosta Andreadis | Sep 16, 2024 12:02 AM CDT

At the recent Goldman Sachs Communacopia and Technology Conference, NVIDIA CEO Jensen Huang was asked about exciting use cases for AI and responded with a nod to DLSS and other RTX technologies. "In our company, we use it for computer graphics," Jensen replied. "We can't do computer graphics anymore without artificial intelligence."

NVIDIA DLSS, or Deep Learning Super Sampling, is a crucial part of modern PC gaming on GeForce graphics cards, where the AI-powered upscaler boosts performance by generating new pixels that hit a target resolution and frame rate. With the arrival of the GeForce RTX 40 Series, NVIDIA expanded this to add Frame Generation, where AI generates entire frames. Throw in the impressive AI-powered Ray Reconstruction, and it's the reason why games like Black Myth: Wukong are playable with all settings maxed out.

"We compute one pixel, we infer the other 32. I mean, it's incredible," Jensen continues. "And so we hallucinate, if you will, the other 32, and it looks temporally stable, it looks photorealistic, and the image quality is incredible, the performance is incredible, the amount of energy we save - computing one pixel takes a lot of energy. That's computation."

Continue reading: NVIDIA CEO Jensen Huang says 'We can't do computer graphics anymore' without AI (full post)

Larry Ellison, Elon Musk, and Jensen Huang had dinner: both 'begging Jensen for GPUs'

Anthony Garreffa | Sep 14, 2024 6:30 AM CDT

Oracle CTO Larry Ellison recently had dinner with SpaceX and Tesla CEO Elon Musk, as well as NVIDIA CEO Jensen Huang, with Ellison saying both he and Elon were "begging" Jensen for GPUs. Check it out:

Ellison said: "I went to dinner with Elon Musk, Jensen Huang, and I would describe the dinner as Oracle and me and Elon begging Jensen for GPUs. Please take our money... no no, take more of it. You're not taking enough. We need you to take more of our money, please".

It was recently revealed that Oracle will spend over $100 billion on 2000+ data centers in the future, with 130,000+ of NVIDIA's new Blackwell AI GPUs powering its new AI supercluster, and the company has talked about requiring 3 nuclear power plants just to power that supercluster.

Continue reading: Larry Ellison, Elon Musk, and Jensen Huang had dinner: both 'begging Jensen for GPUs' (full post)

NVIDIA CEO Jensen Huang touts the 'beginning of a new industrial revolution'

Jak Connor | Sep 13, 2024 12:01 AM CDT

NVIDIA CEO Jensen Huang has spoken to CNBC after meeting with the Biden administration and tech executives at the White House about the future of AI development and what it will take to sustain it.

Huang joined other major tech executives at the White House who are also assisting in the development of AI, with reports confirming the presence of OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google President Ruth Porat, Amazon's cloud chief Matt Garman, and Microsoft President Brad Smith. On the other side of the fence were government officials from various agencies, such as Commerce Secretary Gina Raimondo, National Security Advisor Jake Sullivan, and Energy Secretary Jennifer Granholm.

What was the meeting about? Huang explained that the conference focused on how the US government could assist in the development of data centers across the US, particularly through initiatives that would help the US maintain its lead in the global AI race. These massive data centers require a significant amount of energy, which was also a topic of the meeting: the Energy Department will help data center owners find clean and reliable power sources, along with new resources in the form of loans, grants, and tax credits.

Continue reading: NVIDIA CEO Jensen Huang touts the 'beginning of a new industrial revolution' (full post)

Meta nearly finished with 100,000+ NVIDIA H100 AI GPU cluster: online in October or November

Anthony Garreffa | Sep 12, 2024 3:03 AM CDT

Meta Platforms is reportedly putting the "final touches" on one of its new AI supercomputers, powered by more than 100,000 NVIDIA H100 AI GPUs.

In a new report from The Information, the new AI supercomputer from Meta will feature 100,000+ NVIDIA H100 AI GPUs and will be located "somewhere" in the US. The new supercomputing cluster will train the next version of Meta's Llama model: Llama 4. Meta's new 100K+ NVIDIA H100 AI supercomputer cluster will be fully completed by October or November, says The Information.

Meta's new AI supercomputer with its 100,000+ NVIDIA H100 AI GPUs reportedly cost over $2 billion for the H100 chips alone, which means Mark Zuckerberg is signing some fat cheques to NVIDIA. Speaking of which, NVIDIA CEO Jensen Huang recently spoke with Meta CEO Mark Zuckerberg, where Jensen said Meta now has 600,000+ NVIDIA H100 AI GPUs, to which Zuck replied that Meta were "good customers for NVIDIA".

Continue reading: Meta nearly finished with 100,000+ NVIDIA H100 AI GPU cluster: online in October or November (full post)

NVIDIA CEO on Blackwell AI GPU demand: it 'is so great, everyone wants to be first'

Anthony Garreffa | Sep 12, 2024 2:33 AM CDT

NVIDIA CEO Jensen Huang has said that there is a mad scramble for companies to get their hands on the limited supply of Blackwell AI GPUs, and that is frustrating some companies, while raising tensions with others.

Jensen Huang was recently speaking to the audience at the Goldman Sachs Group Inc. technology conference in San Francisco, where he said: "The demand on it is so great, and everyone wants to be first and everyone wants to be most. We probably have more emotional customers today. Deservedly so. It's tense. We're trying to do the best we can".

He continued, saying TSMC's "agility and their capability to respond to our needs is just incredible", adding: "And so we use them because they're great, but if necessary, of course, we can always bring up others".

Continue reading: NVIDIA CEO on Blackwell AI GPU demand: it 'is so great, everyone wants to be first' (full post)

Oracle CEO says company is spending $100+ billion on 2000+ data centers, NVIDIA gets 40% of it

Anthony Garreffa | Sep 11, 2024 8:08 PM CDT

Oracle is pushing all-in on the data center market, promising to spend $100+ billion over the next 4 years, with NVIDIA to receive 40% of that $100B as the global leader in AI GPUs.

Right now, Oracle has 162 cloud data centers in operation and under construction across the world, explains Oracle chairman and CTO, Larry Ellison. He said: "the largest of these datacenters is 800 megawatts and will contain acres of NVIDIA GPU Clusters for training large scale AI models".

He continued, adding: "Oracle could operate up to 2000 data centers in the future, a significant increase from the 162 currently in operation". It's not just this news, but the Oracle CEO added: "So we're in the middle of designing a data center that's north of the gigawatt that has -- but we found the location and the power place we look at it, they've already got building permits for 3 nuclear reactors. These are the small modular nuclear reactors to power the data center. This is how crazy it's getting. This is what's going on".

Continue reading: Oracle CEO says company is spending $100+ billion on 2000+ data centers, NVIDIA gets 40% of it (full post)

Oracle to use 130,000+ NVIDIA Blackwell AI GPUs supercluster, powered by 3 nuclear reactors

Anthony Garreffa | Sep 11, 2024 7:27 PM CDT

Oracle has announced it's spending over $100 billion on 2000+ new data centers, expanding on the 160 data centers in operation, with NVIDIA getting 40% of that business for AI hardware. Not only that, but not one, not two, but three nuclear reactors could power the new Blackwell AI GPU supercluster.

NVIDIA and Oracle will be launching zettascale OCI superclusters with over 100,000 AI GPUs, with new infrastructure to accelerate AI training and deployment of generative AI models. NVIDIA GB200 liquid-cooled bare-metal instances for large-scale AI applications will be introduced, with Oracle to offer NVIDIA HGX H200 Tensor Core GPUs, connecting up to 65,536 AI GPUs for real-time inference.

Oracle CEO said: "So we're in the middle of designing a data center that's north of the gigawatt that has -- but we found the location and the power place we look at it, they've already got building permits for 3 nuclear reactors. These are the small modular nuclear reactors to power the data center. This is how crazy it's getting. This is what's going on".

Continue reading: Oracle to use 130,000+ NVIDIA Blackwell AI GPUs supercluster, powered by 3 nuclear reactors (full post)

OpenAI could skyrocket ChatGPT subscription to $2000 per month for next-gen Strawberry AI model

Anthony Garreffa | Sep 10, 2024 6:03 AM CDT

OpenAI is reportedly considering heavily increasing the cost of its subscription-based services for its AI chatbots, with the current $20 monthly fee for its ChatGPT Plus service possibly seeing a new ceiling of $2000 per month for its next-gen AI model Strawberry.

In a new report by The Information, we could see OpenAI charging as much as $2000 per month for its AI chatbots, especially if we see some radical upgrades out of Strawberry, its latest AI model. Its new Strawberry model has been referred to as "GPT-Next" and should roll out before the end of 2024, according to the latest reports (more on that below).

Strawberry will reportedly have "System 2 thinking", allowing GPT-Next to take time to deliberate and reason through problems rather than just predicting longer and longer strings of tokens to complete its responses. System 2 thinking has shown impressive results, scoring over 90% on the MATH benchmark, a collection of advanced mathematical problems.

Continue reading: OpenAI could skyrocket ChatGPT subscription to $2000 per month for next-gen Strawberry AI model (full post)

Rambus unveils industry-first HBM4 controller IP, ready to super-speed next-gen AI workloads

Anthony Garreffa | Sep 10, 2024 4:34 AM CDT

Rambus has just unveiled the industry-first HBM4 controller IP that will accelerate next-generation AI workloads.

The new Rambus HBM4 controller enables a new generation of HBM memory deployments for cutting-edge AI accelerators, graphics, and HPC applications. Rambus' new HBM4 controller supports the JEDEC-specified 6.4Gbps as well as operation at up to 10Gbps, with a throughput of 2.56TB/sec to each memory device. The new Rambus HBM4 controller IP can be paired with third-party or customer PHY solutions to instantiate an HBM4 memory subsystem.
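
The 2.56TB/sec figure falls straight out of HBM4's wider interface - 2048 bits per device, double HBM3's 1024:

```python
# Verifying the quoted per-device throughput at the 10Gbps operating point.
pin_rate_gbps = 10.0
bus_width_bits = 2048        # HBM4 interface width per device (2x HBM3)

throughput_tbs = pin_rate_gbps * bus_width_bits / 8 / 1000
print(throughput_tbs)  # 2.56 (TB/s per memory device)
```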

Neeraj Paliwal, SVP and general manager of Silicon IP, at Rambus, said: "With Large Language Models (LLMs) now exceeding a trillion parameters and continuing to grow, overcoming bottlenecks in memory bandwidth and capacity is mission-critical to meeting the real-time performance requirements of AI training and inference. As the leading silicon IP provider for AI 2.0, we are bringing the industry's first HBM4 Controller IP solution to the market to help our customers unlock breakthrough performance in their state-of-the-art processors and accelerators".

Continue reading: Rambus unveils industry-first HBM4 controller IP, ready to super-speed next-gen AI workloads (full post)

AI mysteriously starts crying out loud like a human, confusing a user

Jak Connor | Sep 10, 2024 12:02 AM CDT

AI music generators have become all the rage since the explosion in popularity of AI-generation tools, but now we are starting to hear the oddities that can come out of AI generation.

A Reddit user has posted a short clip created with the music generation software known as Suno. The user wrote that in the 24-second clip, the AI sounds like it's crying at the end, and that the crying wasn't part of the prompt that created the clip. The user wasn't alone: another commenter noted that an AI-generated song since posted on Spotify features an outburst at the end where listeners can audibly hear "No!".

So, what could be causing these strange and seemingly emotional outbursts? Well, these AI systems are designed to create music based on the keywords provided in the user's prompt. It appears that Suno is attempting to create a human-esque outro, which can often feature fading pieces of audio and sometimes single words or sounds.

Continue reading: AI mysteriously starts crying out loud like a human confusing a user (full post)

YouTube announces new tools to detect AI-generated content

Jak Connor | Sep 8, 2024 10:20 AM CDT

YouTube has announced it's working on a new set of tools designed to detect AI-generated content across its platform, and that these tools have been created with the intention of protecting the likeness of creators.

In a new post on the official YouTube blog, the company's Vice President of Creator Products, Amjad Hanif, explained how the new tools represent YouTube's commitment to "responsible AI development," and part of that is regulating the content on its platform that AI is helping create. The blog post reveals the video platform has created a new tool that is capable of detecting the singing voice of a musician, or the musician's "likeness".

The same principle has been applied to another tool that's designed to identify AI-generated content showing the faces of actors, creators, athletes, political figures, and more. The tools are meant to be guardrails for how YouTube is going to deal with the influx of AI-generated video content on its platform when AI-generation tools eventually make it into more people's daily devices. That isn't to say AI-generated content isn't already being posted on YouTube because it certainly is, and at an increasing rate.

Continue reading: YouTube announces new tools to detect AI-generated content (full post)

TSMC and Samsung co-developing a bufferless HBM4 memory chip, its first partnership in AI

Anthony Garreffa | Sep 6, 2024 8:00 AM CDT

Samsung announced at the SEMICON Taiwan 2024 forum on Thursday that it is partnering with TSMC to co-develop bufferless HBM4 memory chips for future AI chips.

Samsung is the world's largest memory chipmaker, and is partnering with TSMC, the world's largest contract chip manufacturer, with the South Korean and Taiwanese semiconductor giants working together on bufferless HBM4 memory to strengthen their positions in the constantly-evolving AI chip market.

Dan Kochpatcharin, the head of Ecosystem and Alliance Management at TSMC, said during SEMICON Taiwan 2024 that the two companies were developing a bufferless HBM4 chip. Samsung makes its own HBM4, while TSMC has formed a "triangular alliance" with SK hynix and NVIDIA on future HBM and AI designs. SK hynix is second to Samsung in memory (and also native to South Korea), which makes this new development between Samsung and TSMC very interesting.

Continue reading: TSMC and Samsung co-developing a bufferless HBM4 memory chip, its first partnership in AI (full post)

China has 'renting services' for NVIDIA AI GPUs, cheaper than the US at as low as $6 per hour

Anthony Garreffa | Sep 6, 2024 7:19 AM CDT

Chinese cloud service providers have "renting services" where they rent out their hardware stack, with prices that are radically cheaper than those available in the United States... as low as $6 per hour.

The Financial Times reports that Chinese CSPs (cloud service providers) are renting out their AI GPU hardware, with small Chinese CSPs offering companies an AI server packing 8 x NVIDIA A100 AI GPUs for around $6 per hour. In the US, the equivalent would cost you around $10 per hour.

US sanctions aside, NVIDIA's newer H100 and older A100 AI GPUs are easily available in China, which is why rental costs are so much lower than in other regions. It's estimated that over 100,000 NVIDIA H100 AI GPUs are in China right now, openly sold on Chinese marketplaces and smuggled all across the country, according to one Chinese startup founder.
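
For reference, the quoted hourly rates work out to roughly a two-thirds US premium:

```python
# Comparing the reported hourly rental rates for an 8 x A100 AI server.
china_rate_usd = 6.0
us_rate_usd = 10.0

premium_pct = (us_rate_usd - china_rate_usd) / china_rate_usd * 100
print(round(premium_pct))  # 67 -> the US rate is ~67% higher than China's
```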

Continue reading: China has 'renting services' for NVIDIA AI GPUs, cheaper than the US at as low as $6 per hour (full post)
