Artificial Intelligence - Page 33

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.

Your next Taco Bell drive-thru order may be taken by an AI not a human

Jak Connor | Aug 1, 2024 10:01 AM CDT

It's only a matter of time before artificial intelligence-powered devices integrate into different facets of society, and considering these impressive AI tools are powered by Large Language Models (LLMs), it only makes sense the integration begins with roles that require communication with humans.

One of the fundamental aspects of LLMs and, by extension, these AI-powered chatbots is to bridge the gap between human and machine communication. Microsoft recently revealed it created an AI model that was so powerful that it could perfectly replicate the sound of an individual's voice to the point where it was indistinguishable from the human-generated voice. Microsoft deemed the technology too dangerous to release to the public.

On the other end of the spectrum, AI voice generators could be capable of replacing a monotonous job such as a drive-thru server. Taco Bell is seemingly one of the first to take this step, as the fast-food chain revealed in a new press release that its parent company, Yum! Brands, has been testing "Voice AI" technology at more than 100 Taco Bell drive-thru locations across the US. The parent company plans on bringing the technology to "hundreds" more locations.

Continue reading: Your next Taco Bell drive-thru order may be taken by an AI not a human (full post)

Activision made this epic Call of Duty: Warzone map open-source and available for AI training

Kosta Andreadis | Aug 1, 2024 1:58 AM CDT

To "expand the knowledge base of the gaming industry" and for non-commercial use and education, Activision has made the giant Caldera map from Call of Duty: Warzone open-source. The publisher and developer says this is a first-of-its-kind "data" release for Call of Duty and gaming, available in OpenUSD. It includes the geometry from the map plus "time samples showing how players move around the map."

Okay, so at this point, you're probably wondering what unlimited access to a giant Call of Duty battle royale map brings to the table. Well, for one, it's a great tool for machine learning, research, and education in various fields while also helping to advance game development.

"In an era where AI training and the evolution of authoring tools are pivotal, the availability of production-proven maps is crucial," Activision explains. On that note, the Call of Duty: Warzone Caldera map is the most extensive and complex bits of environmental geometry ever released as an open-source data set.

Continue reading: Activision made this epic Call of Duty: Warzone map open-source and available for AI training (full post)

SK hynix to showcase its next-gen AI memory products like 12-layer HBM3E at FMS 2024

Anthony Garreffa | Jul 31, 2024 8:36 PM CDT

SK hynix has announced it will be attending FMS 2024, a global semiconductor memory event held in Santa Clara, California, where it will showcase advancements in its memory technologies and products and present "future visions" in the AI space.

The Future of Memory and Storage (FMS) event, formerly known as the Flash Memory Summit and aimed mainly at NAND providers, was rebranded this year to invite a wider range of participants, including DRAM and storage providers, thanks to the rocket-fueled demand for AI hardware.

SK hynix will give a keynote speech at FMS 2024 next week, where it will promote its competitiveness in leading the AI memory industry, just "as it did with the announcement of the development of the industry's highest 321-layer NAND at FMS last year".

Continue reading: SK hynix to showcase its next-gen AI memory products like 12-layer HBM3E at FMS 2024 (full post)

'Friend' the new $99 wearable AI that can bully you when you're down

Jak Connor | Jul 31, 2024 7:53 AM CDT

With the rise of artificial intelligence products, we are starting to see the first companies trying to implement AI into wearable hardware.

The first few iterations of this technology combination came in the form of the $699 Humane AI Pin, a device pinned onto a user's shirt, and the Rabbit R1, another handheld AI device. Neither attracted much attention, but the space of wearable AI devices is still very new, and we have just gotten our first look at a potential new contender. Introducing Friend, the wearable AI device that is capable of mocking you.

While that may sound like a joke, it certainly isn't: the promotional video for the product, which at the time of writing has exceeded 90,000 views from a channel with fewer than 500 subscribers, contains a scene where the AI mocks the wearer. So, here's how it works. Instead of being a pin, Friend is a medallion attached to a lanyard and worn around the wearer's neck.

Continue reading: 'Friend' the new $99 wearable AI that can bully you when you're down (full post)

Microsoft officially denounces AI deepfake abuse with a plea to the US government

Jak Connor | Jul 31, 2024 6:02 AM CDT

Microsoft has publicly denounced AI-generation tools being used to create deepfake images that are then used to commit crimes such as fraud, abuse, and manipulation.

Unfortunately, the demographics most victimized by this form of abuse are children and the elderly, and according to a recent blog post by Microsoft Vice Chair and President Brad Smith, the US government needs to step in and implement new regulations that hold the creators of nefarious deepfake content accountable for their actions.

Smith explains that AI-generated deepfakes are realistic and extremely easy for anyone to make. Unfortunately, due to this accessibility, the technology, while built to support research and assist in people's workflows and projects, is increasingly being used to commit fraud, abuse, and other crimes. Smith not only called on regulators to pass new laws protecting victims of AI deepfakes, but also urged the private sector to acknowledge its responsibility to "prevent the misuse of AI."

Continue reading: Microsoft officially denounces AI deepfake abuse with a plea to the US government (full post)

Samsung still needs to wait 2-4 months for its HBM3E memory to be approved for NVIDIA AI GPUs

Anthony Garreffa | Jul 31, 2024 3:33 AM CDT

Samsung has been hitting problem after problem developing new HBM memory chips for the ever-expanding HBM market, and the South Korean giant still needs another 2-4 months before NVIDIA approves its new HBM3E memory for use in its AI GPUs.

According to a new Bloomberg report, Samsung has made "important headway in its comeback, including winning the long-awaited approval" from NVIDIA for its HBM3 memory to be used in NVIDIA's leading AI GPUs.

Samsung now anticipates approval for its next-gen HBM3E memory in the next 2-4 months, according to people familiar with the matter who "asked not to be identified discussing internal developments".

Continue reading: Samsung still needs to wait 2-4 months for its HBM3E memory to be approved for NVIDIA AI GPUs (full post)

NVIDIA CEO says Meta has 600,000 H100 AI GPUs, Meta are 'good customers for NVIDIA' says Zuck

Anthony Garreffa | Jul 31, 2024 1:44 AM CDT

Meta's long-term vision for AGI (artificial general intelligence) involves the use of 600,000 x NVIDIA H100 AI GPUs, something NVIDIA CEO Jensen Huang teased Meta CEO Mark Zuckerberg about at SIGGRAPH 2024 this week.

In the video, NVIDIA CEO Jensen Huang says that Meta is "coming up on 100K H100s," to which Zuck replies that yeah, they're "good customers" for NVIDIA and "that's why you invited me to this Q&A".

600,000 x NVIDIA H100 AI GPUs at an average cost of $30,000 each adds up to a total of $18 billion in AI GPU purchases from Meta to NVIDIA alone. We've seen gigantic GPU clusters from Elon Musk and his xAI startup, with its new Memphis Supercluster needing some insane portable power generators just to get 32,000 x NVIDIA H100 AI GPUs operational in the cluster.
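
As a quick sanity check, the arithmetic is simple (a back-of-the-envelope sketch; the $30,000 unit price is only an assumed average, and real pricing varies by configuration and volume):

```python
# Rough total spend for Meta's reported H100 fleet.
gpu_count = 600_000        # H100 AI GPUs attributed to Meta
avg_unit_price = 30_000    # USD, assumed average price per GPU

total_usd = gpu_count * avg_unit_price
print(f"${total_usd / 1e9:.0f} billion")  # → $18 billion
```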

Continue reading: NVIDIA CEO says Meta has 600,000 H100 AI GPUs, Meta are 'good customers for NVIDIA' says Zuck (full post)

NVIDIA starts sampling next-gen Blackwell AI GPUs, mass production is still on track

Anthony Garreffa | Jul 30, 2024 7:02 PM CDT

NVIDIA CEO Jensen Huang has confirmed that engineering samples of its next-gen Blackwell AI GPUs will be sent out "all over the world".

The news comes from NVIDIA CEO Jensen Huang himself during a chat at SIGGRAPH 2024 this week, with Jensen saying: "This week, we are sending out engineering samples of Blackwell all over the world".

SIGGRAPH is tailored more towards the "software side of things," so NVIDIA didn't go into the nitty-gritty of its AI hardware, but it did confirm that engineering samples of Blackwell are headed out this week. That's a very good sign, as it shows that Blackwell AI GPUs are very, very close to being in customers' hands.

Continue reading: NVIDIA starts sampling next-gen Blackwell AI GPUs, mass production is still on track (full post)

SK hynix says HBM3E expected to make up more than half of HBM shipments in 2024

Anthony Garreffa | Jul 30, 2024 8:26 AM CDT

SK hynix has announced that its new fifth-generation HBM3E memory is expected to make up over half of its HBM shipments in 2024.

During the company's Q2 2024 earnings call on July 25, SK hynix vice president and chief financial officer Kim Woo-hyun said: "We significantly expanded HBM3E shipments in the second quarter as demand was in full swing. In the third quarter, HBM3E shipments will significantly exceed HBM3 shipments, and we expect HBM3E to account for more than half of our total HBM shipments in 2024".

He continued: "We have already provided HBM3E 12-layer product samples to major customers and will start volume production in the third quarter as planned. With a full product portfolio from HBM2E to HBM3E 12-layer, SK hynix plans to continue its competitive advantage in the HBM market".

Continue reading: SK hynix says HBM3E expected to make up more than half of HBM shipments in 2024 (full post)

NVIDIA CEO Jensen Huang on next wave of AI: physical AI, has a 'three-body problem'

Anthony Garreffa | Jul 30, 2024 2:22 AM CDT

NVIDIA CEO Jensen Huang has said that the next wave of AI is "physical AI" which will require three computer systems to make happen: AI, robotics, and Omniverse.

At the recent SIGGRAPH 2024 event, NVIDIA CEO Jensen Huang discussed the next wave of AI, something he calls "physical AI," saying it currently poses a three-computer problem, or a three-body problem (shout out to the Netflix series "3 Body Problem," which is fantastic).

Jensen said: "Generative AI, the first wave of, it, of course, is all the pioneers. And we know many of the pioneers: OpenAI, Anthropic, Google, Microsoft, a whole bunch of amazing doing this. X is doing this. xAI is doing this. Amazing companies doing this. The next wave of AI, we did talk about, which is enterprise".

Continue reading: NVIDIA CEO Jensen Huang on next wave of AI: physical AI, has a 'three-body problem' (full post)

Meta's AI Studio lets you create AI friends or an AI twin that can post, chat, and respond

Kosta Andreadis | Jul 30, 2024 1:56 AM CDT

This is either a sign of the end times or the latest example of AI moving at a pace that is almost impossible for people to predict. With the recent launch of Meta's powerful open-source Llama 3.1 AI model, the company hasn't skipped a beat and is currently rolling out its new AI Studio tool in the US.

What is AI Studio? Well, it's described as "a place for people to create, share, and discover AIs to chat with," with no tech skills required. Integrated into Instagram, Messenger, and WhatsApp, these AIs are custom chatbots with a twist. They're your online friends, community, specialists, or even digital twins that can be trained to become you - respond to messages, post content, and even "generate memes."

"You can use a wide variety of prompt templates or start from scratch to make an AI that teaches you how to cook, helps you with your Instagram captions, or generates memes to make your friends laugh - the possibilities are endless," Meta writes in the announcement post.

Continue reading: Meta's AI Studio lets you create AI friends or an AI twin that can post, chat, and respond (full post)

NVIDIA CEO believes 'Everybody will have an AI assistant' and it will transform every job

Kosta Andreadis | Jul 30, 2024 12:58 AM CDT

"Everybody will have an AI assistant," NVIDIA CEO Jensen Huang said at SIGGRAPH 2024. "Every single company, every single job within the company, will have AI assistance." This is a bold statement, to be sure, but not a surprising one considering the state of the industry.

SIGGRAPH is a professional graphics conference, and yes, this year, AI was not only on the menu but also included in every dish. From new microservices for 3D modeling to physics, materials, and robotics, generative AI is driving innovation. At SIGGRAPH, NVIDIA also announced that the world's largest advertising company was using generative AI as part of the Omniverse to create content for Coca-Cola - arguably the gold standard for brand advertising.

So, where does the AI assistant fit in? At this year's show, NVIDIA discussed the concept of digital agents, which are digital AIs trained on specific data. For example, an AI modeled after everything you've ever written, said, or done at work (that is measurable) could then become a personal AI assistant.

Continue reading: NVIDIA CEO believes 'Everybody will have an AI assistant' and it will transform every job (full post)

Apple did NOT use any NVIDIA AI GPUs to train its AI models, used Google TPU chips

Anthony Garreffa | Jul 29, 2024 10:20 PM CDT

Apple has said it used Google TPU chips to train the AI models that will power its upcoming suite of Apple Intelligence tools and features.

In a new research paper from Apple, the company detailed the hardware and software infrastructure of its AI tools and features without any mention of NVIDIA hardware whatsoever. Apple said in its research paper that to train its new AI models, it used two different TPUs from Google that are organized in large clusters of chips.

Apple used 2048 x TPUv5p chips from Google for the AI model that will run on the iPhone and other devices, while it used 8192 x TPUv4 chips for its server AI model. NVIDIA doesn't design TPUs; instead, it makes GPUs for gaming, workstations, AI, and more.

Continue reading: Apple did NOT use any NVIDIA AI GPUs to train its AI models, used Google TPU chips (full post)

Researchers tease CRAM tech: over 1000x reduction in AI processing energy requirements

Anthony Garreffa | Jul 29, 2024 9:56 PM CDT

The power required to run massive clusters of high-performance AI GPUs continues to skyrocket along with the power of the AI chips themselves, but new research demonstrates a technology that could reduce the energy consumed by AI processing by at least 1000x.

In a new peer-reviewed paper, a group of engineering researchers at the University of Minnesota Twin Cities has demonstrated an AI efficiency-boosting technology that is, in layman's terms, a shortcut around the regular practice of AI computation, one that massively reduces the energy consumed by those workloads.

AI computing constantly moves data between the components that process it (logic) and the components that store it (memory and storage). According to this research, shuttling that data back and forth is the dominant factor in power consumption, using 200x more energy than the computation itself.
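
To put that ratio in perspective, a quick sketch (hypothetical split; only the 200x movement-vs-compute ratio comes from the research):

```python
# If moving data costs 200x the energy of the computation itself,
# actual compute is a tiny sliver of the total energy budget.
movement_ratio = 200                      # energy of data movement vs. compute
compute_share = 1 / (1 + movement_ratio)  # compute's fraction of total energy
print(f"Compute is only {compute_share:.2%} of the energy budget")  # roughly 0.5%
```

Eliminating that data movement, which is what compute-in-memory approaches like CRAM aim to do, is where the claimed energy savings come from.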

Continue reading: Researchers tease CRAM tech: over 1000x reduction in AI processing energy requirements (full post)

OpenAI is in talks with a chip maker that is bigger than AMD and Intel combined

Jak Connor | Jul 29, 2024 5:35 AM CDT

NVIDIA is the company powering the tech industry's massive push into artificial intelligence-powered systems, as the green team is making the incredible hardware that makes it possible for these impressive tools and features to exist.

NVIDIA's dominance in this market was achieved by providing the best hardware for training AI systems, which briefly made the green team the most valuable company on the planet, taking the crown from long-standing tech giants such as Amazon and Microsoft. NVIDIA has since moved down to third place but remains the dominant player in the world of AI-focused microprocessors. With the push into AI, many developers want to continue training their creations but don't necessarily want to rely on, or keep fueling, the massive beast that is NVIDIA.

A new report from The Information provides an example: Microsoft and OpenAI are in talks with several chip designers to create a new AI chip to rival NVIDIA. One of those companies is Broadcom, which is ranked the 13th most valuable company in the world for its solutions in semiconductors and software infrastructure.

Continue reading: OpenAI is in talks with a chip maker that is bigger than AMD and Intel combined (full post)

Elon Musk and X's Grok AI now scrapes every post from every user unless they opt out

Kosta Andreadis | Jul 29, 2024 5:02 AM CDT

X, formerly known as Twitter and the digital echo chamber for Elon Musk and his politics, also has a powerful AI chatbot called Grok. Created by xAI with the help of hundreds of thousands of high-powered NVIDIA GPUs, Grok is described as an AI with a "rebellious streak" that will deliver candid, unfiltered responses.

X recently updated its terms and settings for all users. By default, it uses all X data for training. Grok now has access to everybody's posts, including yours, if you're on X. This move follows Meta and is understandable, given that massive amounts of raw data are a key ingredient for training and creating complex AI models like Grok.

Several AI companies and models have been under fire lately, with reports indicating that some have been scraping YouTube and other public forums to train AI. According to a Microsoft executive, if it's online, it's free to scrape. So, yes, X, Elon, and Zuckerberg are not alone in looking to social media platforms for AI training. The good news is that you can opt out.

Continue reading: Elon Musk and X's Grok AI now scrapes every post from every user unless they opt out (full post)

Meta's huge 16,384 NVIDIA H100 AI GPU cluster: HBM3 memory crashed half of Llama 3 training

Anthony Garreffa | Jul 28, 2024 11:56 PM CDT

Meta has been training its new Llama 3 405B model on a cluster of 16,384 x NVIDIA H100 80GB AI GPUs. Half of the issues during its 54-day training run were caused by the onboard HBM3 memory.

Meta released a new study detailing its Llama 3 405B model training, which took 54 days on the 16,384 x NVIDIA H100 AI GPU cluster. During that time, 419 unexpected component failures occurred, an average of one failure every 3 hours. In half of those failures, GPUs or their onboard HBM3 memory were to blame.
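
Those figures are consistent: a quick mean-time-between-failures check using only the numbers quoted above:

```python
# MTBF for the 54-day Llama 3 405B training run.
training_days = 54
failures = 419

total_hours = training_days * 24          # 1,296 hours of training
mtbf_hours = total_hours / failures
print(f"One failure every {mtbf_hours:.1f} hours")  # → One failure every 3.1 hours
```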

With truckloads of components like CPUs, motherboards, RAM, SSDs, GPUs, power systems, and cooling systems, a supercomputer is an exotic and ultimately powerful machine, and it's completely normal for issues to arise every few hours. What matters is how developers work through those issues and keep the system operational no matter what local breakdowns occur.

Continue reading: Meta's huge 16,384 NVIDIA H100 AI GPU cluster: HBM3 memory crashed half of Llama 3 training (full post)

AMD Amuse is a new AI image generation tool that runs locally on Ryzen and Radeon PCs

Kosta Andreadis | Jul 28, 2024 10:28 PM CDT

AMD has introduced Amuse 2.0, a new AI image generation tool currently in beta. Amuse is a fully local experience, meaning it doesn't require plugging into the cloud. It requires either an AMD Ryzen AI 300 Series processor or a Radeon RX 7000 Series graphics card to run. Thanks to their XDNA AI architecture, it also runs on systems with AMD's mobile Ryzen 8040 processors.

This is important because Amuse includes AMD XDNA Super Resolution integration, which upscales lower-resolution images on AMD mobile devices. According to AMD, it "increases output size by 2X at the end of the image generation stage."

As seen in other AI image generation tools, Amuse uses Stable Diffusion models from Stability AI to create its images. Amuse takes the widely used and available Stable Diffusion AI models and wraps them in a "painless, easy to use and optimized end-user experience" for its customers. In creating Amuse, AMD partnered with New Zealand-based TensorStack to help develop the user-friendly UI.

Continue reading: AMD Amuse is a new AI image generation tool that runs locally on Ryzen and Radeon PCs (full post)

Amazon is 'racing' to make next-gen AI chips faster, and cheaper than NVIDIA

Anthony Garreffa | Jul 28, 2024 4:44 AM CDT

Amazon is currently letting around half a dozen engineers put a "closely guarded" new server design through its paces, according to the latest reports.

In a new article from Reuters, the outlet reports that the server in question was "packed" with Amazon's artificial intelligence (AI) chips, which will compete with the likes of NVIDIA and its market-leading AI GPUs. The news comes directly from Amazon executive Rami Sinno, who spoke to Reuters on Friday during a visit to the Amazon AI chip lab.

Amazon is developing its own AI processors to limit its future reliance on more expensive NVIDIA AI GPU offerings and the so-called NVIDIA tax. Amazon uses AI all across Amazon Web Services (AWS), with the company planning to spend $100 billion on AI data centers in the future.

Continue reading: Amazon is 'racing' to make next-gen AI chips faster, and cheaper than NVIDIA (full post)

US DOE wants to make Discovery: world's fastest supercomputer, 3-5x faster than Frontier

Anthony Garreffa | Jul 28, 2024 4:20 AM CDT

The US Department of Energy (DoE) is working on its next-generation Discovery supercomputer, which will be a whopping 3-5x faster than its existing Frontier supercomputer.

The DoE has put out requests for proposals for its new Discovery supercomputer, with interested parties having until August 30, 2024, to submit their proposals. The next-generation Discovery supercomputer would be delivered to the Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) in Tennessee by early 2028.

Discovery will succeed Frontier, which was ranked #1 on the biannual Top500 list of the world's fastest supercomputers for the fifth consecutive time in May 2024.

Continue reading: US DOE wants to make Discovery: world's fastest supercomputer, 3-5x faster than Frontier (full post)
