Artificial Intelligence - Page 46

Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology. - Page 46

ZOTAC unveils new AI-powered ZBOX Mini PCs with Intel and AMD AI CPU options

Anthony Garreffa | Mar 28, 2024 7:32 PM CDT

ZOTAC has just announced three brand-new compact form-factor Mini AI PC systems powered by the latest processors and NPUs for AI workloads from Intel and AMD.

The new ZOTAC Mini AI PC systems feature Intel Core Ultra "Meteor Lake" and AMD Ryzen 7840HS "Hawk Point" APUs, both with integrated NPUs (Neural Processing Units) that are used for AI workloads. First, we've got ZOTAC's new ZBOX M Series PC with Intel's latest Core Ultra 7 155H and Core Ultra 5 125H "Meteor Lake" CPUs.

The ZOTAC ZBOX Edge MI672 and MI652 feature a sleek low-profile design, and thanks to the LPE cores inside the Meteor Lake CPUs, they're also power efficient. Intel's integrated Arc graphics pack up to 2x the performance of previous-gen chips, so you can enjoy some light gaming on the ZBOX Edge MI672/MI652 Mini AI PC systems.

Continue reading: ZOTAC unveils new AI-powered ZBOX Mini PCs with Intel and AMD AI CPU options (full post)

Scientists are using AI to make beer taste even better

Jak Connor | Mar 28, 2024 3:01 AM CDT

Artificial intelligence is being used around the world to create some pretty incredible things, such as photorealistic video from text prompts, but it's also being used to make beer taste even better than it already does.

A new study published in the scientific journal Nature details Belgian researchers taking a machine learning model and feeding it 180,000 online beer reviews, along with feedback from a panel of 16 people, to create a new AI system that is capable of predicting how to make beer taste as good as possible. The panel sampled 250 beers for 50 attributes over three years, taking into account variables such as bitterness, sweetness, alcohol content, and malt aroma.

The newly trained model was then asked to improve the taste of beer by providing the best composition. The team of researchers then made changes to already commercially available beers before they were given to the sampling panel. The panel responded by giving the AI-altered beer a much higher score. It should be noted that creating beer is much more than just identifying the best ingredients, as the skill of the brewer is a massive factor in the end result.

Continue reading: Scientists are using AI to make beer taste even better (full post)

Quanta Computer to make NVIDIA GB200-based AI servers for Google, Amazon, and Meta

Anthony Garreffa | Mar 28, 2024 12:32 AM CDT

Quanta Computer is one of the largest OEM suppliers in the world, with new contracts won to build NVIDIA GB200 AI systems for the likes of Google, Amazon AWS, Meta, and some B200-based AI systems for Microsoft.

The company will have its first GB200 AI servers in testing in July or August "at the earliest," reports UDN, with mass production expected in September. Quanta holds "large OEM orders" for GB200 servers from Google, Amazon AWS, and Meta, which are supplied as complete AI cabinets. Microsoft has ordered some B200 servers, meaning Quanta is building next-gen AI systems for all four major US cloud providers in one set of orders.

NVIDIA's new GB200 cabinet AI servers cost around $2-3 million each, so we can expect some major revenues for NVIDIA in the second half of this year once these orders are processed. UDN reports that Quanta is "optimistic" that as a shortage of materials in the supply chain gets better, AI server shipments will increase as soon as May or June, while the second half of 2024 is expected to be an "explosive period".

Continue reading: Quanta Computer to make NVIDIA GB200-based AI servers for Google, Amazon, and Meta (full post)

Microsoft's Copilot AI will run locally on AI PCs that have at least 40 TOPS of NPU performance

Kosta Andreadis | Mar 27, 2024 9:02 PM CDT

We already know that in the age of the Windows AI PC, devices are set to arrive with dedicated Copilot keys. Microsoft's AI is on track to be integrated with all parts of the operating system. Today, Intel executives at Intel's AI Summit in Taipei confirmed that Copilot AI services will soon run locally on PCs - as long as they meet a certain performance threshold.

That threshold is 40 TOPS of performance on the Neural Processing Unit (NPU) found in an AI PC. Based on this information, which Tom's Hardware confirmed, the definition of an AI PC now has a specific hardware performance baseline. Anything below 40 TOPS will still be able to run Copilot AI tasks and processes in the cloud, but the threshold will serve as a way to differentiate hardware-based 'AI PCs' from cloud-only devices.

Though it might take a generation or two to get there, Intel says it has next-gen products lined up that will fall into this 40 TOPS performance category. Currently, the Meteor Lake NPU in Core Ultra chips offers only 10 TOPS of performance, with AMD's Ryzen "Hawk Point" chips offering 16 TOPS.
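The threshold described above boils down to a simple check. The sketch below is a hypothetical illustration, not Microsoft's actual logic; only the TOPS figures come from this article, and the function and device names are assumptions:

```python
# Hypothetical sketch of the 40 TOPS "AI PC" gate described above.
# The NPU TOPS figures are the ones quoted in this article.
COPILOT_LOCAL_THRESHOLD_TOPS = 40

npus = {
    "Intel Meteor Lake (Core Ultra)": 10,
    "AMD Ryzen Hawk Point": 16,
}

def copilot_mode(npu_tops: int) -> str:
    """Return where Copilot tasks would run for a given NPU rating."""
    return "local" if npu_tops >= COPILOT_LOCAL_THRESHOLD_TOPS else "cloud"

for name, tops in npus.items():
    print(f"{name}: {tops} TOPS -> Copilot runs in the {copilot_mode(tops)}")
```

As the loop shows, neither current-gen NPU clears the bar, which matches Intel's comment that it will take a generation or two of new products.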

Continue reading: Microsoft's Copilot AI will run locally on AI PCs that have at least 40 TOPS of NPU performance (full post)

NVIDIA's new Hopper H200 AI GPU tested: 3x faster GenAI with TensorRT-LLM in MLPerf 4.0 results

Anthony Garreffa | Mar 27, 2024 8:32 PM CDT

NVIDIA might have just announced its next-generation Blackwell B200 AI GPU, but the beefed-up Hopper H200 AI GPU is smashing performance records in the very latest MLPerf 4.0 results.

NVIDIA's optimizations to TensorRT-LLM have delivered a steady stream of performance gains since the company released the AI software suite last year. There were major performance increases from the MLPerf 3.1 results to MLPerf 4.0, with NVIDIA amplifying Hopper's AI performance.

Using these new TensorRT-LLM optimizations, NVIDIA has pulled out a huge 2.4x performance leap with its current H100 AI GPU from MLPerf Inference 3.1 to 4.0 in GPT-J tests using an offline scenario. In server-based scenarios using GPT-J, the H100 saw a huge 2.9x increase from MLPerf 3.1 to 4.0.

Continue reading: NVIDIA's new Hopper H200 AI GPU tested: 3x faster GenAI with TensorRT-LLM in MLPerf 4.0 results (full post)

OpenAI Sora video tool large-scale deployment uses 720,000 NVIDIA H100 GPUs worth $21.6 billion

Anthony Garreffa | Mar 26, 2024 9:33 PM CDT

OpenAI's impressive new text-to-video tool, Sora, loves some GPU compute power. New numbers from Factorial Funds estimate that 720,000 x NVIDIA H100 AI GPUs would be needed for peak times on Sora.

720,000 NVIDIA AI GPUs is a monumental amount of AI computing power. With each costing around $30,000, that's $21.6 billion. Not only is it a mountain of money, but the power draw at 700W per GPU is astounding too, totaling 504,000,000W (504MW). Yeah, that's a lot of power.

Factorial Funds estimated that Sora used between 4,200 and 10,500 NVIDIA H100 AI GPUs for one month, with a single H100 AI GPU capable of generating a one-minute video in about 12 minutes, or around 5 x one-minute videos per hour.
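Those estimates can be sanity-checked with some quick back-of-the-envelope math, using only the figures quoted above:

```python
# Back-of-the-envelope math on the Factorial Funds estimates above.
gpus = 720_000
price_per_gpu = 30_000   # USD, approximate H100 price
watts_per_gpu = 700      # H100 power draw

total_cost = gpus * price_per_gpu     # total hardware outlay
total_power_w = gpus * watts_per_gpu  # peak power draw

minutes_per_video = 12                          # one-minute clip on a single H100
videos_per_gpu_hour = 60 // minutes_per_video   # clips one GPU makes per hour

print(f"Total cost: ${total_cost / 1e9:.1f} billion")   # $21.6 billion
print(f"Total power: {total_power_w / 1e6:.0f} MW")     # 504 MW
print(f"Videos per GPU per hour: {videos_per_gpu_hour}")  # 5
```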

Continue reading: OpenAI Sora video tool large-scale deployment uses 720,000 NVIDIA H100 GPUs worth $21.6 billion (full post)

OpenAI releases stunning Sora-created videos made by artists and directors

Jak Connor | Mar 26, 2024 8:31 AM CDT

Since OpenAI announced its new AI video generation platform, Sora, the company has been slowly releasing videos created by its new model, increasing the hype surrounding the tool before it becomes available to the public.

OpenAI has now taken to its blog to share a new selection of Sora-created videos, but this time around they have been made by a select group of filmmakers, artists, advertising agencies, and musicians. OpenAI is attempting to demonstrate Sora's capabilities when it's in the hands of creatives, and how the upcoming text-to-video tool can help bring their ideas to reality.

As you can probably imagine, all of the lucky individuals who were able to use Sora praised the capabilities of the new tool. Notably, there was no mention of what video content Sora was trained on to be able to create the new footage.

Continue reading: OpenAI releases stunning Sora-created videos made by artists and directors (full post)

3DMark creators have a new $5000 AI Image Generation Benchmark tool to test GPU performance

Kosta Andreadis | Mar 26, 2024 7:33 AM CDT

UL Solutions, the creators of the popular 3DMark benchmark suite for PC gamers, is expanding its professional range of UL Procyon benchmarks with the arrival of the Procyon AI Image Generation Benchmark. Built around the Stable Diffusion AI model, this new benchmark measures the generative AI performance of a modern GPU.

In the benchmark results, part of UL's announcement, we see the GeForce RTX 4060 Ti deliver a score of 1080 - not that we have any reference to know how good or bad that is. Regardless, it's a powerful tool that supports multiple inference engines (Intel OpenVINO, NVIDIA TensorRT, and ONNX runtime with DirectML).

It also measures CPU and GPU temperatures, clock speeds, and usage. However, don't expect this to appear on Steam like 3DMark. Procyon AI Image Generation Benchmark is aimed at the professional space; it costs $5000 USD for an annual site license.

Continue reading: 3DMark creators have a new $5000 AI Image Generation Benchmark tool to test GPU performance (full post)

NVIDIA's LATTE3D generative AI model creates game-ready 3D models and objects in seconds

Kosta Andreadis | Mar 26, 2024 6:32 AM CDT

NVIDIA showcased a new AI-powered technology that could revolutionize game development and the PC modding community. Described as a "virtual 3D printer," LATTE3D turns text prompts into 3D objects within seconds. The company notes that the 3D creations are generated in a popular format that can be slotted into virtual environments for games or other applications with a few clicks.

"A year ago, it took an hour for AI models to generate 3D visuals of this quality - and the current state of the art is around 10 to 12 seconds," said Sanja Fidler, vice president of AI research at NVIDIA. "We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries."

NVIDIA adds that by running on a single GPU like the NVIDIA RTX A6000, 3D shapes, animals, and objects are created instantly.

Continue reading: NVIDIA's LATTE3D generative AI model creates game-ready 3D models and objects in seconds (full post)

Google, Intel, Qualcomm fighting NVIDIA's dominance on AI GPUs, with CUDA software alternative

Anthony Garreffa | Mar 25, 2024 11:55 PM CDT

NVIDIA is absolutely dominating the AI GPU business, with a purported 90% of the AI market taken by Team Green, but now a coalition of tech companies including Google, Intel, and Qualcomm is fighting back... they want to take CUDA down a notch or three.

In a new report, Reuters says that Google, Intel, and Qualcomm have "plans to loosen NVIDIA's chokehold by going after the chip giant's secret weapon: the software that keeps developers tied to NVIDIA chips," which is CUDA. Reuters continues, adding that "they are part of an expanding group of financiers and companies hacking away at NVIDIA's dominance in AI".

Vinesh Sukumar, Qualcomm's head of AI and machine learning, told Reuters: "We're actually showing developers how you migrate out from an NVIDIA platform".

Continue reading: Google, Intel, Qualcomm fighting NVIDIA's dominance on AI GPUs, with CUDA software alternative (full post)

TinyCorp's new TinyBox AI system: AMD AI GPU system starts at $15K, NVIDIA GPU starts at $25K

Anthony Garreffa | Mar 25, 2024 11:33 PM CDT

TinyCorp has just announced that it will offer new TinyBox AI systems with either AMD or NVIDIA AI hardware inside. AMD systems will start at $15,000, while NVIDIA hardware inside will start at $25,000.

The company offers its new TinyBox AI systems with 6 x AMD Radeon RX 7900 XTX graphics cards inside, starting at $15,000 per system. Meanwhile, TinyCorp offers a TinyBox AI system with 6 x NVIDIA GeForce RTX 4090 graphics cards, with this monster AI system starting at $25,000.

TinyCorp has been experiencing issues with AMD hardware, posting about its problems and its new AI systems on X. The company had planned to sell only AMD-powered systems, but due to various issues, it has been forced to also offer NVIDIA GPUs.

Continue reading: TinyCorp's new TinyBox AI system: AMD AI GPU system starts at $15K, NVIDIA GPU starts at $25K (full post)

Broadcom shows off absolutely gigantic AI chip, new XPU design for 'consumer AI company'

Anthony Garreffa | Mar 25, 2024 6:03 PM CDT

Broadcom has been silently working on what appears to be one of the largest processors ever made, with 12 stacks of HBM memory, making this mystery XPU a bigger beast than NVIDIA's just-announced Blackwell B200 AI GPU.

New photos were posted to X by our friend Patrick Moorhead, founder of the top-ranked technology analyst and advisory firm Moor Insights & Strategy. He snapped a photo with Frank Ostojic, who runs Broadcom's custom silicon group, showing the new third-gen XPU design built for a large "consumer AI company".

Broadcom's mysterious third-gen XPU design features 12 stacks of HBM memory, which makes it bigger than NVIDIA's new Blackwell B200 AI GPU that the company unveiled at GTC 2024 last week.

Continue reading: Broadcom shows off absolutely gigantic AI chip, new XPU design for 'consumer AI company' (full post)

OpenAI is trying to sell its video-generation AI software to Hollywood

Jak Connor | Mar 25, 2024 6:15 AM CDT

OpenAI recently revealed its upcoming video-generation software, Sora, which is set to revolutionize how videos are made: users simply type out what kind of video they want created and then wait for it to be generated.

The unveiling of Sora was certainly mind-blowing, as the examples that were shown were almost indistinguishable from human-created videos. To be completely fair, the Sora-created videos did feature some tell-tale signs of AI-generated content, such as errors with physics, human hands, and people walking. But at a glance, or without knowing what those tell-tale signs are, the videos would pass completely undetected by viewers.

Now, Bloomberg has reported that OpenAI, the creator of Sora, is taking the new AI-powered tool to directors and film studios in Hollywood. The report doesn't state which film studios or directors have been shown the new technology, but an OpenAI spokesperson did confirm the company is trying to collaborate with the industry.

Continue reading: OpenAI is trying to sell its video-generation AI software to Hollywood (full post)

NVIDIA's full-spec Blackwell B200 AI GPU uses 1200W of power, up from 700W on Hopper H100

Anthony Garreffa | Mar 24, 2024 7:38 PM CDT

NVIDIA revealed its next-generation Blackwell B200 AI GPU at its recent GTC 2024 (GPU Technology Conference) event but left out some details that we're now discovering... like the new AI GPU consuming up to a whopping 1200W of power.

The new information on the Blackwell AI GPUs comes directly from NVIDIA SVP and GPU architect Jonah Alben, along with Ian Buck, VP of Hyperscale and HPC at NVIDIA. Alben pointed out that NVIDIA's new Blackwell GPU uses a completely different microarchitecture from Hopper, with Blackwell featuring 2nd Generation Transformer Engine technology that adds both FP4 and FP6 compute formats. Combined with new software optimizations NVIDIA has made, this unleashes Blackwell as the fastest AI chip on the planet.

Blackwell's B200 delivers a 32% increase in FP64 compute performance versus Hopper H100. But with Blackwell being an AI GPU first and foremost, FP64 compute performance isn't as important from an AI workload standpoint, where the lower the precision you go, the faster the AI inferencing and training become.

Continue reading: NVIDIA's full-spec Blackwell B200 AI GPU uses 1200W of power, up from 700W on Hopper H100 (full post)

Google confirms AI can predict the most common natural disaster 7 days before it happens

Jak Connor | Mar 22, 2024 1:35 AM CDT

Google has announced that its AI system is capable of predicting the most common natural disaster up to seven days before it happens.

The new research has been published in the scientific journal Nature and details a new machine-learning model trained on historical event data, river level readings, elevation and terrain readings, and any other relevant information necessary to arrive at a prediction. After training, the model was tested by running "hundreds of thousands" of simulations of flooding events in each location. The result of the training and simulations is that the model is now capable of predicting riverine floods up to seven days in advance, according to Google.

Google states that the use of this AI-powered model will help solve the riverine flooding problem on a global scale. Notably, the model was able to successfully predict a flood seven days in advance in some cases, but on average landed on five days. Furthermore, Google says the new technology extends the "reliability of currently-available global nowcasts" from zero to five days.

Continue reading: Google confirms AI can predict the most common natural disaster 7 days before it happens (full post)

Scientists busted publishing AI-generated papers in academic journals

Jak Connor | Mar 22, 2024 12:48 AM CDT

A new report from 404 Media has highlighted several instances of scientific journals publishing papers that were seemingly generated using artificial intelligence-powered tools such as ChatGPT.

The report states that AI-generated papers are being published in academic journals, raising questions about the impact of AI-powered tools on academia as a whole. The report cites Google Scholar, a journal database: searching it for phrases such as "As of my last knowledge update" and "I don't have access to real-time data," two phrases commonly used by AI chatbots in their responses, returns more than 100 studies.

It's unclear whether these papers were entirely generated by AI or whether AI was merely used to assist in their creation. However, 404 Media reports that at least one flagrantly AI-generated paper was submitted to a respected chemistry journal, Surfaces and Interfaces. The paper was published after peer review without even removing the AI chatbot's introduction.
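The kind of database search 404 Media describes can be reproduced in miniature. The sketch below is hypothetical; only the two tell-tale phrases come from the report, while the function and sample text are illustrative:

```python
# Minimal sketch: flag the tell-tale chatbot phrases quoted in the
# report above when they appear in a paper's text. Illustrative only.
TELLTALE_PHRASES = [
    "as of my last knowledge update",
    "i don't have access to real-time data",
]

def flag_ai_phrases(text: str) -> list[str]:
    """Return which tell-tale phrases appear in the given text."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

sample = "As of my last knowledge update, the surface properties were..."
print(flag_ai_phrases(sample))  # ['as of my last knowledge update']
```

A simple substring match like this is exactly why such papers are easy to find: the boilerplate survives verbatim when authors paste chatbot output without editing it.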

Continue reading: Scientists busted publishing AI-generated papers in academic journals (full post)

Micron's entire HBM supply sold out for 2024, and a majority of 2025 supply already allocated

Anthony Garreffa | Mar 21, 2024 7:02 PM CDT

Micron has announced that it has sold out of its HBM3E memory supply for 2024, and that most of its HBM3E memory has been allocated for 2025.

Micron's latest HBM3E memory is expected to be inside NVIDIA's beefed-up H200 AI GPU, with the US company competing against South Korean HBM rivals Samsung and SK hynix. Micron CEO Sanjay Mehrotra discussed HBM supply in a recent earnings call, which is where the new information comes from.

Sanjay Mehrotra, chief executive of Micron, said: "Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated. We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025. We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins starting in the fiscal third quarter".

Continue reading: Micron's entire HBM supply sold out for 2024, and a majority of 2025 supply already allocated (full post)

Samsung AGI Computing Labs in US and South Korea to build completely new semiconductor for AGI

Anthony Garreffa | Mar 20, 2024 9:33 PM CDT

Samsung has just announced it has started development of next-generation artificial general intelligence (AGI) dedicated semiconductors through its AGI Computing Labs in the USA and South Korea.

Kyung Kye-hyun, president of Samsung Electronics' Device Solutions Division, said on his own social media on March 19: "I am pleased to announce the establishment of Samsung Semiconductor's AGI Computing Labs in both the United States and Korea".

Dr. Woo Dong-gyuk, a former developer of Google's Tensor Processing Unit (TPU) and one of the three engineers who designed the TPU platform for the search giant, is now running the AGI Computing Lab and recruiting more staff to help with Samsung Semiconductor's journey into completely new semiconductor technology for the future of AGI. He said: "We will create a completely new type of semiconductor specifically designed to meet the astonishing processing requirements of future AGI".

Continue reading: Samsung AGI Computing Labs in US and South Korea to build completely new semiconductor for AGI (full post)

NVIDIA is qualifying Samsung's new HBM3E chips, will use them for future B200 AI GPUs

Anthony Garreffa | Mar 20, 2024 9:11 PM CDT

NVIDIA CEO Jensen Huang told the press during a media briefing at GTC 2024 that "HBM memory is very complicated and the value added is very high. We are spending a lot of money on HBM". Jensen added: "Samsung is very good, a very good company".

SK hynix supplies most of NVIDIA's advanced HBM3 and HBM3E memory needs for its growing arsenal of AI GPUs, with the Hopper H100, H200, and new Blackwell B100 and B200 AI GPUs all using HBM memory. Jensen continued: "The upgrade cycle for Samsung and SK Hynix is incredible. As soon as NVIDIA starts growing, they grow with us. I value our partnership with SK Hynix and Samsung very incredibly".

The news, straight from NVIDIA's CEO, that the company will use Samsung-supplied HBM memory saw the South Korean company's shares jump by 5.6% on Wednesday.

Continue reading: NVIDIA is qualifying Samsung's new HBM3E chips, will use them for future B200 AI GPUs (full post)

Meta orders NVIDIA's next-gen Blackwell B200 AI GPUs, shipments expected later this year

Anthony Garreffa | Mar 20, 2024 8:37 PM CDT

Meta has purchased NVIDIA's new Blackwell B200 AI GPUs to train its Llama models, according to Meta CEO Mark Zuckerberg. The company is also training the third generation of its Llama model on two GPU clusters it announced last week, each packing around 24,000 of NVIDIA's current-gen Hopper H100 AI GPUs.

The news comes from a new report by Reuters, which said that Meta will continue using its current H100-powered AI GPU clusters to train its current-gen Llama 3 model, but will use NVIDIA's new Blackwell B200 AI GPUs to train future generations of the model, according to a Meta spokesperson.

NVIDIA announced its new Blackwell B200 AI GPU at its GPU Technology Conference (GTC) event this week, offering gigantic improvements to all things AI.

Continue reading: Meta orders NVIDIA's next-gen Blackwell B200 AI GPUs, shipments expected later this year (full post)
