Artificial Intelligence News - Page 1

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, impressive AI demos & plenty more.

OpenAI unveils new AI model that's a step towards natural human-computer interaction

Jak Connor | May 15, 2024 2:40 AM CDT

OpenAI has unveiled a new AI model designed to analyze audio, visual, and text input, providing answers based on what it "sees" and "hears".

The company behind the immensely popular AI tool ChatGPT announced its latest flagship model, GPT-4o (omni), which OpenAI describes as a step towards a "much more natural human-computer interaction". The new AI model is expected to match the performance of GPT-4 Turbo at processing text and code input, while being faster and 50% cheaper via its API, making it a more affordable choice for third-party app integration.

More specifically, users will be able to submit a voice query about what the AI agent is able to "see" on the device's screen. As an example, OpenAI demonstrated this with two people who verbally asked the AI, "what game can we play?" The AI used the smartphone's camera to "see" the two people sitting in front of it and suggested playing rock, paper, scissors. The quick demonstration showed the AI model fluently interacting with the individuals while remaining extremely responsive to interruptions and new commands.

Continue reading: OpenAI unveils new AI model that's a step towards natural human-computer interaction (full post)

NVIDIA's new GB200 Superchip costs up to $70,000: full B200 NVL72 AI server costs $3 million

Anthony Garreffa | May 14, 2024 11:00 PM CDT

NVIDIA's new Blackwell GPU architecture is going to make the company a lot of money. While we know the B200 AI GPUs will cost $30,000 to $40,000 each -- CEO Jensen Huang said as much just after the GPUs were announced -- the GB200 Superchip (CPU + GPU) will cost upwards of $70,000.

According to a post on X by a senior writer at Barron's, the new NVIDIA GB200 Superchip will cost between $60,000 and $70,000. We already know that an NVIDIA DGX NVL72 AI server cabinet will cost $3 million per unit, filled with 72 x B200 GPUs and 36 x Grace CPUs.

The new NVIDIA DGX NVL72 is the AI server with the most computing power, and thus the highest unit price. Inside, the DGX NVL72 features 72 built-in Blackwell-based B200 AI GPUs and 36 Grace CPUs (18 servers in total, each with dual Grace CPUs and four B200 AI GPUs) along with 9 switches. The entire cabinet is designed by NVIDIA in-house and cannot be modified; it is 100% made, tested, and provided by NVIDIA.
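
As a sanity check on the cabinet's composition and pricing, the figures quoted above can be tallied with a few lines of arithmetic (a rough sketch: the $3 million cabinet price and the $30,000-$40,000 per-GPU range come from the article; everything else is simple multiplication):

```python
# Rough arithmetic on the NVL72 cabinet figures quoted above.
TRAYS = 18          # servers per cabinet
CPUS_PER_TRAY = 2   # dual Grace CPUs per server
GPUS_PER_TRAY = 4   # B200 AI GPUs per server

total_cpus = TRAYS * CPUS_PER_TRAY   # 36 Grace CPUs
total_gpus = TRAYS * GPUS_PER_TRAY   # 72 B200 GPUs

# At $30k-$40k each, the GPUs alone account for most of the $3M cabinet price.
gpu_cost_low = total_gpus * 30_000
gpu_cost_high = total_gpus * 40_000

print(total_cpus, total_gpus)        # 36 72
print(gpu_cost_low, gpu_cost_high)   # 2160000 2880000
```

In other words, the 72 GPUs alone plausibly run $2.16-2.88 million, which is consistent with the $3 million figure for the fully assembled cabinet.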

Continue reading: NVIDIA's new GB200 Superchip costs up to $70,000: full B200 NVL72 AI server costs $3 million (full post)

Samsung reportedly FAILS to pass HBM3E memory qualification tests by NVIDIA for its AI GPUs

Anthony Garreffa | May 14, 2024 6:57 PM CDT

Samsung has reportedly failed to pass specific stages of HBM3E memory verification standards from NVIDIA, which is surely going to cause a headache for the South Korean memory giant.

SK hynix has been pumping out HBM3 and now HBM3E while preparing not just next-generation HBM4 memory, but even HBM4E memory. Meanwhile, South Korean HBM rival Samsung can't get its act together with HBM3E memory for NVIDIA, according to Korean news outlet AlphaBiz.

Samsung has reportedly failed qualification tests for its HBM3E 8-layer memory, a serious situation to be in considering how bleeding-edge HBM is, and how Samsung has been operating in emergency mode to get its HBM business flourishing. Now comes this gigantic roadblock, just as NVIDIA prepares its beefed-up Hopper H200 AI GPU and next-generation Blackwell B100, B200, and GB200 AI GPUs, all of which use HBM3E memory.

Continue reading: Samsung reportedly FAILS to pass HBM3E memory qualification tests by NVIDIA for its AI GPUs (full post)

Arm plans to develop an AI chip division, will have AI chips released in 2025

Anthony Garreffa | May 14, 2024 6:11 PM CDT

Arm is developing its own artificial intelligence (AI) chips, with the first AI chips made by the company expected to launch in 2025.

The UK-based company will spool up an AI chip division that will deliver a prototype AI chip by spring 2025, according to a report from Reuters. Mass production of Arm's new AI chip will be handled by contract manufacturers -- TSMC among them -- and is expected to start in autumn 2025.

Arm Holdings is a SoftBank Group subsidiary -- SoftBank owns a 90% share in Arm -- with SoftBank CEO Masayoshi Son preparing a huge $64 billion strategy to transform SoftBank into a powerhouse AI company. Negotiations are reportedly already happening with TSMC and others to secure production capacity.

Continue reading: Arm plans to develop an AI chip division, will have AI chips released in 2025 (full post)

SK hynix says its ultra-next-gen HBM4E in 2026, ready for the world of next-gen AI GPUs

Anthony Garreffa | May 13, 2024 8:16 PM CDT

SK hynix has announced it plans to complete the development of its next-gen HBM4E memory by as early as 2026, preparing for the next-gen AI GPUs of the future.

SK hynix's head of the HBM advanced technology team, Kim Gwi-wook, announced the direction of next-generation HBM development this week at the International Memory Workshop (IMW 2024). HBM was developed by SK hynix in 2014, followed by HBM2 (2nd generation) in 2018, HBM2E (3rd generation) in 2020, HBM3 (4th generation) in 2022, and HBM3E (5th generation), which was introduced this year.

There's typically a two-year cadence between HBM generations, so with HBM3E unleashed this year, HBM4 (6th generation) should drop in 2025 and HBM4E (7th generation) in 2026. That's faster than the usual two years, because SK hynix predicts HBM performance advancements will come faster than in previous generations.
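
Laying the timeline out explicitly makes the accelerating cadence easy to see (years as stated in the article; the gap calculation is just illustrative arithmetic):

```python
# HBM generation timeline as described by SK hynix at IMW 2024.
hbm_timeline = {
    "HBM":   2014,  # 1st generation
    "HBM2":  2018,  # 2nd generation
    "HBM2E": 2020,  # 3rd generation
    "HBM3":  2022,  # 4th generation
    "HBM3E": 2024,  # 5th generation
    "HBM4":  2025,  # 6th generation (projected)
    "HBM4E": 2026,  # 7th generation (projected)
}

years = list(hbm_timeline.values())
gaps = [b - a for a, b in zip(years, years[1:])]
print(gaps)  # [4, 2, 2, 2, 1, 1]
```

The steady two-year rhythm from HBM2 through HBM3E gives way to projected one-year gaps for HBM4 and HBM4E.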

Continue reading: SK hynix says its ultra-next-gen HBM4E in 2026, ready for the world of next-gen AI GPUs (full post)

This portable AI supercomputer in a carry-on suitcase: 4 x GPUs, 246TB storage, 2500W PSU

Anthony Garreffa | May 13, 2024 7:29 PM CDT

GigaIO and SourceCode have just unveiled Gryf, an ultra-portable AI supercomputer-class system that weighs less than 55 pounds and fits inside a TSA-friendly carry-on suitcase. Impressive.

Gryf can handle data collection and processing on a scale that would usually require sending the data off-site, meaning the suitcase-sized supercomputer handles super-fast processing and analysis entirely on location. Gryf supports disaggregating and reaggregating its GPUs, so owners can customize the system's hardware configuration in the field, on the fly.

You can create the optimal hardware configuration for one assigned workload, then give the next workload another optimized configuration. Each Gryf has multiple slots filled with compute, storage, accelerator, and network sleds suited to their respective workloads. There are six sled slots in total, and you can insert and remove the modules as required.

Continue reading: This portable AI supercomputer in a carry-on suitcase: 4 x GPUs, 246TB storage, 2500W PSU (full post)

NVIDIA Blackwell GPU compute stats: 30% more FP64 than Hopper, 200x cheaper simulation costs

Anthony Garreffa | May 13, 2024 6:57 PM CDT

NVIDIA has published a new blog post providing some more details about the next level of performance offered by its new Blackwell GPU architecture.

The new blog post by NVIDIA shows the gigantic performance leap that Blackwell will deliver for the research industry, including quantum computing, drug discovery, fusion energy, physics-based simulations, weather simulations, scientific computing, and more.

NVIDIA has another major goal with Blackwell -- beyond industry-leading AI performance -- in that Blackwell can simulate weather patterns at 200x lower cost and with 300x less energy than Hopper, while running digital twin simulations encompassing the entire globe at 65x less cost and with 58x less energy. Absolutely astonishing numbers from Blackwell.

Continue reading: NVIDIA Blackwell GPU compute stats: 30% more FP64 than Hopper, 200x cheaper simulation costs (full post)

Commodore 64 PC runs AI to generate images: 20 minutes per 90 iterations for 64 pixels

Anthony Garreffa | May 10, 2024 7:31 PM CDT

I still remember using and playing games on the Commodore 64, but I never thought I'd see the day when the old-school PC would be running generative AI to create retro sprites.

Nick Bild is a developer and hobbyist who documented his journey of building a generative AI tool for the Commodore 64 that can be used to create 8 x 8 sprites, displayed at 64 x 64 resolution. The idea is to use AI to help inspire game design concepts, but we're talking about the Commodore 64 here, so we're not going to get some AI-powered Crysis on the C64.

Training the generative AI model was done on a traditional PC, so while the AI model itself runs on the Commodore 64, you'll need a modern PC to get it up and running. It takes 20 minutes or so to run just 90 iterations for the final 64 x 64 image, so it's not going to blow NVIDIA's current-gen Hopper H100 AI GPU out of the water or put AI companies out of business. Impressive for the Commodore 64, nonetheless.
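
For a sense of scale, the quoted runtime works out to roughly 13 seconds per iteration on the C64 (simple arithmetic from the article's 20-minute / 90-iteration figures):

```python
# Per-iteration time from the figures quoted in the article.
total_seconds = 20 * 60  # ~20 minutes for a full run
iterations = 90          # iterations per final 64 x 64 image

seconds_per_iteration = total_seconds / iterations
print(round(seconds_per_iteration, 1))  # ~13.3 seconds each
```
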

Continue reading: Commodore 64 PC runs AI to generate images: 20 minutes per 90 iterations for 64 pixels (full post)

OpenAI is planning to launch a search engine next week, will use AI to compete with Google

Kosta Andreadis | May 9, 2024 10:02 PM CDT

According to a new report at Reuters, OpenAI (the massive AI firm behind ChatGPT, backed by billions from Microsoft) is planning to launch an AI-powered search engine on Monday. The engine will compete with Google and Perplexity, a competing AI search startup founded by a former OpenAI researcher.

Taking on search giant Google with the aid of AI is not uncommon. Microsoft's long-running Bing search recently added OpenAI ChatGPT integration - but only for paid customers. AI and search engines are set to go hand-in-hand, as Google is also integrating generative AI into search and other products like Gmail.

The Reuters report cites 'two sources familiar with the matter,' so nothing is official. However, OpenAI's stealth launch of a new search engine (possibly in beta or limited form) next week would mark an exciting turn for the company. Bloomberg has reported on the company's search engine plans in the past, so it sounds like it's only a matter of time.

Continue reading: OpenAI is planning to launch a search engine next week, will use AI to compete with Google (full post)

Microsoft gifts first-of-its-kind AI model to US intelligence agencies

Jak Connor | May 9, 2024 3:32 AM CDT

A new report from Bloomberg reveals Microsoft has created a new generative AI model that is designed specifically for US intelligence agencies.

The report states the main difference between this new AI model and those that power popular AI tools, such as ChatGPT, is that it's completely divorced from the internet, making it the first of its kind. Known AI models such as ChatGPT, DALL-E, and Microsoft's Copilot rely on cloud services to process prompts, train on data, and reach conclusions. However, the AI model now handed over to US intelligence agencies doesn't require any cloud services, meaning it has no internet access at all and is therefore secure.

Why do US intelligence agencies want an advanced AI model? According to the report, the security of the AI model means top-secret information can now be input and analyzed, helping intelligence agencies understand and filter through large swaths of classified information.

Continue reading: Microsoft gifts first-of-its-kind AI model to US intelligence agencies (full post)