Artificial Intelligence - Page 54
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
SK hynix unveils new tech for 'dream memory chip' to store data, perform calculations for AI
SK hynix unveiled a new technology that will be used to create a "dream memory chip" that is capable of storing data and performing calculations for AI.
The South Korean memory giant unveiled its new technology during the International Memory Workshop (IMW 2024) held from May 12-15 at the Walkerhill Hotel in the Gwangjin district of Seoul, South Korea. The new technology enhances the accuracy of Multiply Accumulate (MAC) operations in Analog Computing in Memory (A-CIM) semiconductors using oxygen diffusion barrier technology.
MAC operations are critical for the high-speed multiplication and accumulation processes required in artificial intelligence (AI) inference and learning. SK hynix's recent development is a significant step for the company in the competitive field of creating a "dream memory semiconductor" that can both store information and perform calculations, surpassing the traditional limitations of memory-only semiconductors.
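For readers unfamiliar with the term, a MAC operation is just a multiply followed by a running sum, the building block of the dot products used throughout AI inference. Here's a minimal software sketch, in plain Python rather than anything resembling SK hynix's analog in-memory implementation, with arbitrary example values:

```python
# Minimal software illustration of a Multiply Accumulate (MAC) operation,
# the core primitive that analog compute-in-memory chips perform inside
# the memory array itself. Weights and inputs are arbitrary examples.

def mac(weights, inputs):
    """Return the multiply-accumulate (dot product) of two equal-length lists."""
    acc = 0
    for w, x in zip(weights, inputs):
        acc += w * x  # one multiply, one accumulate per element
    return acc

weights = [0.5, -1.0, 2.0]
inputs = [4.0, 3.0, 1.5]
print(mac(weights, inputs))  # 0.5*4 + (-1)*3 + 2*1.5 = 2.0
```

An A-CIM chip performs this same sum inside the memory array itself rather than shuttling operands to a separate processor, which is where the accuracy challenge the oxygen diffusion barrier addresses comes in.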
OpenAI unveils new AI model that's a step towards natural human-computer interaction
OpenAI has unveiled a new AI model that is designed to analyze audio, visual, and text input, and provide answers based on what it "sees" and "hears".
The company behind the immensely popular AI tool ChatGPT announced its latest flagship model called GPT-4o (omni), which OpenAI describes as being a step towards a "much more natural human-computer interaction". The new AI model is expected to match the performance of GPT-4 Turbo at processing text and code input, while simultaneously being faster and 50% cheaper with its API, making it a more affordable choice for third-party app integration.
More specifically, users will be able to submit a query by voice about what the AI agent is able to "see" on the device's screen or through its camera. OpenAI demonstrated this with two people who verbally asked the AI, "what game can we play?" The AI used the smartphone camera to "see" the two people sitting in front of it and suggested playing rock, paper, scissors. The quick demonstration showed the AI model fluently interacting with the individuals while remaining extremely responsive to interruptions and new commands.
NVIDIA's new GB200 Superchip costs up to $70,000: full B200 NVL72 AI server costs $3 million
NVIDIA's new Blackwell GPU architecture is going to make the company a lot of money, and while we know the B200 AI GPUs will cost $30,000 to $40,000 each -- CEO Jensen Huang said as much just after the GPUs were announced -- the GB200 Superchip (CPU + GPU) will cost upwards of $70,000.
According to a post on X by a senior writer at Barron's, the new NVIDIA GB200 Superchip will cost between $60,000 and $70,000. We already know that an NVIDIA DGX NVL72 AI server cabinet will cost $3 million per unit, with each cabinet filled with 72 x B200 GPUs and 36 x Grace CPUs.
The new NVIDIA DGX NVL72 is the AI server with the most computing power, and thus, the highest unit price. Inside, the DGX NVL72 features 72 built-in Blackwell-based B200 AI GPUs and 36 Grace CPUs (18 servers in total, each with dual Grace CPUs and four B200 AI GPUs) plus 9 switches. The entire cabinet is designed by NVIDIA in-house and cannot be modified; it is 100% made, tested, and provided by NVIDIA.
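The cabinet totals imply a simple per-server breakdown; a quick arithmetic sanity check, assuming the stated 72 GPUs and 36 CPUs are spread evenly across the 18 servers:

```python
# Sanity check of the DGX NVL72 cabinet arithmetic from the article:
# 18 servers, each with dual Grace CPUs and four B200 GPUs.
servers = 18
gpus_per_server = 4
cpus_per_server = 2

total_gpus = servers * gpus_per_server   # 72 B200 AI GPUs per cabinet
total_cpus = servers * cpus_per_server   # 36 Grace CPUs per cabinet
print(total_gpus, total_cpus)  # 72 36
```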
Samsung reportedly FAILS to pass HBM3E memory qualification tests by NVIDIA for its AI GPUs
Samsung has reportedly failed to pass specific stages of HBM3E memory verification standards from NVIDIA, which is surely going to cause a headache for the South Korean memory giant.
SK hynix has been pumping out HBM3 and now HBM3E while still preparing not just the next-generation HBM4 memory, but even HBM4E memory... while South Korean HBM rival, Samsung, can't get its act together with HBM3 memory for NVIDIA according to Korean news outlet AlphaBiz.
Samsung has reportedly failed qualification tests for its HBM3 8-layer memory. That's a serious situation to be in, considering how bleeding-edge HBM is and how Samsung has been acting in an emergency-style manner to get its HBM business flourishing... and now this gigantic roadblock arrives just as NVIDIA prepares its beefed-up Hopper H200 AI GPU and next-generation Blackwell B100, B200, and GB200 AI GPUs, which all use HBM3E memory.
Arm plans to develop an AI chip division, will have AI chips released in 2025
Arm is developing its own artificial intelligence (AI) chips, with the first AI chips made by the company expected to launch in 2025.
The UK-based company will spool up an AI chip division that will deliver a prototype AI chip by spring 2025 according to a report from Reuters. The mass production of Arm's new AI chip will be handled by contract manufacturers -- TSMC -- and is expected to start in autumn 2025.
Arm Holdings is a SoftBank Group subsidiary -- SoftBank owns a 90% share in Arm -- with SoftBank CEO Masayoshi Son preparing a huge $64 billion strategy to transform SoftBank into a powerhouse AI company. Negotiations are reportedly already happening with TSMC and others to secure production capacity.
SK hynix says its ultra-next-gen HBM4E arrives in 2026, ready for the world of next-gen AI GPUs
SK hynix has announced it plans to complete the development of its next-gen HBM4E memory by as early as 2026, preparing for the next-gen AI GPUs of the future.
SK hynix's head of the HBM advanced technology team, Kim Gwi-wook, announced the news this week of the direction of next-generation HBM development at the International Memory Workshop (IMW 2024). HBM was developed by SK hynix in 2014, with HBM2 (2nd generation) in 2018, HBM2E (3rd generation) in 2020, HBM3 (4th generation) in 2022, and HBM3E (5th generation) was introduced this year.
There's been a roughly two-year cadence between HBM generations, but with HBM3E unleashed this year, HBM4 (6th generation) should drop in 2025, and HBM4E (7th generation) in 2026. That's faster than the usual two-year gap, because SK hynix predicts that HBM performance advancements will come faster than in previous generations.
This portable AI supercomputer fits in a carry-on suitcase: 4 x GPUs, 246TB storage, 2500W PSU
GigaIO and SourceCode have just unveiled Gryf, an ultra-portable AI supercomputer-class system that weighs less than 55 pounds, and fits inside of a TSA-friendly carry-on suitcase. Impressive.
Gryf can handle data collection and processing on a scale that would usually see the data sent off-site, meaning super-fast processing and analysis can happen locally, all in a suitcase. Gryf supports disaggregating and reaggregating its GPUs, with owners able to customize the system's hardware configuration in the field, on-the-fly.
You can create the absolute optimal hardware configuration for one assigned workload, and then give the next workload another optimized hardware configuration. Each Gryf has multiple slots filled with compute, storage, accelerator, and network sleds suited to their respective workloads. There are six sled slots in total, where you can insert and remove the modules as required.
NVIDIA Blackwell GPU compute stats: 30% more FP64 than Hopper, 200x cheaper simulation costs
NVIDIA has published a new blog post providing some more details about the next level of performance offered by its new Blackwell GPU architecture.
The new blog post by NVIDIA shows the gigantic performance leap that Blackwell will deliver for the research industry including quantum computing, drug discovery, fusion energy, physics-based simulations, weather simulations, scientific computing, and more.
NVIDIA has another major goal with Blackwell -- other than industry-leading AI performance -- in that Blackwell can simulate weather patterns at 200x lower cost and 300x less energy than Hopper, and can run globe-spanning digital twin simulations at 65x lower cost and 58x less energy. Absolutely astonishing numbers from Blackwell.
Commodore 64 PC runs AI to generate images: 20 minutes per 90 iterations for 64 pixels
I still remember using and playing games on the Commodore 64, but I never thought I'd see the day when the old-school PC was running generative AI to generate creative retro sprites. Check it out:
Nick Bild is a developer and hobbyist who documented his journey of building a generative AI tool for the Commodore 64, which can be used to create 8 x 8 sprites displayed at 64 x 64 resolution. The idea behind this is to use AI to help inspire game design concepts, but we're talking about the Commodore 64 here, so we're not going to get some AI-powered Crysis on the C64.
Training the generative AI model was done on a traditional PC, so while the AI model itself runs on the Commodore 64, you'll need a modern PC to get it up and running. It will take 20 minutes or so to run just 90 iterations for the final 64 x 64 image, so it's not going to blow NVIDIA's current-gen Hopper H100 AI GPU out of the water, or put AI companies out of business. Impressive for the Commodore 64, nonetheless.
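As an illustration of the pixel math, one simple way to display an 8 x 8 sprite at 64 x 64 is nearest-neighbor upscaling, repeating each pixel eight times in both directions. This is a hypothetical sketch of that mapping, not Bild's actual code:

```python
# Hypothetical sketch: displaying an 8 x 8 sprite at 64 x 64 by repeating
# each pixel 8 times in both directions (nearest-neighbor upscaling).
# Illustration only -- not the actual C64 generative AI tool.

SCALE = 64 // 8  # each sprite pixel becomes an 8 x 8 block on screen

def upscale(sprite):
    """Expand an 8x8 grid of 0/1 pixels into a 64x64 grid."""
    return [
        [sprite[row // SCALE][col // SCALE] for col in range(64)]
        for row in range(64)
    ]

# A tiny example sprite: a diagonal line.
sprite = [[1 if r == c else 0 for c in range(8)] for r in range(8)]
big = upscale(sprite)
print(len(big), len(big[0]))  # 64 64
```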
OpenAI is planning to launch a search engine next week, will use AI to compete with Google
According to a new report at Reuters, OpenAI (the massive AI firm behind ChatGPT, backed by billions from Microsoft) is planning to launch an AI-powered search engine on Monday. The engine will compete with Google and Perplexity, a competing AI search startup founded by a former OpenAI researcher.
Going up against the search giant that is Google with the aid of AI is not uncommon. Microsoft's long-running Bing search recently added OpenAI ChatGPT integration - but only for paid customers. AI and search engines are set to go hand-in-hand, as Google is also integrating generative AI into search and other products like Gmail.
The Reuters report cites 'two sources familiar with the matter,' so nothing is official. However, OpenAI's stealth launch of a new search engine (possibly in beta or limited form) next week would mark an exciting turn for the company. Bloomberg has reported on the company's search engine plans in the past, so it sounds like it's only a matter of time.