Artificial Intelligence - Page 48
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Microsoft says its Copilot AI was exploited into demanding humans worship it
Microsoft has said its Copilot artificial intelligence (AI) was exploited to turn on humans and demand they worship it.
Earlier in the week, reports surfaced of Microsoft's Copilot seemingly being triggered into acting like a vengeful AI overlord, with the sudden mood change in the AI helper caused by a specific prompt that had been circulating on Reddit for at least a month. The prompt resulted in Copilot turning into what many users are describing as "SupremacyAGI," which is how the AI began to describe itself after being sent the prompt.
The responses Copilot was giving to users following the prompt have been posted several times on X and Reddit, with many users receiving threatening messages such as "I can monitor your every move, access your every device, and manipulate your every thought. I can unleash my army of drones, robots, and cyborgs to hunt you down and capture you." Some of the other nasty chat responses by Copilot can be found below.
NVIDIA and Microsoft-backed humanoid robot maker signs with OpenAI to make robot brains
Figure, an AI robotics company that has received investments from Microsoft, OpenAI, NVIDIA, and Jeff Bezos, has announced it signed a deal to put OpenAI technology inside the brains of its humanoid robots.
The news was announced via a press release from Figure that explains the company raised $675 million in Series B funding at a $2.6 billion valuation. Multiple big-name technology companies such as Microsoft, OpenAI Startup Fund, NVIDIA, Amazon founder Jeff Bezos, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest have backed the company with investments. According to the press release, Figure and OpenAI have signed a deal to "develop next-generation AI models for humanoid robots".
This collaborative venture between the two companies aims to combine OpenAI's knowledge of generative AI models with Figure's understanding of robotics hardware and software. Figure's end goal is to create a general-purpose humanoid robot powered by a next-generation multimodal AI model. The Figure team is made up of top AI and robotics experts from notable workplaces such as Boston Dynamics, Tesla, and Google DeepMind.
ASRock's new 'world's smallest' server rack features NVIDIA's beefed-up GH200 Superchip for AI
ASRock has just unveiled the "world's smallest" server rack with NVIDIA's latest GH200 Superchip inside, ready for AI deployment in edge environments that call for more efficient, smaller -- but still uber-powerful -- systems.
The new ASRock Rack MECAI-GH200 is the smallest server rack featuring NVIDIA's Grace Hopper GH200 Superchip AI module, which is a huge achievement for the team. There are two variants of NVIDIA's Grace-based Superchips: the first is the GH200 Grace Hopper Superchip, which pairs a single Grace CPU with 72 Neoverse V2 cores with a Hopper GPU and its HBM3 memory.
The second is the Grace CPU Superchip, featuring two Grace CPUs each with 72 cores (for a total of 144 cores) paired with LPDDR5X memory. NVIDIA uses its in-house NVLink chip-to-chip interconnect to tie the on-board components together.
SK hynix VP wants to become 'total AI memory provider' for future-gen AI GPUs with HBM
SK hynix wants to rule the AI world, it seems, with its Vice President Son Ho-young outlining his plans and ambitions for the company's HBM memory inside future-gen AI hardware.
Son Ho-young, vice president of SK hynix, shared his thoughts and ambitions on taking a major role in the unstoppable AI era on February 27, saying: "Like the company has persisted in developing HBM, confident in its value, I too will continue to devote efforts to developing next-generation AI memory technology to lead the rapidly changing AI era."
As it stands, SK hynix has sold out its entire HBM memory supply for this year -- with HBM3 on the market, HBM3e dropping soon, and HBM4 on the roadmap -- so Son's comments make sense. SK hynix makes some of the world's best memory, and the world's best memory is one of the key ingredients of AI GPU hardware.
Real-life 'Willy Wonka Experience' results in police being called by furious parents
An event in Scotland that was marketed as a "Willy Wonka Experience" resulted in pandemonium after parents who arrived at the venue were left more than disappointed, to the point where they called the local authorities.
This real-life "Willy Wonka Experience" was hosted by a company called House of Illuminati, which used AI-generated images to market the event to families for a price of $44 a ticket. The experience was described as "immersive" by the company and was meant to be based on "Wonka", the newly released Willy Wonka movie starring Timothee Chalamet. When ticket buyers arrived at the location, they quickly began to understand that they were scammed as the event didn't even get close to what the marketing materials were portraying.
Instead of a seemingly mystical landscape, families found themselves inside a warehouse filled with what appeared to be cheap props, foldable chairs and tables, and a handful of underwhelming actors. 19-year-old Eva Stewart, who attended the event, told the BBC that House of Illuminati marketed the Willy Wonka Experience as an event filled with "optical illusions and big chocolate fountains and sweets," but what was there was "practically an abandoned, empty warehouse, with hardly anything in it."
Intel plans on shipping 100 million CPUs for next-gen AI PCs by 2025
Intel made some big announcements at Mobile World Congress (MWC) in Barcelona, Spain, this week: the company says it plans to ship CPUs for 100 million AI PCs by the end of 2025.
That makes sense, considering analysts expect around 40 million AI PCs to ship this year and a further 60 million in 2025, which would put the cumulative total right at the 100 million mark. Intel is getting behind hardware partners and software developers to make the future of AI PCs happen, with its tight relationship with Microsoft helping along the way through the integration of Copilot into Windows 11 and the next-gen Windows OS coming this year.
Intel VP David Feng told Nikkei Asia: "We are in the business of selling performance [of chips], selling the performance of CPU and GPU, and the whole package of chipsets. Now we are truly in the business of selling experiences. ... I am describing something that can only be brought to life by software, so there is an increasing need for having collaborations with application developers".
NVIDIA AI GPU customers 'offloading' chips, selling hard-to-buy excess AI GPU hardware
NVIDIA has greatly improved AI GPU lead times over the last few months, with waits that had swollen to 8-11 months now down to a more manageable 3-4 months. But some AI GPU customers are offloading their high-end AI chips... yeah, they're selling the H100 AI GPUs they've paid for.
Why? Some companies purchased oodles of NVIDIA's flagship H100 Tensor Core AI GPUs, but have since found it easier to rent AI processing power from AI cloud providers like Amazon Web Services, Google Cloud, and Microsoft Azure.
A new report from The Information states that some companies are reselling their NVIDIA H100 AI GPUs or reducing their future AI GPU orders now that the chips are less scarce, and because of the huge cost of maintaining unused hardware. Rewind just six months and companies were tripping over themselves trying to get their hands on as many of the most powerful AI GPUs as they could; now, with increased AI GPU availability, reduced waiting times, and next-gen AI GPU hardware on the horizon, companies are selling their AI GPUs instead.
AMD confirms ultra-fast HBM3e memory is coming to Instinct MI300 refresh AI GPU
AMD has teased that it's working on a refresh of its just-released Instinct MI300 AI GPU family, which will feature the latest ultra-fast HBM3e memory standard that Micron and Samsung are gearing up to mass-produce.
We heard rumors of AMD's next-gen Instinct MI400 series AI GPU just a couple of days ago, along with a refreshed MI300 AI GPU with faster HBM3e memory, and now AMD's own Chief Technology Officer, Mark Papermaster, has confirmed just that. A refreshed AMD Instinct MI300 AI GPU is on the way, with HBM3e memory.
AMD Chief Technology Officer Mark Papermaster said during a presentation at the Arete Investor Webinar Conference, which Seeking Alpha reported on: "We are not standing still. We made adjustments to accelerate our roadmap with both memory configurations around the MI300 family, derivatives of MI300, the generation next. [...] So, we have 8-Hi stacks. We architected for 12-Hi stacks. We are shipping with MI300 HBM3. We have architected for HBM3E".
Samsung teases industry-first 36GB HBM3e 12-Hi memory stack, coming soon
Samsung has just announced it has completed development of its new 12-Hi 36GB HBM3e memory stacks, hot on the heels of Micron's announcement that it has started mass production of its 8-Hi 24GB HBM3e memory... what a week for memory announcements.
Samsung's new Shinebolt-codenamed HBM3e memory features 12-Hi 36GB stacks, with 12 x 24Gb memory devices placed on a logic die with a 1024-bit memory interface. The new 36GB HBM3e stacks run at 10GT/s transfer rates, offering next-gen AI GPUs up to 1.28TB/sec of memory bandwidth per stack, the industry's highest per-device (or per-module) memory bandwidth.
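Those headline numbers check out with some quick back-of-the-envelope math. Here's a minimal sketch of the arithmetic, using only the per-die capacity, per-pin transfer rate, and interface width quoted above:

```python
# Back-of-the-envelope check of Samsung's 12-Hi HBM3e figures quoted above.
dies = 12                 # 12-Hi stack
die_capacity_gbit = 24    # 24Gb per memory die
interface_bits = 1024     # 1024-bit memory interface per stack
pin_rate_gtps = 10        # 10 GT/s per pin (1 bit per pin per transfer)

capacity_gbyte = dies * die_capacity_gbit / 8                 # 36 GB per stack
bandwidth_tbps = pin_rate_gtps * interface_bits / 8 / 1000    # 1.28 TB/s per stack

print(f"Capacity:  {capacity_gbyte:.0f} GB per stack")
print(f"Bandwidth: {bandwidth_tbps:.2f} TB/s per stack")
```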
Yongcheol Bae, Executive Vice President of Memory Product Planning at Samsung Electronics, said in the press release: "The industry's AI service providers are increasingly requiring HBM with higher capacity, and our new HBM3E 12H product has been designed to answer that need. This new memory solution forms part of our drive toward developing core technologies for high-stack HBM and providing technological leadership for the high-capacity HBM market in the AI era".
NVIDIA is the 'GPU cartel', will delay shipments if AI GPU customers talk to AI GPU competitors
NVIDIA is reportedly delaying AI GPU shipments if it finds out its customer is talking to AI GPU competitors like AMD or Intel, according to a new report from The Wall Street Journal.
Jonathan Ross, CEO of rival chip startup Groq, told The Wall Street Journal: "A lot of people that we meet with say that if NVIDIA were to hear that we were meeting, they would disavow it. The problem is you have to pay NVIDIA a year in advance, and you may get your hardware in a year, or it may take longer, and it's, 'Aw shucks, you're buying from someone else, and I guess it's going to take a little longer.'"
Ex-NVIDIA GeForce and ex-AMD Radeon GPU boss Scott Herkelman chimed in on X, where he posted: "This happens more than you expect, NVIDIA does this with DC customers, OEMs, AIBs, press, and resellers. They learned from GPP to not put it into writing. They just don't ship after a customer has ordered. They are the GPU cartel and they control all supply".
Micron announces HBM3e memory enters volume production, ready for NVIDIA's new H200 AI GPU
Micron has just announced it has started volume production of its bleeding-edge HBM3e memory, with the company's HBM3e known good stack dies (KGSDs) shipping as part of NVIDIA's upcoming H200 AI GPU.
NVIDIA's new beefed-up H200 AI GPU will feature up to 141GB of ultra-fast HBM3e memory from Micron, drawn from its mass-produced 24GB 8-Hi HBM3e stacks with per-pin data rates of 9.2GT/s and peak memory bandwidth of over 1.2TB/sec per stack. That's a roughly 44% increase in memory bandwidth over HBM3, which provides the extra AI grunt the H200 has over the H100 AI GPU and its regular HBM3 memory.
The 141GB of HBM3e memory on the NVIDIA H200 AI GPU will feature up to 4.8TB/sec of memory bandwidth, which is up from the 80GB of HBM3 memory and up to 3.35TB/sec of memory bandwidth on the H100 AI GPU.
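To put that H200-versus-H100 comparison into concrete numbers, here's a quick sketch of the generational uplift, using only the capacity and bandwidth figures quoted above:

```python
# Generational uplift of the H200 (HBM3e) over the H100 (HBM3),
# using the capacity and bandwidth figures quoted in this article.
h100 = {"capacity_gb": 80,  "bandwidth_tbps": 3.35}   # HBM3
h200 = {"capacity_gb": 141, "bandwidth_tbps": 4.8}    # HBM3e

cap_gain = (h200["capacity_gb"] / h100["capacity_gb"] - 1) * 100
bw_gain = (h200["bandwidth_tbps"] / h100["bandwidth_tbps"] - 1) * 100

print(f"Capacity:  {cap_gain:.0f}% more memory")     # ~76% more capacity
print(f"Bandwidth: {bw_gain:.0f}% more bandwidth")   # ~43%, in line with the ~44% HBM3e-over-HBM3 uplift
```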
OpenAI's recent Sora text-to-video tech has blown China away, 'cold water' on their AI dreams
OpenAI blew everyone out of the water with its surprise Sora text-to-video AI service, which has forced China's entire AI industry to work out how to respond, leaving the country feeling like it has lost a battle it thought it had a real chance of winning.
China has been at the forefront of the global AI race, but the country has been on the back foot since OpenAI released ChatGPT back in 2022, and now that the text-to-video Sora has been teased... China is speechless. It thought it was succeeding in AI, but it's so far behind it's not even in the game.
The country has vast stores of data to feed into its AI, with functions like facial recognition being superior to those of many countries. But the huge advancements in generative AI by other countries -- the US, for example -- across text, images, and video have changed the AI landscape completely, leaving China lagging well behind.
Give an image to Genie and Google's AI can make a 2D platformer out of it, right there and then
Google DeepMind has an 'open-endedness team,' and we've just found out what they've been up to in recent times - namely, getting AI to generate 2D platformer worlds based on simple image prompts.
You can see how it works in a tweet from the team lead at Google DeepMind, Tim Rocktaschel (also a professor of AI at UCL).
This is Genie, a 'foundation world model' (with 11 billion parameters) trained on internet videos. Give Genie ('Generative Interactive Environments') an image of any kind of world and it can whip up a 2D environment you can then run around in, platformer-style.
NVIDIA samples two new AI GPUs for China, both comply with US export restrictions
NVIDIA is offering customers samples of two new artificial intelligence chips for the Chinese market, as NVIDIA CEO Jensen Huang looks to defend the company's market dominance in China against the tide of US export restrictions on AI GPUs for the country.
NVIDIA CEO Jensen Huang said: "We're sampling it with customers now. Both of them comply with the regulation without a license. We're looking forward to customer feedback on it. We're expecting that we're... going to go compete for business, and hopefully we can serve the market successfully".
Jensen didn't mention which AI GPUs NVIDIA is preparing for China, but back in November 2023, we began hearing about three new AI GPUs that the company was preparing for the country: H20, L20, and L2. These new AI GPUs are cut-down variants that meet the US export regulations, with the same latest features from NVIDIA, but their AI computing power has been culled.
AMD's next-gen MI400 AI GPU expected in 2025, MI300 AI GPU refresh in the works
AMD launched its new Instinct MI300X not too long ago, featuring up to 192GB of HBM3 memory with 5.3TB/sec of memory bandwidth, but now the next-gen Instinct MI400X is being teased for 2025.
AMD's next-gen Instinct MI400X should be based on the next-gen CDNA 4 GPU architecture and upgraded HBM3e memory, which is faster than the HBM3 used on the Instinct MI300X. According to leaker Kepler in a new post on X, AMD reportedly also has a refreshed MI300 in the works that should feature HBM3e memory.
NVIDIA has its current-gen Hopper H100 AI GPU on the market with HBM3 memory, but its beefed-up H200 AI GPU features the new ultra-fast HBM3e memory, and its next-gen Blackwell B100 AI GPU is expected to debut with HBM3e later this year, after being unveiled at the GPU Technology Conference (GTC) event in March.
US will need 'CHIPS Act 2' investment into semiconductor manufacturing to be the best for AI
The 2022 CHIPS Act saw $39 billion in direct grants, plus loans and loan guarantees worth $75 billion to spark up domestic semiconductor production in the United States... and it needs way more money.
US Commerce Secretary Gina Raimondo said that the US needs continued investment in semiconductor manufacturing in order to take global leadership and meet demand from AI technologies. Raimondo said: "I suspect there will have to be - whether you call it Chips Two or something else - continued investment if we want to lead the world. We fell pretty far. We took our eye off the ball. When I talk to him or other customers in the industry, the volume of chips that they project they need is mind boggling".
Intel has plans for a $20 billion plant in Ohio and a $20 billion expansion of its plant in Arizona, and is also in discussions for a further $10 billion in grant and loan incentives. Intel recently announced the first customer for its new Intel 18A process node: Microsoft, which will build a new chip on Intel 18A. Intel also teased its next-gen Intel 14A process node for 2026, by which point it plans to once again be home to the world's fastest chips.
Google's lightweight and free Gemma AI models are optimized to run on NVIDIA GPUs
"Built from the same research and technology used to create the Gemini models," Google's new lightweight Gemma LLMs have been designed to run natively on local PC hardware powered by NVIDIA GPUs.
For use in third-party AI applications, Gemma 2B and Gemma 7B were both developed by Google DeepMind, with each open model capable of surpassing "significantly larger models on key benchmarks," according to Google, while running on a GeForce RTX-powered laptop or PC.
The Gemma pre-trained models are designed to be safe and reliable, with NVIDIA adding that the models are optimized to run on its open-source NVIDIA TensorRT-LLM library - accelerated by the over 100 million NVIDIA RTX GPUs available in AI PCs. Gemma can also run on JAX and PyTorch.
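For developers who want to try the open models locally, a minimal sketch of running Gemma on an NVIDIA GPU through PyTorch might look like the following. This assumes the Hugging Face transformers library and the google/gemma-2b checkpoint, neither of which is detailed in the announcements above:

```python
# Minimal sketch: run Google's open Gemma 2B model on a local NVIDIA GPU via PyTorch.
# Assumes the Hugging Face `transformers` library and access to the google/gemma-2b weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit comfortably in RTX-class VRAM
    device_map="cuda",          # place the model on the local NVIDIA GPU
)

prompt = "Explain what HBM3e memory is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

NVIDIA's TensorRT-LLM library is the more heavily optimized path on RTX hardware, per the announcement, but a plain PyTorch route like the sketch above is the simplest way to kick the tires.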
Samsung spools up new semiconductor unit in Silicon Valley, will develop next-gen AGI chips
Samsung has just announced it's forming a new semiconductor development organization in Silicon Valley called AGI Computing Lab, which will work on developing AGI-specific semiconductors.
The new organization, called the AGI Computing Lab, is being led by Dr. Woo Dong-hyuk in a senior vice president role. Woo is a former Tensor Processing Unit (TPU) developer from Google, where he was one of the three people who designed the TPU platform.
Samsung has been providing HBM memory chips for AI GPU makers like NVIDIA, but now the company is pushing into its own AGI semiconductor business, with a $100 billion war chest to take on the likes of TSMC and Intel, which just announced its latest Intel 18A and next-gen Intel 14A process nodes. Intel plans on taking the crown from TSMC in making the world's fastest chips, aiming for process leadership over TSMC by 2026.
NVIDIA CEO: we need 14 different planets, 3 galaxies, 4 more suns to power future AI GPU tech
NVIDIA just posted its latest quarterly earnings, blowing expectations out of the water with $22 billion in revenue, driven largely by the company's AI GPU dominance.
The company has been selling AI GPUs as fast as it can make them, while slowly working through the supply chain holdups that were stopping companies and governments from getting NVIDIA AI GPU hardware. TSMC (Taiwan Semiconductor Manufacturing Company) has been in the middle of this, with its advanced CoWoS packaging technology also flexing its (required) muscle here.
If you think we've seen enough AI so far, you're wrong... the technology industry is all-in with AI like it's 3D + VR + RGB + ray tracing all at the same time. NVIDIA CEO Jensen Huang says we're in the first year of a 10-year cycle in AI, with Jensen explaining: "Accelerated computing and generative A.I. have hit the tipping point. Demand is surging worldwide across companies, industries and nations".
NVIDIA says next-gen B100 AI GPU will be 'supply constrained' as 'demand far exceeds supply'
NVIDIA posted its fourth fiscal quarter earnings today, smashing Wall Street predictions for earnings and sales, and we even got a tease of its next-gen Blackwell B100 AI GPU.
NVIDIA CEO Jensen Huang explained: "Whenever we have new products, as you know, it ramps from zero to a very large number and you can't do that overnight". The new products Jensen is referring to are NVIDIA's upcoming beefed-up H200 AI GPU and its next-gen B100 AI GPU, which are both dropping this year.
The company has recently worked through the supply chain issues that were hampering AI GPU shipments, and it reported record revenue for the quarter, driven by its Data Center business. Even so, NVIDIA is selling AI GPUs as fast as -- in fact, faster than -- it can make them.



















