Artificial Intelligence - Page 71
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
OpenAI's recent Sora text-to-video tech has blown China away, 'cold water' on their AI dreams
OpenAI blew everyone out of the water with its surprise Sora text-to-video AI service, forcing China's entire AI industry to work out how to respond, and leaving the country feeling like it has lost a battle it thought it had a strong chance of winning.
China has been at the forefront of the global AI race, but the country has been on the back foot since OpenAI released ChatGPT back in 2022, and now that text-to-video Sora has been teased, China is speechless. It thought it was succeeding in AI, but it's so far behind it's not even in the game.
The country has vast stores of data to feed into its AI, with functions like facial recognition being superior to those of many other countries. But the huge advancements in generative AI elsewhere -- in the US, for example -- spanning text, images, and videos have changed the AI landscape completely, leaving China lagging well behind.
Give an image to Genie and Google's AI can make a 2D platformer out of it, right there and then
Google DeepMind has an 'open endedness team', and we've just found out what it has been up to recently: getting AI to generate 2D platformer worlds from simple image prompts.
You can see how it works in the above tweet from Tim Rocktäschel, the team lead at Google DeepMind (and also a professor of AI at UCL).
This is Genie, a 'foundation world model' (with 11 billion parameters) trained on internet videos. Give Genie ('Generative Interactive Environments') an image of any kind of world and it can knock up a 2D environment you can then run around in platformer-style.
NVIDIA samples two new AI GPUs for China, both comply with US export restrictions
NVIDIA is offering customers samples of two new artificial intelligence chips for the Chinese market, where CEO Jensen Huang wants to defend the company's market dominance against the tide of US export restrictions on AI GPUs.
NVIDIA CEO Jensen Huang said: "We're sampling it with customers now. Both of them comply with the regulation without a license. We're looking forward to customer feedback on it. We're expecting that we're... going to go compete for business, and hopefully we can serve the market successfully".
Jensen didn't mention which AI GPUs NVIDIA is preparing for China, but back in November 2023, we began hearing about three new AI GPUs that the company was preparing for the country: H20, L20, and L2. These new AI GPUs are cut-down variants that meet the US export regulations, with the same latest features from NVIDIA, but their AI computing power has been culled.
AMD's next-gen MI400 AI GPU expected in 2025, MI300 AI GPU refresh in the works
AMD launched its new Instinct MI300X not too long ago, featuring up to 192GB of HBM3 memory with 5.3TB/sec of memory bandwidth, but now the next-gen Instinct MI400X is being teased for 2025.
AMD's next-gen Instinct MI400X should be based on the next-gen CDNA 4 GPU architecture and upgraded HBM3e memory, which is faster than the HBM3 used on Instinct MI300X. According to leaker Kepler in a new post on X, AMD will reportedly also have a refreshed MI300 that should also feature HBM3e memory.
NVIDIA has its current-gen Hopper H100 AI GPU on the market with HBM3 memory, but its beefed-up H200 AI GPU features the new ultra-fast HBM3e memory, and its next-gen Blackwell B100 AI GPU is expected to debut with HBM3e later this year, after being unveiled at the GPU Technology Conference (GTC) event in March.
US will need 'CHIPS Act 2' investment into semiconductor manufacturing to be the best for AI
The 2022 CHIPS Act saw $39 billion in direct grants, plus loans and loan guarantees worth $75 billion to spark up domestic semiconductor production in the United States... and it needs way more money.
US Commerce Secretary Gina Raimondo said that the US needs continued investment in semiconductor manufacturing in order to take global leadership and meet demand from AI technologies. Raimondo said: "I suspect there will have to be - whether you call it Chips Two or something else - continued investment if we want to lead the world. We fell pretty far. We took our eye off the ball. When I talk to him or other customers in the industry, the volume of chips that they project they need is mind boggling".
Intel has plans for a $20 billion plant in Ohio and a $20 billion expansion of its plant in Arizona, and is also in discussions for a further $10 billion in grant and loan incentives. Intel recently announced the first customer for its new Intel 18A process node: Microsoft, which will build a new chip on Intel 18A in the future. Intel also teased its next-gen Intel 14A process node for 2026, by which point it aims to be making the fastest chips in the world.
Google's lightweight and free Gemma AI models are optimized to run on NVIDIA GPUs
"Built from the same research and technology used to create the Gemini models," Google's new lightweight Gemma LLMs have been designed to run natively on local PC hardware powered by NVIDIA GPUs.
For use in third-party AI applications, Gemma 2B and Gemma 7B were both developed by Google DeepMind, with each open model capable of surpassing "significantly larger models on key benchmarks," according to Google, while running on a GeForce RTX-powered laptop or PC.
The Gemma pre-trained models are designed to be safe and reliable, with NVIDIA adding that the models are optimized to run on its open-source NVIDIA TensorRT-LLM library - accelerated by the over 100 million NVIDIA RTX GPUs available in AI PCs. Gemma can also run on JAX and PyTorch.
Samsung spools up new semiconductor unit in Silicon Valley, will develop next-gen AGI chips
Samsung has just announced it's forming a new semiconductor development organization in Silicon Valley called AGI Computing Lab, which will work on developing AGI-specific semiconductors.
The new AGI Computing Lab is being led by Dr. Woo Dong-hyuk in a senior vice president role. He is a former Tensor Processing Unit (TPU) developer from Google, where he was one of the three people who designed the TPU platform.
Samsung has been supplying HBM memory chips to AI GPU makers like NVIDIA, but now the company is pushing into its own AGI semiconductor business, with a $100 billion war chest to take on the likes of TSMC and Intel, which just announced its latest Intel 18A and next-gen Intel 14A process nodes. Intel plans to take the crown from TSMC in making the world's fastest chips this year, aiming to cement that lead over TSMC in 2026.
NVIDIA CEO: we need 14 different planets, 3 galaxies, 4 more suns to power future AI GPU tech
NVIDIA just posted its latest quarterly earnings, blowing expectations out of the water with $22 billion in revenue, driven largely by the company's AI GPU dominance.
The company has been selling AI GPUs as fast as it can make them, slowly working through the supply chain holdups that were stopping companies and governments from getting NVIDIA AI GPU hardware. TSMC (Taiwan Semiconductor Manufacturing Company) has been in the middle of this, with its advanced CoWoS packaging technology also flexing its (required) muscle here.
If you think we've seen enough of AI so far, you're wrong... the technology industry is all-in on AI like it's 3D + VR + RGB + ray tracing all at the same time. NVIDIA CEO Jensen Huang says we're in the first year of a 10-year AI cycle, explaining: "Accelerated computing and generative A.I. have hit the tipping point. Demand is surging worldwide across companies, industries and nations".
NVIDIA says next-gen B100 AI GPU will be 'supply constrained' as 'demand far exceeds supply'
NVIDIA posted its fourth fiscal quarter earnings today, smashing Wall Street predictions for earnings and sales, and we even got a tease of its next-gen Blackwell B100 AI GPU.
NVIDIA CEO Jensen Huang explained: "Whenever we have new products, as you know, it ramps from zero to a very large number and you can't do that overnight". The new products Jensen is referring to are NVIDIA's upcoming beefed-up H200 AI GPU and its next-gen B100 AI GPU, which are both dropping this year.
NVIDIA has recently eased the supply chain issues that were hampering AI GPU shipments, and it reported record revenue for the quarter, driven by its Data Center business. Even so, the company is selling AI GPUs as fast as -- in fact, faster than -- it can make them.
ChatGPT loses its marbles, talks gibberish, and worries one user by saying it's 'in the room'
ChatGPT appears to have misfired in spectacular fashion at times since yesterday, and although the gremlins in the works may now have been smoothed over by OpenAI, an official investigation into what caused the problems hasn't yet been concluded.
So, what did go wrong? Well, as you can see in the above post on X (formerly Twitter), the AI started talking complete nonsense, sprinkling in Spanglish, and generally behaving highly erratically. There are quite a few reports of this kind of behavior, too.
This is the sort of thing we've seen from AI models before, but usually close to launch, when the products are only just out of testing, and still being honed.