Artificial Intelligence - Page 33
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 33.
NVIDIA Blackwell GPU compute stats: 30% more FP64 than Hopper, 200x cheaper simulation costs
NVIDIA has published a new blog post providing some more details about the next level of performance offered by its new Blackwell GPU architecture.
The new blog post from NVIDIA shows the gigantic performance leap that Blackwell will deliver for research fields including quantum computing, drug discovery, fusion energy, physics-based simulation, weather simulation, scientific computing, and more.
NVIDIA has another major goal for Blackwell beyond industry-leading AI performance: according to the company, Blackwell can simulate weather patterns at 200x lower cost and with 300x less energy than Hopper, and can run planet-scale digital twin simulations at 65x lower cost and with 58x less energy. Absolutely astonishing numbers from Blackwell.
Commodore 64 PC runs AI to generate images: 20 minutes per 90 iterations for 64 pixels
I still remember using and playing games on the Commodore 64, but I never thought I'd see the day when the old-school machine would be running generative AI to create retro sprites. Check it out:
Nick Bild is a developer and hobbyist who documented his journey of building a generative AI tool for the Commodore 64 that can create 8 x 8 sprites, displayed at 64 x 64 resolution. The idea is to use AI to help inspire game design concepts, but we're talking about the Commodore 64 here, so don't expect some AI-powered Crysis on the C64.
Training the generative AI model was done on a traditional PC, so while the AI model itself runs on the Commodore 64, you'll need a modern PC to get it up and running. It will take 20 minutes or so to run just 90 iterations for the final 64 x 64 image, so it's not going to blow NVIDIA's current-gen Hopper H100 AI GPU out of the water, or put AI companies out of business. Impressive for the Commodore 64, nonetheless.
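To make that train-on-a-PC, run-elsewhere split concrete, here is a minimal sketch in Python. Everything in it is an assumption made purely for illustration -- the synthetic sprite data, the per-pixel probability model, the iterative resampling loop, and the "sprite_model.bin" file name are not taken from Bild's actual project -- but it shows how a model trained on a modern PC can be reduced to a few bytes of weights that a machine as small as the C64 could plausibly consume.

```python
# Minimal sketch of a "train on a PC, run the tiny model elsewhere" workflow.
# NOT Nick Bild's code or algorithm -- the data, model, and file format are
# assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "training set": random mirrored 8x8 binary sprites (real training
# data would be hand-drawn game sprites).
def make_sprite():
    half = rng.integers(0, 2, size=(8, 4))
    return np.hstack([half, half[:, ::-1]])   # mirror halves for a sprite-like look

train = np.stack([make_sprite() for _ in range(500)])

# "Training": estimate the probability that each of the 64 pixels is set.
pixel_prob = train.mean(axis=0)               # shape (8, 8)

# Export the model as 64 single-byte weights -- small enough for a C64 to load.
(pixel_prob * 255).astype(np.uint8).tofile("sprite_model.bin")

# "Inference": iteratively resample pixels toward the learned distribution.
# Each pass is one iteration; the article says ~90 iterations take ~20 minutes
# on the real C64 hardware.
def generate(iterations=90):
    sprite = rng.integers(0, 2, size=(8, 8))
    for _ in range(iterations):
        y, x = rng.integers(0, 8), rng.integers(0, 8)
        sprite[y, x] = 1 if rng.random() < pixel_prob[y, x] else 0
    return sprite

print(generate())
```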
OpenAI is planning to launch a search engine next week, will use AI to compete with Google
According to a new report from Reuters, OpenAI (the massive AI firm behind ChatGPT, backed by billions from Microsoft) is planning to launch an AI-powered search engine on Monday. The engine would compete with Google and with Perplexity, an AI search startup founded by a former OpenAI researcher.
Going up against the search giant that is Google with the aid of AI is not uncommon. Microsoft's long-running Bing search recently added OpenAI ChatGPT integration - but only for paid customers. AI and search engines are set to go hand-in-hand, as Google is also integrating generative AI into search and other products like Gmail.
The Reuters report cites 'two sources familiar with the matter,' so nothing is official. However, OpenAI's stealth launch of a new search engine (possibly in beta or limited form) next week would mark an exciting turn for the company. Bloomberg has reported on the company's search engine plans in the past, so it sounds like it's only a matter of time.
Microsoft gifts first-of-its-kind AI model to US intelligence agencies
A new report from Bloomberg reveals Microsoft has created a new generative AI model that is designed specifically for US intelligence agencies.
The report states that the main difference between this new AI model and those powering popular AI tools such as ChatGPT is that it is completely cut off from the internet, making it the first of its kind. Known AI models such as ChatGPT, DALL-E, and Microsoft's Copilot rely on cloud services to process prompts, train on data, and reach conclusions. The model handed over to US intelligence agencies, however, requires no cloud services at all; it has no internet access whatsoever, and is therefore secure.
Why do US intelligence agencies want an advanced AI model? According to the report, the model's security means top-secret information can now be input and analyzed, which will help intelligence agencies understand and sift through large volumes of classified information.
Microsoft set to build $3.3 billion cloud campus to fuel AI growth
Microsoft has snapped up the same location Foxconn acquired to build an LCD panel manufacturing plant, but instead of panels, Microsoft will be constructing a data center.
After years of delays, Foxconn's LCD manufacturing project at Mount Pleasant, Wisconsin, never materialized. Over the years, Microsoft snapped up more and more of the land originally set aside for the project, eventually resulting in the Taiwanese company pulling out and Microsoft scooping up the rest of the site.
Microsoft's proposal for the data center includes infrastructure and community improvements for the local area, with promises that it will upskill 100,000 residents across the state to be competent with generative AI technologies such as Microsoft Copilot, and train and certify 3,000 local AI software developers and 1,000 cloud datacenter technicians. Moreover, Microsoft President Brad Smith backed the push for AI, saying it has the potential to revolutionize manufacturing plants, assist workers, and create more jobs.
NVIDIA's next-gen R100 AI GPU: TSMC 3nm with CoWoS-L packaging, next-gen HBM4 in Q4 2025
NVIDIA is still cooking up its new Blackwell GPU architecture and B200 AI GPU, and while we've already had teases of the next-gen Vera Rubin GPU, we're now hearing that the next-gen R100 AI GPU will enter mass production in Q4 2025.
According to a new post by industry insider Ming-Chi Kuo, NVIDIA's next-generation AI chip will enter mass production in Q4 2025 with the R-series and the R100 AI GPU, with the system/rack solution entering mass production in Q1 2026. The next-gen R100 will be made on TSMC's newer N3 process node (the B100 uses TSMC N4P) and will use TSMC's CoWoS-L packaging, the same as the B100.
NVIDIA's next-gen R100 AI GPU features a roughly 4x reticle-size design, compared to the B100's 3.3x reticle design, while the interposer size for the R100 "has yet to be finalized," with Ming-Chi saying there are 2-3 options. The R100 will feature 8 x HBM4 stacks, while the GR200's new Grace CPU will use TSMC's N3 process (compared to TSMC's N5 for the GH200 and GB200's Grace CPUs).
US government plans to prevent AI software like ChatGPT from getting to China
The US government is reportedly preparing to make another move against China to prevent the nation from gaining access to the US's best artificial intelligence capabilities.
The Biden administration has already taken measures to prevent China from gaining AI supremacy by banning the export of certain high-end NVIDIA graphics cards, which are used to train AI models, and by proposing a rule that requires all US cloud companies to inform the government when foreign customers use their cloud systems to train AI models.
According to reports, more guardrails are being considered by the Commerce Department, which plans to target the export of proprietary or closed-source AI models. The idea behind these purported regulations is to prevent US-based AI giants such as the Microsoft-funded OpenAI, the company behind the popular AI tool ChatGPT, or Google DeepMind, creators of Gemini, from taking their world-leading AI models to the global market and selling them to the highest bidder.
Meta AI boss confirms the company has purchased around $30 billion worth of NVIDIA AI GPUs
Meta has purchased 500,000 more AI GPUs, bringing its total to 1 million AI GPUs, a stockpile valued at around $30 billion.
We heard about the gargantuan AI GPU hardware investment from Meta AI boss Yann LeCun at the AI Summit, where he also said that upcoming variations of the Llama 3 large language model are on the way. LeCun pointed to computational limitations and GPU costs as factors slowing the progression of AI.
For comparison, OpenAI CEO Sam Altman plans to spend $50 billion a year on artificial general intelligence (AGI) development, using 720,000 NVIDIA H100 AI GPUs that cost a hefty $21.6 billion. Microsoft is aiming for 1.8 million AI GPUs by the end of 2024, while OpenAI wants to have 10 million AI GPUs before the end of the year.
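As a quick sanity check on those dollar figures, both reports imply a per-GPU price of roughly $30,000. This is a back-of-envelope inference from the numbers above, not a price quoted by NVIDIA, Meta, or OpenAI:

```python
# Back-of-envelope check of the implied per-GPU price from the figures above.
openai_implied = 21.6e9 / 720_000     # $21.6 billion for 720,000 H100 GPUs
meta_implied = 30e9 / 1_000_000       # ~$30 billion for ~1 million GPUs (Meta story above)
print(f"Altman figure implies ${openai_implied:,.0f} per GPU")  # -> $30,000
print(f"Meta figure implies   ${meta_implied:,.0f} per GPU")    # -> $30,000
```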
Sam Altman says AI will be able to 'know absolutely everything' about you
In a new interview with MIT Technology Review, OpenAI CEO Sam Altman described what the future will look like as artificial intelligence-powered systems become more ingrained in our lives.
The CEO of one of the companies leading the charge in AI development began the interview by saying AI tools will replace the smartphone as the piece of technology we depend on most in our daily lives, and that the capabilities of AI will be so great it will feel like "this thing that is off helping you." Altman went on to describe the perfect AI app as a "super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I've ever had, but doesn't feel like an extension."
Additionally, this level of AI could attempt things outside its known capabilities, fail, and then come back to the user with follow-up questions. The answers the user provides to those questions are then integrated into its second attempt at the task. So, what will power this crazy new AI? Altman believes there's a chance that we won't even need a specific piece of hardware to use this AI on the go, as the new app could simply access the cloud.
Apple is working on its own chip to run AI software in its data center servers
Apple is reportedly working on its own AI chip, according to sources cited by The Wall Street Journal, which reports the Cupertino-based giant is building AI chips for its data centers to run the new AI-based features it will announce at WWDC 2024 next month.
The Worldwide Developers Conference (WWDC) is where Apple will unveil its plans for the future of AI across its products and services, with the WSJ reporting: "Apple has been working on its own chip designed to run artificial intelligence software in data center servers, a move that has the potential to provide the company with a key advantage in the AI arms race."
The project is called ACDC, which stands for Apple Chips in Data Center, and I can see some truly awesome marketing from Apple using the "ACDC" branding if they do it right. Come on, Apple... you know you're going to do it anyway.