Artificial Intelligence
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more.
OpenAI safety researcher quits amid safety concerns about a human-level AI
An OpenAI safety researcher has shared a message on her Substack saying she is quitting her position at the company as she believes her goal of implementing humanity-protecting policies into the development of AI can be better achieved externally.
OpenAI has seen a string of pivotal staff members leave the company recently, and now another name joins the list. Rosie Campbell joined OpenAI in 2021 to work on safety policies for AI development, and now, according to a Substack post, the AI safety researcher is departing the company, citing internal changes to workplace culture and to her ability to perform what she believes is the most fundamental part of her job: AI safety.
Campbell wrote in the Substack post that she was a member of OpenAI's Policy Research team, where she worked closely with Miles Brundage, a senior staffer on OpenAI's Artificial General Intelligence (AGI) team, which is dedicated to making sure the world is prepared for AGI when it's achieved. Notably, Brundage left OpenAI in October and published a letter on Substack citing concerns with OpenAI's internal policies on AGI safety, writing that there are "gaps" in the company's readiness policy.
OpenAI announces '12 days of OpenAI' Shipmas event: 12 livestreams, text-to-video Sora unveil
OpenAI has just announced its new "Shipmas" period, with new features, products, and demos for the next 12 days starting December 5.
The ChatGPT creator is expected to debut its much-anticipated text-to-video service codenamed Sora, as well as a new reasoning model, according to sources "familiar with OpenAI's plans" cited by The Verge. OpenAI CEO Sam Altman confirmed the 12 days of "Shipmas" event on stage at The New York Times' DealBook conference on Wednesday morning, but didn't elaborate.
Leading up to the launch, OpenAI staffers were teasing some of the upcoming releases on X, with one of them posting "What's on your Christmas list?" while another posted "Got back just in time to put up the shipmas tree". Sora boss Bill Peebles responded to a staffer who said that OpenAI is "unbelievably back" to which he replied with a single word: "Correct".
Google announces generative video model 'Veo' to compete with OpenAI's impending Sora AI model
Google has just introduced its new generative AI video model, Veo, beating OpenAI's text-to-video service Sora to market by launching Veo in private preview on Google's in-house Vertex AI platform.
Google's new Veo model can generate "high-quality" 1080p resolution videos in multiple visual and cinematic styles, all from text- or image-based prompts. The search giant unveiled its text-to-video model a few months ago, saying generated clips could run "beyond a minute" in length without specifying further, and the videos Veo has produced since are pretty astounding.
The latest version of Google's Imagen 3 text-to-image generative model will be available to all Google Cloud customers on Vertex "starting next week," says the company, expanding on its US-first release on Google's AI Test Kitchen in August 2024.
Amazon teases its next-gen Trainium3 AI accelerator is 4x faster than Trainium2, drops in late 2025
Amazon Web Services (AWS) has teased its next-gen Trainium3 AI accelerator at re:Invent on Tuesday, promising 4x higher performance than its current-gen Trainium2 chip.
The new Trainium3 AI accelerator is due in late 2025, with Gadi Hutt, director of product and customer engineering for AWS' Annapurna Labs team, expecting the new chip to be the very first dedicated machine learning accelerator built on a 3nm process node (at TSMC) and to deliver a 40% improvement in efficiency over Trainium2.
Amazon hasn't been too clear on the exact performance of its Trainium3, but the 4x performance improvement figure is based on AWS' complete "UltraServer" configuration, which The Register reports is still in development. The outlet works out that the Trainium2 UltraServer features 64 accelerators, capable of 83.2 petaFLOPS of compute performance (unknown precision).
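The Register's figures above imply some simple per-chip math; here is a back-of-the-envelope sketch, assuming the 4x uplift applies at the UltraServer level and that a Trainium3 UltraServer keeps the same 64-accelerator count (neither of which AWS has confirmed):

```python
# Back-of-the-envelope math from the reported figures.
# Assumptions (not confirmed by AWS): the 4x uplift is measured at the
# UltraServer level, and a Trainium3 UltraServer also packs 64 accelerators.
trainium2_ultraserver_pflops = 83.2  # reported; numeric precision unspecified
accelerators_per_ultraserver = 64

# Implied per-chip throughput of Trainium2
per_chip_pflops = trainium2_ultraserver_pflops / accelerators_per_ultraserver

# Implied Trainium3 UltraServer throughput if the 4x claim holds
trainium3_ultraserver_pflops = 4 * trainium2_ultraserver_pflops

print(f"Trainium2 per chip: ~{per_chip_pflops:.1f} petaFLOPS")        # ~1.3
print(f"Trainium3 UltraServer: ~{trainium3_ultraserver_pflops:.1f}")  # ~332.8
```

That works out to roughly 1.3 petaFLOPS per Trainium2 chip, and around 332.8 petaFLOPS for a Trainium3 UltraServer if the claim holds under these assumptions.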
Meta 'taking an open approach' with nuclear energy, small modular reactors for AI datacenters
Meta is shifting into the warm arms of nuclear power for its AI training, with the company posting a new blog on its sustainability website saying that "we believe nuclear energy will play a pivotal role in the transition to a cleaner, more reliable, and diversified electric grid".
In the new post, Meta announces that it is going to release a request for proposals (RFP) to find nuclear energy developers to help it on its nuclear-powered journey. Meta aims to add 1-4 GW of new nuclear generation capacity in the United States, to be delivered "starting in the early 2030s".
Meta explains: "We are looking to identify developers that can help accelerate the availability of new nuclear generators and create sufficient scale to achieve material cost reductions by deploying multiple units, both to provide for Meta's future energy needs and to advance broader industry decarbonization. We believe working with partners who will ultimately permit, design, engineer, finance, construct, and operate these power plants will ensure the long-term thinking necessary to accelerate nuclear technology".
NVIDIA's next-gen Rubin AI GPU could be pushed up 6 months ahead of schedule with HBM4
NVIDIA's next-generation Rubin AI GPU architecture is rumored to be pulled forward by six months, built on TSMC's 3nm process with ultra-fast next-gen HBM4 memory.
The new Rubin AI GPU architecture is the successor to the Blackwell GPU architecture, which is being used in the current fleet of B200 and GB200 chips, as well as the future GB300 series AI GPU that we're hearing more and more about lately. In a new report from UDN, we're hearing that NVIDIA is already working with supply chain partners in Taiwan on the Rubin AI GPU architecture and its new R100-powered AI servers.
Rubin was originally scheduled for 2026, but UDN's sources say the company has started development on Rubin early, so that the AI boom can roll from one AI GPU generation to the next (Blackwell to Rubin, and so on).
NVIDIA's next-gen GB300 AI platform in mid-2025: more perf than GB200, fully liquid-cooled
NVIDIA's beefed-up GB300 AI servers are expected to hit the market in mid-2025, rolling out with even more performance, faster (and more) 12-Hi HBM3E memory, and more.
In a new report from UDN, we're learning that supply chain manufacturers have already started the process for NVIDIA's next-gen GB300 AI servers, which will see massive power consumption increases over the already power-hungry GB200 AI servers.
We heard not too long ago, in October 2024, that NVIDIA was reportedly rebranding its upcoming "Blackwell Ultra" AI GPUs to the B300 series, with B300 and GB300 chips using TSMC's new CoWoS-L advanced packaging. The B200 Ultra was reportedly renamed to the B300, the GB200 Ultra to the GB300, and the B200A Ultra and GB200A Ultra are now the B300A and GB300A, respectively.
Elon Musk has priority access to NVIDIA GB200 AI GPU delivery in January 2025, costs $1.08B
Elon Musk has reportedly directly approached NVIDIA CEO Jensen Huang, offering a premium price to get priority access to its new GB200 AI servers, with a hefty $1.08 billion order.
In a new report from DigiTimes, industry sources say that Elon Musk's xAI startup wants its hands on NVIDIA GB200 AI servers, and it doesn't want to wait: Musk stepped in and made a call to NVIDIA CEO Jensen Huang, waving $1.08 billion around for priority access to the most powerful AI GPU silicon on the planet.
xAI's huge $1.08 billion order for NVIDIA GB200 AI GPUs will be manufactured by NVIDIA's key partner, Foxconn, and should be delivered in January 2025.
Panasonic resurrects its founder as an AI trained on thousands of recordings
Panasonic was originally founded in 1918 by Kōnosuke Matsushita under the name Matsushita Electric Housewares Manufacturing Works, and now the founder has been resurrected by the modern-day Panasonic.
Panasonic collaborated with the University of Tokyo to resurrect Matsushita, who died in 1989. Engineers fed 3,000 recordings of Matsushita into an AI, along with relevant writings, lectures, and interviews. Panasonic's Peace and Happiness through Prosperity (PHP) Institute, a think tank originally founded by Matsushita, plugged all of the relevant data into an AI and trained it to create an AI character designed to replicate Matsushita's way of thinking and speaking style.
What's the goal of this? Panasonic wants to use the AI replica of the company's founder as a consultant, querying it in difficult situations to see what Matsushita would do under the current circumstances. Notably, Matsushita was renowned for his management philosophy, and is celebrated in Japan for transforming what was once a business selling lamps into the Panasonic of today.
Microsoft responds to claims all Word and Excel files are being used to train AI
Companies developing artificial intelligence tools require large swaths of data for AI training, and what better way to gather large quantities of data than by scraping it from people using popular applications or programs?
@nixCraft, an author at Cyberciti.biz, has claimed Microsoft is participating in this type of scheme with Office and its Connected Experiences. According to nixCraft, Redmond's Connected Experiences feature automatically scrapes data from Word and Excel files, and that data is used to train Microsoft's AI tools, such as Copilot. According to reports, the feature is enabled by default, which means user-generated Word documents and Excel files are included in Microsoft's AI training dataset unless the user manually disables it.
However, following reports echoing @nixCraft's claims, Microsoft has responded, saying customer data from Microsoft 365 apps, which include Word and Excel, isn't used to train the company's large language models (LLMs), the underlying technology powering AI tools such as Copilot or ChatGPT. Microsoft also added, "This setting only enables features requiring internet access like co-authoring a document."