Artificial Intelligence - Page 31
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
NVIDIA CEO Jensen Huang to boost company headcount to 50K, plus 100 million AI assistants
NVIDIA CEO Jensen Huang plans to increase the company's headcount from 32,000 to 50,000 staffers, supported by 100 million AI assistants to "increase the company's overall output".
CNBC's Power Lunch picked up on a recent podcast with Jensen, reporting that he doesn't think AI will eliminate jobs, and that the company is embracing AI (obviously, it's the AI leader) with 18,000+ more staffers and 100 million AI assistants.
What will the AI assistants do? They'll help run the new AI models and launch the AI applications being built by and around NVIDIA. Huang also praised Elon Musk for standing up a supercomputer in just 19 days, a job he says "typically takes about 3 years to build".
Analyst says NVIDIA Blackwell GPU production volume will hit 750K to 800K units by Q1 2025
NVIDIA's ramp into Blackwell appears to be "quite strong", with the issues in the initial Blackwell silicon "totally behind us", says analyst firm Morgan Stanley.
Morgan Stanley analysts posted a note recently, upbeat on Blackwell's potential impact on NVIDIA's top line heading into the final months of 2024. The firm explains: "According to our checks of the GPU-testing supply chain, Blackwell chip output should be around 250,000-300,000 in [the fourth quarter], contributing $5 billion to $10 billion in revenue, which is still tracking [Morgan Stanley lead analyst] Joe Moore's bullish forecast".
The investment firm said that Blackwell chip volume could reach 750,000 to 800,000 units in Q1 2025, roughly a 3x increase from Q4 2024. The firm also expects Hopper volume (including the H200 and H20) to be around 1.5 million units in Q4 2024, gradually ramping down to 1 million units in Q1 2025. With B200 chip prices around 60-70% higher than the H200, the firm added, Blackwell revenue should surpass Hopper revenue by Q1 2025.
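As a quick back-of-the-envelope check on those figures (our own arithmetic on the quoted volumes, not anything from the Morgan Stanley note), the ramp does work out to roughly 3x quarter over quarter:

    # Sanity check of the quoted Blackwell ramp, using only the unit volumes cited above.
    q4_2024 = (250_000, 300_000)   # reported Q4 2024 Blackwell chip output
    q1_2025 = (750_000, 800_000)   # projected Q1 2025 Blackwell chip volume

    print(f"Low end:  {q1_2025[0] / q4_2024[0]:.1f}x")   # 3.0x
    print(f"High end: {q1_2025[1] / q4_2024[1]:.1f}x")   # ~2.7x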
Phison president promises AI training and AI tuning with a $50K workstation system
Phison says that $1 million to $1.5 million AI systems are a thing of the past, promising a new $50,000 workstation built for AI training and fine-tuning.
In a recent chat with CRN, Phison General Manager and President Michael Wu explained: "We've changed that $1 million or $1.5 million investment, the minimum requirement to have a fine-tuning machine to create ChatGPT, to $50,000. You no longer need three DGX GPUs anymore. You can do it with a single workstation with four workstation GPUs and with two of our aiDAPTIV+ SSDs that are treated as virtual memory for the GPU".
He continued: "Furthermore, when we demonstrated a 70-billion parameter machine at NVIDIA's GTC, people didn't know how to use it. Nobody has an AI engineer. So we created a software tool called aiDAPTIV+ Pro Suite that lets you go from putting a PDF of your proprietary document to the system to fine tune the 70-billion parameter model to building a chatbot like ChatGPT. We are taking advantage of all the big investment that Meta has made on the open source Llama 3 to create a custom AI for you".
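Phison's aiDAPTIV+ Pro Suite is proprietary, but the underlying idea of spilling model and optimizer state from GPU memory to fast NVMe SSDs during fine-tuning can be sketched with open-source tooling. The snippet below is a rough, illustrative configuration using DeepSpeed's ZeRO-Infinity NVMe offload, not Phison's software; the batch sizes and the /mnt/nvme_cache path are placeholders.

    # Illustrative only: not Phison's aiDAPTIV+, just the same general concept of
    # using NVMe storage as spill space so a large model can be fine-tuned on a
    # workstation-class GPU setup. Uses DeepSpeed ZeRO-Infinity's NVMe offload.
    import json

    ds_config = {
        "train_micro_batch_size_per_gpu": 1,          # placeholder value
        "gradient_accumulation_steps": 16,            # placeholder value
        "bf16": {"enabled": True},
        "zero_optimization": {
            "stage": 3,                               # partition params, grads, optimizer state
            "offload_param":     {"device": "nvme", "nvme_path": "/mnt/nvme_cache"},
            "offload_optimizer": {"device": "nvme", "nvme_path": "/mnt/nvme_cache"},
        },
    }

    with open("ds_zero_infinity.json", "w") as f:
        json.dump(ds_config, f, indent=2)

    # The config can then be handed to Hugging Face's Trainer via
    # TrainingArguments(deepspeed="ds_zero_infinity.json") when fine-tuning an
    # open model such as Llama 3 70B.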
NVIDIA's next-gen Blackwell Ultra 'B300' AI GPU for GB300 AI servers: socketed design rumored
NVIDIA is rumored to be moving toward a socketed design for its next-gen GB300 AI servers, based on the upcoming Blackwell Ultra AI chips arriving in 2025.
In a new report, TrendForce says to expect the B300 series to "become the mainstream product for NVIDIA" in the second half of 2025, with the main attraction of the B300 series said to be its adoption of FP4, which is well-suited to inference scenarios.
This change in design is also expected to boost the yield rates of the B300 AI GPUs, though TrendForce notes it may come at a slight cost to performance. The Economic Daily News says that the socketed design will help simplify after-sales service and server board maintenance, as well as optimize the yield of computing board manufacturing.
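TrendForce's note highlights FP4 as the B300's main draw for inference. As a rough illustration of what 4-bit floating point means in practice (our own sketch, assuming the common E2M1 layout with per-block scaling, not NVIDIA's actual implementation), weights get snapped to a tiny set of representable values:

    import numpy as np

    # The 8 non-negative values representable in FP4 E2M1 (1 sign, 2 exponent, 1 mantissa bit).
    FP4_POS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
    FP4_GRID = np.concatenate([-FP4_POS[::-1], FP4_POS])   # full signed grid, 16 entries

    def quantize_fp4(x: np.ndarray, block: int = 32) -> np.ndarray:
        """Scale each block so its largest value maps to 6.0, then snap to the FP4 grid."""
        x = x.reshape(-1, block)
        scale = np.abs(x).max(axis=1, keepdims=True) / 6.0 + 1e-12   # per-block scale factor
        scaled = x / scale                                           # values now lie in [-6, 6]
        idx = np.abs(scaled[..., None] - FP4_GRID).argmin(axis=-1)   # nearest grid point
        return FP4_GRID[idx] * scale                                 # dequantized approximation

    weights = np.random.randn(4, 32).astype(np.float32)
    error = np.abs(weights - quantize_fp4(weights)).mean()
    print(f"mean absolute quantization error: {error:.4f}")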
MSI details exclusive 'AI Boost' feature that overclocks the NPU for a performance boost
MSI has officially rolled out a new feature on its range of X870 and Z890 motherboards designed for AMD's and Intel's latest generation of CPUs, one that enables more performance to be squeezed out of the onboard NPU.
The new motherboards for AMD's Ryzen 9000 series and Intel's Core Ultra (Series 2) come with brand-new chipsets that bring a range of hardware improvements. To accompany them, MSI has overhauled its BIOS interface into what it's calling Click BIOS X, and during a recent tour of MSI's motherboard factory in Shenzhen, China, we were able to spend some time with setups featuring Intel's new Arrow Lake CPUs and the new BIOS interface.
All of MSI's new motherboards come with the new BIOS interface, and one of the built-in features is AI Boost. It's a particularly impressive feature: the setting within the BIOS enables overclocking of the NPU, which MSI claims improves AI performance and efficiency by up to 5%. According to MSI, enabling the feature will provide the user with "faster data processing, enhanced AI performance, improved efficiency in AI tasks, better multi-tasking capabilities, and maximized hardware utilization."
OpenAI gets one of the first engineering builds of NVIDIA's new Blackwell DGX B200 AI system
OpenAI has just received one of the first engineering builds of the NVIDIA DGX B200 AI server, posting a picture of its new delivery on X.
The NVIDIA DGX B200 is a unified AI platform for training, fine-tuning, and inference, built around NVIDIA's new Blackwell B200 AI GPUs. Each DGX B200 system packs 8 x B200 AI GPUs with up to 1.4TB of HBM3 memory and up to 64TB/sec of memory bandwidth, and can pump out 72 petaFLOPS of training performance and 144 petaFLOPS of inference performance.
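Dividing those quoted system totals by the eight GPUs per box gives a rough per-GPU picture (simple arithmetic on the numbers above, not official per-GPU specifications):

    # Rough per-GPU split of the DGX B200 system figures quoted above.
    GPUS_PER_SYSTEM = 8
    system_totals = {
        "HBM3 capacity (TB)":      1.4,
        "memory bandwidth (TB/s)": 64,
        "training petaFLOPS":      72,
        "inference petaFLOPS":     144,
    }
    for metric, total in system_totals.items():
        print(f"{metric}: {total} total -> {total / GPUS_PER_SYSTEM:g} per GPU")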
OpenAI CEO Sam Altman is well aware of the advancements of NVIDIA's new Blackwell GPU architecture, recently saying: "Blackwell offers massive performance leaps, and will accelerate our ability to deliver leading-edge models. We're excited to continue working with NVIDIA to enhance AI compute".
AI just won a Nobel Prize for its ability to predict protein structures
Artificial intelligence systems have become so sophisticated that the research behind them is now being awarded Nobel prizes, and AI has just gained its second such prize, this time for protein structure prediction.
Geoffrey Hinton, a computer scientist whose work on deep learning is foundational to the AI models used today, was awarded the Nobel Prize in Physics alongside Princeton University professor John Hopfield, with both researchers recognized for their contributions to the deep learning technologies that underpin what we now broadly call AI.
Now, AI has done it again, with the Nobel Prize in Chemistry going to Demis Hassabis, cofounder and CEO of Google DeepMind, and John M. Jumper, a director at DeepMind, for the creation of an AI capable of accurately predicting protein structures. Half of the prize is awarded to Hassabis and Jumper, and the other half to David Baker, a professor of biochemistry at the University of Washington, recognized for his work on computational protein design. The winners share a prize pot of around $1 million.
NVIDIA, Foxconn to build Taiwan's fastest supercomputer: with Blackwell GB200 NVL72 AI servers
We knew it was coming, but now it's official: NVIDIA is teaming with Foxconn to build Taiwan's most powerful supercomputer powered by its new Blackwell AI GPU architecture.
NVIDIA and Foxconn announced the new Hon Hai Kaohsiung Super Computing Center at Foxconn's recent Hon Hai Tech Day; it will be built around NVIDIA's groundbreaking new Blackwell GPU architecture. The new AI supercomputer will feature GB200 NVL72 AI servers, with a total of 64 racks and 4608 Tensor Core GPUs.
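The GPU count lines up with the NVL72 naming, since each GB200 NVL72 rack links 72 Blackwell GPUs:

    # Quick check: 64 GB200 NVL72 racks at 72 GPUs per rack.
    racks = 64
    gpus_per_rack = 72            # the "72" in GB200 NVL72
    print(racks * gpus_per_rack)  # 4608, matching the figure quoted above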
Foxconn expects over 90 exaflops of AI performance, which would make the new machine the fastest supercomputer in Taiwan. Once it's operational, Foxconn plans to use it to power breakthroughs in cancer research, large language model development, and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.
AMD should be TSMC's next huge customer for Arizona: HPC AI chips made in the USA in 2025
AMD is reportedly set to make next-gen HPC and AI chips at TSMC's new fab in Arizona, becoming the second major company to have chips made at the facility... the other is Apple.
In a new post, insider Tim Culpan reports that AMD is "lined up to produce high-performance computing chips from TSMC Arizona, making the American fabless chip designer another client for the new US facility", according to his sources.
Culpan explains that production is already in the planning phase, with tape-out and manufacturing of AMD's next-gen HPC chips expected to kick off on TSMC's 5nm process node in 2025. Apple is the first customer of TSMC's new Arizona fab, which will produce some of the A16 processors used in recent iPhones.
Former Google CEO says AI will solve the climate issue, 'we're not organized to do it'
"We're not going to hit the climate goals anyway because we're not organized to do it." That's former Google CEO Eric Schmidt responding to a question about the rise in energy consumption due to the AI boom at SCSP's inaugural AI+Energy Summit.
AI is putting a strain on energy grids everywhere due to the sheer amount of power required to run complex generative AI systems, so it's a very real issue.
Eric Schmidt's response is somewhat cynical but indicative of the debate surrounding how governments, corporations, and people everywhere should be dealing with climate change and its potentially devastating impacts. His response wasn't simply a shoulder shrug, as Schmidt confirmed that energy concerns surrounding AI "will be a problem."