Artificial Intelligence - Page 8
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 8.
AI in real-life usage: Can't win an argument with your partner? Get ChatGPT to do it for you
Fallen out with your partner? That's nothing new: all couples have disagreements, or even full-blown arguments at times, but one person's solution, namely turning to AI, has gone viral for reasons that, well, you'll see.
This comes to us courtesy of a post on Reddit by 'Drawss4scoress' on r/AmITheA**hole (or AITAH) where, as you can guess, people ask whether they might be, shall we say, in the wrong.
To sum up the gist of this scenario, Drawss4scoress has been dating their girlfriend for eight months, and every time they argue, to quote the Redditor:
Dell PowerEdge XE9712: NVIDIA GB200 NVL72-based AI GPU cluster for LLM training, inference
Dell has just unleashed its new PowerEdge XE9712 with NVIDIA GB200 NVL72 AI servers, with 30x faster real-time LLM performance over the H100 AI GPU.
Dell Technologies' new AI Factory with NVIDIA sees the GB200 NVL72 AI server cabinet deliver 30x faster real-time LLM performance and lightning-fast connectivity, with 72 x B200 AI GPUs connected and acting as one through NVLink technology. Dell points out that the liquid-cooled system maximizes your datacenter power utilization, while rapid deployment will see your AI cluster running at scale, with a "white glove experience", adds Dell.
Dell claims 25x more efficiency than the Hopper H100, the highest performance delta for LLM training at 8K+ GPU clusters, and 30x faster real-time trillion-parameter LLM inference compared to the H100 AI GPU.
Google shares photos of liquid-cooled NVIDIA Blackwell GB200 NVL racks for AI cloud platform
Google has shared some photos of NVIDIA's new Blackwell GB200 NVL AI server racks being deployed for its AI cloud platform, using liquid-cooled GB200 AI GPUs. Check it out, because it's utterly gorgeous:
The official Google Cloud account shared the photo on X, with the US-based search giant showing off its first GB200 NVL-based server, deployed to power its AI cloud platform. The racks feature liquid-cooled GB200 high-performance AI chips: each GB200 Superchip features 1 x Grace CPU and 2 x B200 AI GPUs for up to 90 TFLOPs of FP64 compute performance.
Google is using custom GB200 NVL racks here, so we don't know what the configuration is exactly -- as the GB200 NVL72 packs 36 x Grace CPUs and 72 x B200 AI GPUs through a 72-GPU NVLink domain.
Parents of student who used AI to cheat sue school, claiming AI tools aren't bannable
It was only a matter of time before lawsuits began flying in the academic space for the use of artificial intelligence-powered tools, but who thought they would come from the parents of a child who was busted using the AI tools?
That's right, the parents of a child who was busted using AI tools to help them complete their history project are now suing the school after it disciplined the student. The parents claim that the disciplinary measures issued by the school have harmed the student's chances of getting into prestigious universities such as Stanford. According to Dale and Jennifer Harris, the parents of the student, their son's punishment of a Saturday detention and a grade of 65 out of 100 on the history project has impacted their son's future and his "exemplary record".
Hingham Public Schools in Massachusetts claimed the use of AI tools is prohibited under the student handbook, which bans the "unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one's own work." The district dealing with the case stated in a recent motion to dismiss that the Harris' son received a "relatively lenient" punishment and that siding with the parents would only "invite dissatisfied parents and students to challenge day-to-day discipline, even grading of students, in state and federal courts."
NVIDIA CEO Jensen Huang calls Tesla and SpaceX boss Elon Musk 'superhuman'
NVIDIA CEO Jensen Huang has called Tesla and SpaceX boss Elon Musk "superhuman" after Musk's xAI team set up its new NVIDIA AI GPU-powered supercomputer in just 19 days... a process that normally takes 4 years.
Elon Musk's new xAI supercomputer is codenamed Colossus, and was built using a cluster of 100,000 x NVIDIA H100 AI GPUs. During an interview on the Bg2 Pod, NVIDIA CEO Jensen Huang said that what Elon and xAI have done is nothing short of extraordinary.
Jensen said: "As far as I know, there's only one person in the world who could do that; Elon is singular in his understanding of engineering and construction and large systems and marshalling resources; it's just unbelievable".
NVIDIA CEO Jensen Huang talks about Elon Musk building world's largest supercomputer
NVIDIA CEO Jensen Huang has sat down for a long-form conversation in which he discussed NVIDIA's dominance of the AI market and how AI will continue to be adopted into our daily lives.
The conversation begins with Huang explaining that AI models are going to become more sophisticated and will eventually evolve into a personal assistant that everyone will have access to in their pocket. Huang doesn't give a timeframe for when that will happen, but does say it will arrive in some form or another "soon".
Given the context of the conversation, it can be assumed that the level of sophistication of this AI would be far superior to anything currently available that claims to be an AI personal assistant. An example would be the coming Siri overhaul with Apple Intelligence.
NVIDIA CEO Jensen Huang to boost company headcount to 50K, plus 100 million AI assistants
NVIDIA CEO Jensen Huang has plans to increase NVIDIA headcount from 32,000 to 50,000 staffers, with 100 million AI assistants to "increase the company's overall output".
CNBC's Power Lunch picked up on a recent podcast with Jensen, where the media outlet reported that Jensen doesn't think AI will eliminate jobs, and that the company is embracing AI (obviously, as the AI leader) with 18,000+ more staffers and 100 million AI assistants.
What will the AI assistants do? They'll help run the new AI models and launch AI applications, which are being built by and around NVIDIA. NVIDIA CEO Jensen Huang also praised Elon Musk for the speed at which he's building a supercomputer in 19 days, which he says "typically takes about 3 years to build".
Analyst says NVIDIA Blackwell GPU production volume will hit 750K to 800K units by Q1 2025
NVIDIA's ramp into Blackwell appears to be "quite strong", with issues with initial Blackwell silicon "totally behind us", says analyst firm Morgan Stanley.
Morgan Stanley analysts posted a note recently, upbeat on Blackwell's potential impact on NVIDIA's top line heading into the final months of 2024. The firm explains: "According to our checks of the GPU-testing supply chain, Blackwell chip output should be around 250,000-300,000 in [the fourth quarter], contributing $5 billion to $10 billion in revenue, which is still tracking [Morgan Stanley lead analyst] Joe Moore's bullish forecast".
The investment firm said that Blackwell chip volume could reach 750,000 to 800,000 units by Q1 2025 -- a huge 3x increase from Q4 2024. The firm also expects Hopper volume (including H200 and H20) to be around 1.5M units in Q4 2024, gradually ramping down to 1M units in Q1 2025. The firm added that with B200 chip prices around 60-70% higher than the H200, Blackwell revenue should surpass Hopper by Q1 2025.
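The analyst's conclusion follows from simple arithmetic on the quoted figures. A quick sanity check, using the midpoints of Morgan Stanley's unit estimates and treating one B200 as worth roughly 1.6-1.7 H200s in revenue terms (all figures are the analyst estimates above; the "H200-equivalent" framing is our simplification, not the firm's):

```python
# Sanity check of the Q1 2025 revenue crossover implied by the estimates above.
blackwell_q1_units = (750_000 + 800_000) / 2   # midpoint of Blackwell Q1 2025 volume
hopper_q1_units = 1_000_000                    # Hopper (H200/H20) ramping down to 1M
price_premium = (1.60 + 1.70) / 2              # B200 priced ~60-70% above H200

# Express both in "H200-equivalent" revenue units.
blackwell_rev = blackwell_q1_units * price_premium
hopper_rev = hopper_q1_units * 1.0

print(f"Blackwell: {blackwell_rev:,.0f} H200-equivalents")
print(f"Hopper:    {hopper_rev:,.0f} H200-equivalents")
print("Blackwell surpasses Hopper:", blackwell_rev > hopper_rev)
```

On these midpoints, Blackwell lands around 1.28M H200-equivalents against Hopper's 1M, which is consistent with the firm's call that Blackwell revenue overtakes Hopper in Q1 2025.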
Phison president promises AI training and AI tuning with a $50K workstation system
Phison says that $1 million to $1.5 million AI workstations are a thing of the past, promising a new $50,000 workstation that's perfect for AI training and AI tuning.
In a recent chat with CRN, Phison General Manager and President Michael Wu explained: "We've changed that $1 million or $1.5 million investment, the minimum requirement to have a fine-tuning machine to create ChatGPT, to $50,000. You no longer need three DGX GPUs anymore. You can do it with a single workstation with four workstation GPUs and with two of our aiDAPTIV+ SSDs that are treated as virtual memory for the GPU".
He continued: "Furthermore, when we demonstrated a 70-billion parameter machine at NVIDIA's GTC, people didn't know how to use it. Nobody has an AI engineer. So we created a software tool called aiDAPTIV+ Pro Suite that lets you go from putting a PDF of your proprietary document to the system to fine tune the 70-billion parameter model to building a chatbot like ChatGPT. We are taking advantage of all the big investment that Meta has made on the open source Llama 3 to create a custom AI for you".
NVIDIA's next-gen Blackwell Ultra 'B300' AI GPU for GB300 AI servers: socketed design rumored
NVIDIA is rumored to move towards a socketed design for its next-gen GB300 AI servers, based on the upcoming Blackwell Ultra AI chips coming in 2025.
In a new report, TrendForce says to expect the B300 series to "become the mainstream product for NVIDIA" in the second half of 2025, with the main attraction of the B300 series said to be its adoption of FP4, which is well-suited to inference scenarios.
This change in design is also expected to boost the yield rates of the B300 AI GPUs, though TrendForce notes that it "might probably reduce performance". The Economic Daily News says that using the socketed design will help simplify after-sales service and server board maintenance, as well as optimize the yield of computing board manufacturing.