Artificial Intelligence - Page 28
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Meta is developing a search engine so it can decrease its reliance on Google and Microsoft
According to a new report, Meta has been indexing the web for at least eight months. The company aims to have its own search engine that can be called on and integrated into Meta AI. This will give its chatbot and other generative AI tools an alternative to Google Search and Microsoft Bing while decreasing the company's reliance on the services of its competitors.
Meta has been using web crawlers for a while as part of Facebook, 'crawling' the content and information of links shared on Meta's social media platforms. The new project is different in that it would be used primarily for AI search, and as such it would need to crawl and scrape the entire internet for information and training data.
The company has yet to detail its search engine plans formally. Still, it recently announced a multi-year partnership with a news outlet, Reuters, for the Meta AI chatbot to use the source with citations when answering questions.
Google's new 'Project Jarvis' AI will help you with online research, do your shopping, and more
Google will be updating its Gemini AI model with a more powerful version later this year, which is to be expected. However, according to a new report (via The Information and Reuters), the update will include 'Project Jarvis'. It's not the AI butler that serves Tony Stark in the Marvel Cinematic Universe, but it's close.
According to the report, Project Jarvis is a new AI agent set to become part of Google's Chrome web browser. It can browse various websites for you, summarize the content, fill out web forms, and even make purchases. Now, if it can do all that, you'd have to assume that it could solve those "I'm not a robot" tests where you're tasked with finding buses and traffic lights in the internet-age equivalent of Where's Waldo.
Naturally, if it can go online and buy you a new pair of socks or an OLED TV, you can fine-tune its behavior and set restrictions.
NVIDIA CEO Jensen Huang joins King of Denmark to launch sovereign AI supercomputer
NVIDIA CEO Jensen Huang joined the King of Denmark to launch the country's largest sovereign AI supercomputer, which is aimed at breakthroughs in quantum computing, clean energy, biotechnology and other areas serving Danish society and the world.
Denmark's first AI supercomputer has been dubbed "Gefion" after a goddess in Danish mythology. It is an NVIDIA DGX SuperPOD powered by 1,528 NVIDIA H100 Tensor Core GPUs, interconnected using NVIDIA Quantum-2 InfiniBand networking.
Jensen said: "Gefion is going to be a factory of intelligence. And this factory of intelligence is a new industry that never existed before; it sits on top of the IT industry - we're inventing something fundamentally new. Denmark recognizes that to innovate in AI, the most impactful technology of our time, it must foster a domestic AI infrastructure and ecosystem. The Gefion supercomputer will supercharge the scientists of Denmark with local AI computing infrastructure to drive advancements in life sciences, climate research, and quantum computing".
India-based Reliance supercomputer to be powered by NVIDIA's new Blackwell AI GPUs
NVIDIA will be supplying its new Blackwell AI GPUs to Indian companies including Mukesh Ambani's Reliance Industries, to build a new AI supercomputer in India.
Reliance is building a new 1GW (one-gigawatt) data center in the western Indian state of Gujarat, announced at an AI summit recently held in Mumbai, the country's business capital. NVIDIA CEO Jensen Huang and Mukesh Ambani announced the news together, with Jensen saying: "In the future, India is going to be the country that will export AI. You have the fundamental ingredients - AI, data and AI infrastructure, and you have a large population of users".
Jensen added: "India is already world-class in designing chips, India already develops AI. Instead of being an outsourcer and a back office, India will become an exporter of AI".
NVIDIA CEO says AI workers will have 1000x higher productivity than humans in 'specific jobs'
NVIDIA CEO Jensen Huang says that AI will do some jobs with 1000x higher productivity than humans, but AI will never fully replace the humans that perform these jobs.
At NVIDIA's October AI Summit held in Mumbai, India, CEO Jensen Huang said: "As we speak, AI has no possibility of doing what we do. Depending on the jobs we do, it could do 20% of our jobs 1000 times better. For some people, it could do 50% of their job 1000x better. But in no job can they do all of it".
Jensen was asked if AI would take his job -- as the CEO of NVIDIA -- to which he replied: "absolutely not".
OpenAI senior safety staffer leaves company to write unbiased warning about coming AI
OpenAI has suffered yet another blow, as a senior staffer on the company's AGI Readiness team (the team that advises OpenAI on the impact of the powerful AI models it's creating, and how ready the world is for them) has left the company. The departure was promptly followed by a warning published to the former OpenAI staffer's Substack account.
The former OpenAI senior staffer is Miles Brundage, who, as of Friday this week, will no longer be working on OpenAI's AGI Readiness team. For those who don't know, AGI stands for Artificial General Intelligence: an AI model with the same level of cognitive ability as a human across all fields. That level of sophistication has yet to be achieved, but given the potential impact of such a system coming online, or potentially falling into the wrong hands, guardrail teams such as the AGI Readiness team were formed.
Brundage states in his post that OpenAI has "gaps" in its readiness policy, though it isn't alone in this problem, as does every other AI lab. According to Brundage, neither OpenAI nor any other AI company, nor the world at large, is ready for AGI. Additionally, the post revealed that his departure triggered a complete disbanding of the AGI Readiness team, which comes at a time when OpenAI is attempting an internal restructuring into a for-profit business.
Microsoft officially unveils tools to create AI employees that work for humans
Microsoft has taken to its blog to announce the release of a suite of autonomous artificial intelligence agents that will serve businesses as virtual employees.
Redmond states in its blog post that it's announcing new capabilities that will enable customers to create autonomous agents with Copilot Studio, along with ten new autonomous agents in Dynamics 365. Microsoft writes that agents should be thought of as the "new apps for an AI-powered world" and that it believes one day, every organization will have a "constellation of agents" ranging from simple prompt-and-response bots to completely autonomous bots.
Copilot will be how customers interact with these agents, whose capabilities will span sales, supplier communications, customer intent, and customer knowledge management. Microsoft states in its blog post that its AI agents will be able to increase the productivity of a business, and are an example of how artificial intelligence can increase the output a worker generates per hour. As for custom agents, Microsoft chief executive Satya Nadella said that Copilot Studio is built as a "no-code way for you to be able to build agents," which means users won't need any prior programming knowledge to create a custom agent.
AI-generated product reviews deemed officially illegal by US government authority
The Federal Trade Commission (FTC), the US government agency designed to protect American consumers against illegal business practices, has changed its guidelines to target AI-generated content.
Since the rise of AI generation tools such as OpenAI's ChatGPT, Microsoft's Copilot, and image generators like DALL-E, internet users have endured a flood of AI-generated content. That's typically fine when the content is interesting images or videos on social media timelines, but there is a real problem when an AI-created review is posted for a product or service. The FTC's new guidelines are designed to protect consumers against businesses intentionally trying to mislead them.
An example of this would be a business that uses AI tools to create fake reviews or testimonials for its products. These AI-generated reviews would then be published across the product's listing, misleading potential buyers into thinking the product is more reliable than it actually is. More specifically, the new FTC guidelines list fake reviews attributed to people who don't exist, or to someone who overstates their level of experience with the product, as particularly egregious offences.
HBM chip market to grow 156% year-on-year to $46.7B in 2025: up from $18.2B in 2024
The HBM memory business is expected to continue to explode throughout 2025, with experts predicting the HBM market will grow 156% in 2025 to reach $46.7 billion... a market worth just $18.2 billion in 2024.
At the TrendForce Roadshow Korea held in Seoul last week, Avril Wu, senior vice president of research operations at TrendForce, said she expects the global HBM memory chip business to expand by 156% next year to $46.7 billion, up from $18.2 billion this year. HBM's share of the overall DRAM market is expected to rise to 34% in 2025, up from 20% in 2024.
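Those projections are easy to sanity-check with a quick calculation (a sketch: the dollar figures are TrendForce's, and the variable names are my own):

```python
# Sanity-check TrendForce's projected HBM market growth year-on-year.
hbm_2024 = 18.2  # HBM market size in 2024, in $ billions (TrendForce)
hbm_2025 = 46.7  # projected HBM market size in 2025, in $ billions (TrendForce)

growth_pct = (hbm_2025 / hbm_2024 - 1) * 100
print(f"Year-on-year growth: {growth_pct:.0f}%")  # prints "Year-on-year growth: 157%"
```

The exact ratio works out to roughly 157%, consistent with the 156% headline figure once rounding of the source numbers is accounted for.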
NVIDIA's new Blackwell AI chips are the driving force behind ultra-fast HBM memory, with TrendForce noting that major AI solution providers will witness a "significant shift" in HBM specification requirements towards HBM3E, with the boost to HBM3E 12-Hi stack products, and will increase the HBM capacity per chip.
NVIDIA CEO: 'we had a design flaw in Blackwell, it was 100% NVIDIA's fault' not TSMC's fault
NVIDIA CEO Jensen Huang has addressed the issues surrounding its latest Blackwell AI chip, admitting that there was a design flaw that was "100% NVIDIA's fault" and that TSMC helped them through the tough Blackwell AI GPU launch.
NVIDIA initially unveiled its new Blackwell chips at GTC 2024 in March, with shipments expected in Q2 2024. They were delayed (as you can see in the stories below), which could've affected big-paying customers like Meta, Microsoft, and Google.
Huang said: "We had a design flaw in Blackwell. It was functional, but the design flaw caused the yield to be low. It was 100% NVIDIA's fault. In order to make a Blackwell computer work, seven different types of chips were designed from scratch and had to be ramped into production at the same time".
Salesforce CEO says 'Copilot is a flop' and that Microsoft is in 'panic mode' over AI failing
Well, that's one way to put it: Copilot is "a flop," and Microsoft rebranding Copilot as "agents" is the company in "panic mode," says the CEO of Salesforce.
Salesforce CEO Marc Benioff posted some fightin' words on X just now, where he said: "Microsoft rebranding Copilot as 'agents'? That's panic mode. Let's be real - Copilot's a flop because Microsoft lacks the data, metadata, and enterprise security models to create real corporate intelligence. That is why Copilot is inaccurate, spills corporate data, and forces customers to build their own LLMs. Clippy 2.0, anyone? Meanwhile, Agentforce is transforming businesses now".
I mean, he's not wrong. I've had a bunch of Copilot+ laptops come through my lab in the last couple of months, and I've been brutally honest: NPUs are virtually useless. They're a waste of silicon when we could have more Performance cores, more 3D V-Cache, more cache in general, just something... useful.
NVIDIA renames Blackwell Ultra to B300 series: HBM3E 12-Hi memory, TSMC CoWoS-L packaging
NVIDIA has reportedly rebranded all of its upcoming Blackwell Ultra products to the B300 series, with the beefed-up B300 and GB300 chips to also reportedly use TSMC's new CoWoS-L advanced packaging.
In a new report from TrendForce, we're learning that the B200 Ultra has been renamed to the B300, while the GB200 Ultra has been renamed to the GB300. On top of that, the B200A Ultra and GB200A Ultra will be called the B300A and GB300A, respectively.
NVIDIA is expected to launch its now-rebranded B300 and GB300 chips in Q2 2025 to Q3 2025, while the B200 and GB200 are shipping in small quantities now, with more throughout Q4 2024 before things really kick off in Q1 2025.
SK hynix shows off HBM3E 12-Hi chips: 12 chips stacked, 40% thinner than 8-Hi stacks for 2025
SK hynix showed off its new HBM3E 12-Hi memory alongside NVIDIA products at the OCP Global Summit last week, teasing a range of AI memory semiconductor tech and products as it aims to lead the future semiconductor market.
During the event, SK hynix showed off some of its AI memory products, including its new HBM3E 12-Hi stack memory which it started mass-producing in September, marking a significant milestone in the evolution of High Bandwidth Memory.
SK hynix's new HBM3E 12-Hi stack was shown off with NVIDIA's new H200 AI GPU and GB200 Grace Blackwell Superchip, with a huge 36GB capacity achieved by making the DRAM chips 40% thinner. This innovation has paved the way for a 12-stack HBM3E memory configuration at the same thickness as the previous HBM3E 8-Hi stack.
Morgan Stanley says NVIDIA 2U air-cooled MGX GB200 NVL2 still suffers from 'thermal issues'
NVIDIA's new 2U air-cooled MGX GB200 NVL2 server continues to suffer from "thermal issues" says Morgan Stanley.
In a new investment note, Morgan Stanley provides an update on the AI scene, noting that NVIDIA's MGX GB200 NVL2, with its 2 x Grace CPUs and 2 x Blackwell B200 AI GPUs on the same PCB, is suffering from "thermal issues".
The full note explained: "NVIDIA MGX GB200 NVL2 houses 2x Grace and 2x B200 Blackwell GPUs on the same PCB board, with the GPU module connecting to the main PCB board using an SXM7 module. All of the servers showcased at OCP were based on a 2U air-cooled form factor. However, our conversations with supply chain partners indicated to us that there are still some thermal issues with the 2U form factor, so this may potentially end up being in a 4U form factor instead".
Microsoft's demand for NVIDIA GB200 AI servers is more than all other cloud companies combined
Microsoft's demand for NVIDIA's new GB200 AI servers "exceeds total orders from other CSPs" (cloud service providers), with the Windows giant's Q4 2024 orders "significantly" increased by 3-4x, says insider Ming-Chi Kuo.
In a new post on Medium, the insider points out that Microsoft's GB200 key component suppliers will begin mass production and shipments starting in Q4 2024, contributing to the supply chain's performance earlier than competing CSPs.
Kuo expects around 150,000 to 200,000 Blackwell AI GPU shipments in Q4 2024, with "significant growth projected" of 200-250% quarter-over-quarter, to 500,000 to 550,000 units in Q1 2025.
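Kuo's quarter-over-quarter percentage can be cross-checked against his shipment ranges with some quick arithmetic (a sketch using the midpoints of the quoted ranges; the variable names are my own):

```python
# Cross-check Ming-Chi Kuo's projected Blackwell shipment growth quarter-over-quarter.
q4_2024_units = (150_000 + 200_000) / 2  # midpoint of the Q4 2024 estimate
q1_2025_units = (500_000 + 550_000) / 2  # midpoint of the Q1 2025 estimate

growth_pct = (q1_2025_units / q4_2024_units - 1) * 100
print(f"QoQ growth at the midpoints: {growth_pct:.0f}%")  # prints "QoQ growth at the midpoints: 200%"
```

That lands at the low end of Kuo's quoted 200-250% range; comparing the low Q4 estimate against the high Q1 estimate pushes the figure higher.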
SuperMicro unveils NVIDIA GB200 NVL72 SuperCluster: liquid-cooled AI servers
SuperMicro has revealed its new NVIDIA GB200 NVL72 SuperCluster, a liquid-cooled Exascale Compute system in a single rack, ready to go. Check it out:
SuperMicro details its end-to-end AI data center solutions: "In the era of AI, a unit of compute is no longer measured by just the number of servers. Interconnected GPUs, CPUs, memory, storage, and these resources across multiple nodes in racks construct today's artificial Intelligence".
The company continues: "The infrastructure requires high-speed and low-latency network fabrics, and carefully designed cooling technologies and power delivery to sustain optimal performance and efficiency for each data center environment. Supermicro's SuperCluster solution provides end-to-end AI data center solutions for rapidly evolving Generative AI and Large Language Models (LLMs)".
AI in real-life usage: Can't win an argument with your partner? Get ChatGPT to do it for you
Fallen out with your partner? That's nothing new, all couples have disagreements - or even more full-on arguments at times - but one person's solution, namely turning to AI, has gone viral for reasons that, well, you'll see.
This comes to us courtesy of a post on Reddit by 'Drawss4scoress' on r/AmITheA**hole (or AITAH) where as you can guess, people ask whether they might be, shall we say - in the wrong.
To sum up the gist of this scenario, Drawss4scoress has been dating their girlfriend for eight months, and every time they argue, to quote the Redditor:
Dell PowerEdge XE9712: NVIDIA GB200 NVL72-based AI GPU cluster for LLM training, inference
Dell has just unleashed its new PowerEdge XE9712 with NVIDIA GB200 NVL72 AI servers, with 30x faster real-time LLM performance over the H100 AI GPU.
Dell Technologies' new AI Factory with NVIDIA sees the GB200 NVL72 AI server cabinet delivering 30x faster real-time LLM performance, with lightning-fast connectivity between 72 x B200 AI GPUs connected and acting as one through NVLink technology. Dell points out that the liquid-cooled system maximizes your data center power utilization, while rapid deployment will get your AI cluster running at scale with a "white glove experience".
Dell claims 25x more efficiency than the Hopper H100, the highest performance delta for LLM training at 8K+ GPU clusters, and 30x faster real-time trillion-parameter LLM inference compared to the H100 AI GPU.
Google shares photos of liquid-cooled NVIDIA Blackwell GB200 NVL racks for AI cloud platform
Google has teased some photos of using NVIDIA's new Blackwell GB200 NVL AI server racks for its AI cloud platform, using liquid-cooled GB200 AI GPUs. Check it out, because it's utterly gorgeous:
The official Google Cloud account shared the photo on X, with the US-based search giant showing off its first GB200 NVL-based server, deployed to power its AI cloud platform. The racks feature liquid-cooled, high-performance GB200 chips: each GB200 Grace Blackwell Superchip features 1 x Grace CPU and 2 x B200 AI GPUs, for up to 90 TFLOPS of FP64 compute performance.
Google is using custom GB200 NVL racks here, so we don't know what the configuration is exactly -- the full GB200 NVL72 packs 36 x Grace CPUs and 72 x B200 AI GPUs in a 72-GPU NVLink domain.
Parents of student that used AI to cheat sue school, claiming AI tools aren't bannable
It was only a matter of time before lawsuits began flying in the academic space for the use of artificial intelligence-powered tools, but who thought they would come from the parents of a child who was busted using the AI tools?
That's right, the parents of a child who was busted using AI tools to help complete a history project are now suing the school after it disciplined the student. The parents claim that the disciplinary measures issued by the school have harmed the student's chances of getting into prestigious universities such as Stanford. According to Dale and Jennifer Harris, the parents of the student, their son's punishment of a Saturday detention and a grade of 65 out of 100 on the history project has impacted their son's future and his "exemplary record".
Hingham Public Schools in Massachusetts claimed the use of AI tools is prohibited under the student handbook, which bans "unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one's own work." The district dealing with the case stated in a recent motion to dismiss that the Harris' son received a "relatively lenient" punishment and that siding with the parents would only "invite dissatisfied parents and students to challenge day-to-day discipline, even grading of students, in state and federal courts."