Artificial Intelligence - Page 44

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 44.

NVIDIA's full-spec Blackwell B200 AI GPU uses 1200W of power, up from 700W on Hopper H100

Anthony Garreffa | Mar 24, 2024 7:38 PM CDT

NVIDIA revealed its next-generation Blackwell B200 AI GPU at its recent GTC 2024 (GPU Technology Conference) event but left out some details that we're now discovering... like the new AI GPU consuming up to a whopping 1200W of power.

The new information on the Blackwell AI GPUs comes directly from NVIDIA SVP and GPU Architect Jonah Alben, along with Ian Buck, VP of Hyperscale and HPC at NVIDIA. Alben pointed out that NVIDIA's new Blackwell GPU uses a completely different microarchitecture from Hopper, featuring 2nd Generation Transformer Engine technology that adds FP4 and FP6 compute formats. Combined with new software optimizations from NVIDIA, this is what the company says makes Blackwell the fastest AI chip on the planet.

The B200 delivers a 32% increase in FP64 compute performance over the Hopper H100, but with Blackwell being an AI GPU first and foremost, FP64 performance isn't as important from an AI workload standpoint: the lower the precision, the faster AI inferencing and training become.
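
To illustrate why dropping precision speeds things up, here is a minimal Python sketch of symmetric 4-bit integer quantization, showing how moving from 32-bit to 4-bit values cuts memory and bandwidth by roughly 8x at the cost of some rounding error. This is only an illustration of the general trade-off, not NVIDIA's hardware-level FP4/FP6 Transformer Engine implementation.

```python
import numpy as np

# Hypothetical illustration: symmetric 4-bit integer quantization of a weight
# tensor. This only shows the memory/precision trade-off behind low-bit
# formats; it is NOT NVIDIA's FP4/FP6 Transformer Engine implementation.
weights = np.random.randn(1024, 1024).astype(np.float32)

scale = np.abs(weights).max() / 7.0             # signed 4-bit ints span -8..7
q = np.clip(np.round(weights / scale), -8, 7)   # quantize to 16 levels
dequantized = q * scale                         # reconstruct approximate weights

print("FP32 size:          ", weights.nbytes // 1024, "KiB")       # 4096 KiB
print("4-bit size (packed):", weights.size // 2 // 1024, "KiB")    # 512 KiB, two values per byte
print("mean abs error:     ", float(np.abs(weights - dequantized).mean()))
```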

Continue reading: NVIDIA's full-spec Blackwell B200 AI GPU uses 1200W of power, up from 700W on Hopper H100 (full post)

Google confirms AI can predict the most common natural disaster 7 days before it happens

Jak Connor | Mar 22, 2024 1:35 AM CDT

Google has announced that its AI system is capable of predicting the most common natural disaster up to seven days before it happens.

The new research has been published in the scientific journal Nature and details a new machine-learning model trained on historical event data, river level readings, elevation and terrain data, and any other information relevant to arriving at a prediction. Once trained, the model was then tested by running "hundreds of thousands" of simulations of flooding events in each location. The result of the training and simulations, according to Google, is a model capable of predicting riverine floods up to seven days in advance.

Google states that the use of this AI-powered model will help address the riverine flooding problem on a global scale. Notably, the model was able to successfully predict a flood seven days in advance in some cases, but the average lead time was five days. Furthermore, Google says the new technology extends the "reliability of currently-available global nowcasts from zero to five days."
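
As a rough illustration of the general approach described above, the sketch below fits a toy predictor to a synthetic river-level series to forecast the level five days ahead. It is a stand-in for the idea only; Google's actual model, data, and architecture are those described in the Nature paper, and every value here (window size, elevation feature, the synthetic series itself) is made up for the example.

```python
import numpy as np

# Toy sketch: map recent river-gauge readings (plus a static catchment
# feature) to the water level several days ahead. Not Google's model.
rng = np.random.default_rng(0)

days, lead = 3650, 5                                   # ten years of daily readings, 5-day lead
levels = np.cumsum(rng.normal(0, 0.1, days)) + 10.0    # synthetic river-level series (metres)
elevation = 42.0                                       # static catchment feature (made up)

window = 14                                            # use the last 14 days as input features
X, y = [], []
for t in range(window, days - lead):
    X.append(np.concatenate([levels[t - window:t], [elevation]]))
    y.append(levels[t + lead])                         # target: level 5 days later
X, y = np.array(X), np.array(y)

# Fit a least-squares linear predictor and check in-sample error.
A = np.c_[X, np.ones(len(X))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print("mean abs error (m):", float(np.abs(pred - y).mean()))
```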

Continue reading: Google confirms AI can predict the most common natural disaster 7 days before it happens (full post)

Scientists busted publishing AI-generated papers in academic journals

Jak Connor | Mar 22, 2024 12:48 AM CDT

A new report from 404 Media has highlighted several instances of scientific journals publishing papers that were seemingly generated using artificial intelligence-powered tools such as ChatGPT.

The report states that AI-generated papers are being published in academic journals, raising questions about the impact of AI-powered tools on academia as a whole. The report points to Google Scholar, the scholarly literature search engine: searching it for phrases such as "As of my last knowledge update" and "I don't have access to real-time data," both commonly found in AI chatbot responses, returns more than 100 listed studies.
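
For anyone wanting to reproduce that check, here is a minimal sketch that builds exact-phrase Google Scholar query URLs for the two telltale phrases quoted above (the standard scholar.google.com query string is assumed; results will vary as the index changes):

```python
from urllib.parse import quote_plus

# Telltale chatbot phrases quoted in the 404 Media report.
phrases = [
    "As of my last knowledge update",
    "I don't have access to real-time data",
]

# Build exact-phrase Google Scholar search URLs (quotes force phrase matching).
for phrase in phrases:
    query = quote_plus(f'"{phrase}"')
    print(f"https://scholar.google.com/scholar?q={query}")
```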

It's unclear whether these papers were entirely generated by AI or whether AI was only used to assist in their creation. However, 404 Media reports that at least one flagrant example was submitted to a respected chemistry journal, Surfaces and Interfaces, and was published after peer review without even removing the AI chatbot's introductory response.

Continue reading: Scientists busted publishing AI-generated papers in academic journals (full post)

Micron's entire HBM supply sold out for 2024, and a majority of 2025 supply already allocated

Anthony Garreffa | Mar 21, 2024 7:02 PM CDT

Micron has announced that its HBM3E memory supply for 2024 is sold out, and that most of its 2025 supply has already been allocated.

Micron's latest HBM3E memory is expected to be inside NVIDIA's beefed-up H200 AI GPU, with the US company competing against South Korean HBM rivals Samsung and SK hynix. Micron CEO Sanjay Mehrotra discussed HBM supply in a recent earnings call, which is where the new information comes from.

Sanjay Mehrotra, chief executive of Micron, said: "Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated. We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025. We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins starting in the fiscal third quarter".

Continue reading: Micron's entire HBM supply sold out for 2024, and a majority of 2025 supply already allocated (full post)

Samsung AGI Computing Labs in US and South Korea to build completely new semiconductor for AGI

Anthony Garreffa | Mar 20, 2024 9:33 PM CDT

Samsung has just announced it has started development of next-generation semiconductors dedicated to artificial general intelligence (AGI) through its AGI Computing Labs in the USA and South Korea.

Kyung Kye-hyun, president of Samsung Electronics' Device Solutions Division, said on his own social media on March 19: "I am pleased to announce the establishment of Samsung Semiconductor's AGI Computing Labs in both the United States and Korea".

Dr. Woo Dong-gyuk, a former developer of Google's Tensor Processing Unit (TPU) and one of the three engineers who designed the TPU platform for the search giant, is now running the AGI Computing Lab and recruiting more staff to support Samsung Semiconductor's push into completely new semiconductor technology for the future of AGI. He said: "We will create a completely new type of semiconductor specifically designed to meet the astonishing processing requirements of future AGI".

Continue reading: Samsung AGI Computing Labs in US and South Korea to build completely new semiconductor for AGI (full post)

NVIDIA is qualifying Samsung's new HBM3E chips, will use them for future B200 AI GPUs

Anthony Garreffa | Mar 20, 2024 9:11 PM CDT

NVIDIA CEO Jensen Huang told the press during a media briefing at GTC 2024 that "HBM memory is very complicated and the value added is very high. We are spending a lot of money on HBM". Jensen added: "Samsung is very good, a very good company".

SK hynix currently supplies most of the advanced HBM3 and HBM3E memory for NVIDIA and its growing arsenal of AI GPUs, with the Hopper H100, H200, and new Blackwell B100 and B200 all using HBM memory. Jensen continued: "The upgrade cycle for Samsung and SK Hynix is incredible. As soon as NVIDIA starts growing, they grow with us. I value our partnership with SK Hynix and Samsung very incredibly".

The news, coming directly from NVIDIA's CEO, that the company will be using HBM memory supplied by Samsung saw the South Korean company's shares jump 5.6% on Wednesday.

Continue reading: NVIDIA is qualifying Samsung's new HBM3E chips, will use them for future B200 AI GPUs (full post)

Meta orders NVIDIA's next-gen Blackwell B200 AI GPUs, shipments expected later this year

Anthony Garreffa | Mar 20, 2024 8:37 PM CDT

Meta has purchased NVIDIA's new Blackwell B200 AI GPUs to train its Llama models, according to Meta CEO Mark Zuckerberg. The company is also training a third generation of its Llama model on two GPU clusters it announced last week, each packing around 24,000 of NVIDIA's current-gen Hopper H100 AI GPUs.

The news comes from a new Reuters report, which said that Meta will continue using its current H100-powered AI GPU clusters to train its current-gen Llama 3 model, but will use NVIDIA's new Blackwell B200 AI GPUs to train future generations of the model, according to a Meta spokesperson.

NVIDIA announced its new Blackwell B200 AI GPU at its GPU Technology Conference (GTC) event this week, offering gigantic improvements to all things AI.

Continue reading: Meta orders NVIDIA's next-gen Blackwell B200 AI GPUs, shipments expected later this year (full post)

NVIDIA adds generative AI into cuLitho: game-changing 60x speed up for chipmakers like TSMC

Anthony Garreffa | Mar 19, 2024 10:08 PM CDT

NVIDIA announced its new Blackwell GPU at GTC 2024 this week, and also revealed that TSMC and Synopsys are using its cuLitho software in production to boost computational lithography.

This is an important step that helps chipmakers work around the limitations they face as we move to 2nm and smaller transistors, produced with next-level machines like ASML's new High-NA EUV lithography systems. The computational power required to design and manufacture these chips continues to skyrocket, which is where NVIDIA and cuLitho step in.

NVIDIA used the example of a cuLitho-powered system packing 300 H100 AI GPUs, which delivered a gigantic 60x performance increase over a workload that would normally require 40,000 CPU systems and a mind-boggling 30 million or more hours of compute time.
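
Taking those quoted figures at face value, a quick back-of-the-envelope division (our own arithmetic for scale, not NVIDIA's published methodology) shows what they imply:

```python
# Back-of-the-envelope division of the figures quoted above (illustrative
# only; not NVIDIA's published methodology).
cpu_systems = 40_000
total_cpu_hours = 30_000_000        # "30 million or more hours of compute time"
gpu_count = 300                     # H100 GPUs in the cuLitho example quoted

hours_per_cpu_system = total_cpu_hours / cpu_systems   # if work were spread evenly
consolidation_ratio = cpu_systems / gpu_count          # CPU systems per H100 GPU

print(f"~{hours_per_cpu_system:.0f} compute hours per CPU system")   # ~750 h
print(f"~{consolidation_ratio:.0f} CPU systems per H100 GPU")        # ~133x consolidation
```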

Continue reading: NVIDIA adds generative AI into cuLitho: game-changing 60x speed up for chipmakers like TSMC (full post)

Sam Altman responds to question asking if he's afraid of AGI taking over

Jak Connor | Mar 19, 2024 9:10 AM CDT

Sam Altman, the CEO of OpenAI, the company behind many popular and powerful AI-powered tools such as ChatGPT, Sora, and the underlying GPT technology, was asked during an interview if he was worried about losing control of AGI once it's created.

Altman sat down for an interview with Lex Fridman on the "Lex Fridman Podcast," where he was asked if he worries about "losing control of the AGI itself," as many people, including numerous security researchers, are concerned about the creation of a super-intelligent AI system and the existential risk it could pose to the planet. Fridman prefaced the question by saying that "losing control" would not be because of "state actors, not because of security concerns, but because of the AI itself".

The OpenAI CEO responded promptly by saying, "That is not my top worry as I currently see things," adding, "there have been times I worried about that more. There may be times again in the future where that's my top worry. That's not my top worry right now." Fridman followed up by asking, "What's your intuition about that not being your top worry," adding, "do you think we could be surprised?"

Continue reading: Sam Altman responds to question asking if he's afraid of AGI taking over (full post)

OpenAI CEO Sam Altman was asked if he trusts himself with the power of AGI

Jak Connor | Mar 19, 2024 8:15 AM CDT

Sam Altman, the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and many other industry-leading AI technologies, has sat down for an interview with Lex Fridman to discuss multiple topics regarding artificial intelligence and the impressive creations being made at OpenAI.

Lex Fridman asked Altman if he trusts himself with the power of leading a company that could potentially create the first Artificial General Intelligence (AGI), an AI system capable of human-level intelligence and beyond. Altman responded candidly, saying he believes it's important that "I nor any other one person have total control over OpenAI, or over AGI, and I think you want a robust governance system."

The OpenAI CEO further explained that he "continues to not want super voting control over OpenAI," adding, "I continue to think that no company should be making these decisions and that we really need governments to put rules of the road in place."

Continue reading: OpenAI CEO Sam Altman was asked if he trusts himself with the power of AGI (full post)
