Artificial Intelligence - Page 42
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 42.
Bill Gates is now worried about AI taking his job
The age of artificial intelligence-powered tools is upon us, and with the impressive capabilities of these tools, many are worried about job security - and they may be right to worry.
With tools such as OpenAI's ChatGPT and Microsoft's Copilot now widespread, many people around the world are worried that customized AI-powered applications will make their positions within companies obsolete, as companies could simply adopt an AI-powered tool to perform the work instead. In many cases, an AI-powered tool would perform the job more efficiently, without complaints or days off, and would cost far less than hiring a human.
For all these reasons and more, people are concerned about the societal impact of AI tools, and it's not just everyday workers: Microsoft co-founder Bill Gates is also worried about being made obsolete. Gates spoke to OpenAI CEO Sam Altman during an episode of the Unconfuse Me with Bill Gates podcast, in which he said that he was initially skeptical about AI and didn't believe it would advance as quickly as it has. More specifically, Gates said he didn't "expect ChatGPT to get so good".
Continue reading: Bill Gates is now worried about AI taking his job (full post)
SK hynix has 'high hopes' for its advanced packaging plant in the US
SK hynix wants utter world domination in the HBM and advanced packaging markets, with its new Indiana-based plant gearing up for all-systems-go on US soil in the coming years.
In an interview posted on SK hynix's own blog last week, the vice president in charge of SK hynix's package and test division, Choi Woo-jin, said: "Package and test (P&T) technology is turning into a crucial factor in the battle for semiconductor leadership".
Choi is a packaging expert who has conducted and led research and development in memory chip packaging over the last 30 years. The P&T division he runs at SK hynix handles the back-end process, where wafers are packaged into finished products and tested to ensure they meet customers' stringent demands.
Continue reading: SK hynix has 'high hopes' for its advanced packaging plant in the US (full post)
Jim Keller laughs at $10B R&D cost for NVIDIA Blackwell, should've used Ethernet for $1B
NVIDIA spent a sizeable $10 billion on R&D for its next-generation Blackwell GPU architecture, but chip legend Jim Keller said he could've done the same job for just $1 billion.
When I first saw Keller's tweet, it looked familiar... he had taken an image of the story I wrote about NVIDIA spending $10 billion on R&D for Blackwell. Great to see, but how would Keller do the same job for one-tenth of the cost, just $1 billion?
First, some context: NVIDIA's new B200 AI GPU packs a whopping 208 billion transistors, built from two Blackwell B100 dies of 104 billion transistors each, joined with NVIDIA's in-house NV-High Bandwidth Interface (NV-HBI), which offers up to 10TB/sec of bandwidth. Keller's point is that standard Ethernet could have handled NVIDIA's interconnect needs at a fraction of the cost of developing proprietary links.
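For a rough sense of scale behind Keller's Ethernet argument, here's a back-of-the-envelope sketch in Python. The 800GbE port rate is our own assumption for a current high-end Ethernet link, not a figure from Keller's tweet or NVIDIA's spec sheet.

```python
# Back-of-the-envelope: how many commodity Ethernet ports would it
# take to match NV-HBI's quoted die-to-die bandwidth? The 800GbE
# figure is our assumption, not from Keller or NVIDIA.

NV_HBI_BANDWIDTH_TBS = 10.0        # NVIDIA's quoted NV-HBI bandwidth, TB/sec
ETHERNET_PORT_GBPS = 800           # assumed 800GbE port, gigabits/sec

ethernet_port_tbs = ETHERNET_PORT_GBPS / 8 / 1000   # gigabits/sec -> TB/sec
ports_needed = NV_HBI_BANDWIDTH_TBS / ethernet_port_tbs

print(f"One 800GbE port ~= {ethernet_port_tbs:.1f} TB/sec")
print(f"Ports to match NV-HBI: {ports_needed:.0f}")   # ~100
```

Matching a proprietary die-to-die link with commodity networking isn't apples-to-apples on latency or power, but it illustrates where Keller sees room to cut cost.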
LG expands self-developed on-device AI chip, will go into 46 products in LG's product families
LG has announced plans to expand its home appliance-specific on-device AI chip -- DQ-C -- to 46 models across eight product families.
The new LG DQ-C chip supports AI control, LCD display driving, and voice recognition, and is specialized for the operating systems inside home appliances. LG designed the DQ-C in-house -- previous chips were produced by other semiconductor companies -- and outsources manufacturing to TSMC, which makes the chip on its 28nm process node in Taiwan.
LG spent three years of research and development on the DQ-C chip, which was first announced in July 2023 and is used inside five LG products, including washing machines, dryers, and air conditioners. LG first introduced washing machines and dryers with the DQ-C under its Home Appliances 2.0 series in July 2023, and showed off the actual DQ-C chip at IFA last year.
Arm CEO says AI could end up consuming up to 25% of all power in the United States by 2030
The latest IEA Electricity 2024 report states that the electricity and power demands from the data center sector in countries like the United States and China will increase dramatically by the time 2030 rolls around. If you've been keeping track of some of the AI data center plans from the likes of Google, Meta, Amazon, and others, you're aware that this year alone, hundreds of thousands of high-performance NVIDIA GPUs are set to be installed in various locations.
The report states: "The AI server market is currently dominated by tech firm NVIDIA, with an estimated 95% market share. In 2023, NVIDIA shipped 100,000 units that consume an average of 7.3 TWh of electricity annually. By 2026, the AI industry is expected to have grown exponentially to consume at least ten times its demand in 2023."
According to Rene Haas, CEO of Arm (via The Wall Street Journal), AI accounts for around 4% of current power usage in the United States, and this could rise to around 25% by 2030. He also called generative AI models like ChatGPT "insatiable" when it comes to electricity.
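Putting those quoted figures together, here's a quick sanity-check of the arithmetic. The inputs are the numbers as stated by the IEA and Haas; the 8,760 hours/year and the per-unit averaging are our own working on top of them.

```python
# Sanity-check of the quoted IEA/Arm figures. Inputs are the numbers
# as stated; the per-unit averaging is our own arithmetic.

UNITS_SHIPPED_2023 = 100_000      # NVIDIA AI server units shipped, per IEA
ANNUAL_DEMAND_2023_TWH = 7.3      # their total annual consumption, TWh
GROWTH_FACTOR_2026 = 10           # "at least ten times" 2023 demand
HOURS_PER_YEAR = 8_760

# Average continuous draw per unit implied by the quoted totals
per_unit_kw = ANNUAL_DEMAND_2023_TWH * 1e9 / UNITS_SHIPPED_2023 / HOURS_PER_YEAR
print(f"Implied average draw per unit: {per_unit_kw:.1f} kW")   # ~8.3 kW

# The report's 2026 projection applied to the 2023 figure
print(f"Projected 2026 demand: {ANNUAL_DEMAND_2023_TWH * GROWTH_FACTOR_2026:.0f} TWh")
```

That roughly 8.3kW average per unit suggests the IEA's figure counts entire AI servers rather than individual GPUs.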
NVIDIA H100 AI GPU lead times improve: 4-month wait is now 2-3 month wait
NVIDIA's shortage of Hopper H100 AI GPUs is easing, with the previous 4-month wait now down to 8-12 weeks.
It was just a few months ago that we reported that NVIDIA AI GPU shipments had been "greatly accelerated," according to analysts, with waiting times of 8-11 months for AI GPU deliveries reduced to just 3-4 months. Now that 3-4 month wait is a 2-3 month wait.
In a new report from TrendForce, Dell is reportedly capitalizing on AI, with Dell Taiwan's General Manager saying on April 9 that the company is experiencing stronger server orders and demand in the Taiwanese market. This surge is thanks to AI needs within Taiwan's own corporate sector.
Next-gen AI with 'human-level cognition' is on the brink of being released
The next wave of powerful AI-powered chatbots is just around the corner as Meta and OpenAI prepare to release Llama 3 and GPT-5, the next generations of the large language models that power popular AI tools such as ChatGPT.
The underlying technology powering popular AI tools such as ChatGPT and DALL-E will soon be getting an upgrade, according to recent reports citing progress updates from Meta and OpenAI, the two tech giants leading the charge in AI development.
Meta's president of global affairs, Nick Clegg, said the company is preparing to release Llama 3 to the public "within the next month, actually less," and that this next generation of Llama will arrive with a suite of new features that Meta promises will be much more impressive than the current model.
Google announces Arm-based CPU for AI called Axion, 50% more performance than current-gen x86
With all the big tech companies investing billions in AI data centers, research, and the creation of generative AI models and tools, many are looking to create their own hardware as an alternative to NVIDIA's chips - while competing with AMD, Intel, and new AI-chip players like Microsoft.
Google is entering the race with its own Arm-based processor designed for the AI market. Like Google's tensor processing units (TPUs), which developers can access only via Google Cloud, the Arm-based CPU, called Axion, will apparently deliver "superior performance to x86 chips."
How much extra performance? According to Google, Axion offers 30% better performance than "general purpose Arm chips" and 50% better performance than "current generation x86 chips" as produced by Intel and AMD.
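One step those percentages leave implicit: taken at face value, they imply how Google rates current x86 against general-purpose Arm chips. A quick sketch of that arithmetic (our own inference from the quoted numbers, not a figure Google published):

```python
# Derive the implied x86 vs. general-purpose Arm ratio from Google's
# two quoted uplifts, taking both percentages at face value.

axion_vs_arm = 1.30   # Axion = 30% faster than "general purpose Arm chips"
axion_vs_x86 = 1.50   # Axion = 50% faster than "current generation x86 chips"

# If Axion = 1.30 x Arm and Axion = 1.50 x x86, then x86 = (1.30/1.50) x Arm
x86_vs_arm = axion_vs_arm / axion_vs_x86
print(f"Implied x86 performance vs. general-purpose Arm: {x86_vs_arm:.0%}")  # ~87%
```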
Meta's next-gen in-house AI chip is made on TSMC's 5nm process, with LPDDR5 RAM, not HBM
Meta has just teased its next-gen AI chip -- MTIA -- which is an upgrade over its current MTIA v1 chip. The new MTIA chip is made on TSMC's newer 5nm process node, with the original MTIA chip made on 7nm.
The new Meta Training and Inference Accelerator (MTIA) chip is "fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity" for Meta's unique requirements. We've seen the best AI GPUs on the planet use HBM memory -- HBM3 on NVIDIA's Hopper H100 and AMD's Instinct MI300 series AI chips -- while Meta opts for low-power DRAM (LPDDR5) instead of HBM or server-grade DRAM.
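To put that memory trade-off in rough numbers, here's a sketch comparing one HBM3 stack against one LPDDR5 channel. The per-pin rates and bus widths below are commonly cited figures for each memory type, not MTIA or H100 specs.

```python
# Rough bandwidth comparison: one HBM3 stack vs. one LPDDR5 channel.
# Figures are commonly cited rates, not MTIA or H100 specs.

# HBM3: ~6.4Gb/s per pin across a 1024-bit stack interface
hbm3_stack_gbs = 6.4 * 1024 / 8          # ~819 GB/s per stack

# LPDDR5: 6400MT/s on a 32-bit channel
lpddr5_channel_gbs = 6.4 * 32 / 8        # ~25.6 GB/s per channel

print(f"HBM3 stack:     ~{hbm3_stack_gbs:.0f} GB/s")
print(f"LPDDR5 channel: ~{lpddr5_channel_gbs:.1f} GB/s")
print(f"Per-device gap: ~{hbm3_stack_gbs / lpddr5_channel_gbs:.0f}x")
```

LPDDR5 gives up raw bandwidth but trades it for lower cost and power and easier capacity scaling, which lines up with Meta's stated focus on "the right balance of compute, memory bandwidth, and memory capacity."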
The social networking giant's original MTIA chip was its first-generation AI inference accelerator, designed in-house with Meta's AI workloads in mind. The company says its deep learning recommendation models are "improving a variety of experiences across our products".
AMD's upgraded Instinct MI350 with newer 4nm node, HBM3E rumored for later this year
AMD has already confirmed it will have refreshed variants of its Instinct MI300 series AI and HPC processors in the second half of this year, with a tweaked Instinct MI350X featuring ultra-fast HBM3E memory.
AI GPU competitor NVIDIA has its current Hopper H100 AI GPU with HBM3 memory, while its newly announced H200 AI GPU features ultra-fast HBM3E memory -- the world's first AI GPU with HBM3E memory. The next-gen Blackwell B200 AI GPU ships with ultra-fast HBM3E memory as standard.
Market research firm TrendForce recently teased AMD's new Instinct MI350X. The firm says the new Instinct MI350X will feature chiplets made on TSMC's newer 4nm process node, an enhanced version of TSMC's 5nm-class process node. The TSMC N4 node will allow AMD to choose between increasing performance and lowering power consumption on its tweaked Instinct MI350X compared to the MI300 series AI GPUs.
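For reference, here's the per-stack uplift HBM3E brings over HBM3, using commonly cited per-pin data rates (our assumption; AMD hasn't published MI350X memory specs):

```python
# Per-stack bandwidth uplift from HBM3 to HBM3E, using commonly cited
# per-pin data rates over the standard 1024-bit stack interface.

BUS_WIDTH_BITS = 1024
hbm3_pin_gbps = 6.4      # commonly cited HBM3 per-pin rate, Gb/s
hbm3e_pin_gbps = 9.2     # commonly cited HBM3E per-pin rate, Gb/s

hbm3_stack_gbs = hbm3_pin_gbps * BUS_WIDTH_BITS / 8      # ~819 GB/s
hbm3e_stack_gbs = hbm3e_pin_gbps * BUS_WIDTH_BITS / 8    # ~1178 GB/s

print(f"HBM3:  ~{hbm3_stack_gbs:.0f} GB/s per stack")
print(f"HBM3E: ~{hbm3e_stack_gbs:.0f} GB/s per stack")
print(f"Uplift: {hbm3e_stack_gbs / hbm3_stack_gbs - 1:.0%}")   # ~44%
```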