Artificial Intelligence - Page 46
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
Apple and NVIDIA busted swiping YouTube videos to train AI models
In early April, YouTube sent a clear message to AI model developers: downloading data from the platform and using it to train AI models violates YouTube's terms of service.
That sentiment was reinforced the same week, when a Google spokesperson told The New York Times that any "unauthorized scraping or downloading of YouTube content" is prohibited. However, a new report from Proof News has found that YouTube has been scraped for its data anyway, and that some of the biggest tech companies advancing AI have used it to train their models.
According to a Proof News investigation, subtitles from 172,535 YouTube videos were siphoned from more than 48,000 channels, including prominent creators such as MKBHD (19 million subscribers), MrBeast (289 million), Jacksepticeye (31 million), PewDiePie (111 million), Stephen Colbert, John Oliver, Jimmy Kimmel, and more. Notably, what was taken wasn't the videos themselves but their subtitle files.
Continue reading: Apple and NVIDIA busted swiping YouTube videos to train AI models (full post)
Microsoft's $1.5 billion AI deal sets off national security alarms, White House involved
Microsoft has struck a $1.5 billion deal with Group 42 (G42), an artificial intelligence research and development firm operating in the United Arab Emirates that is chaired by UAE national security advisor His Highness Sheikh Tahnoun bin Zayed Al Nahyan.
The deal has sparked security concerns, with two House committee chairs sending a public letter to the White House urging it to investigate. The letter follows Sheikh Mohamed bin Zayed Al Nahyan's visit to Beijing to strengthen AI-related ties, and comes less than a year after G42 was investigated for its association with China. US intelligence officials fear the $1.5 billion deal could eventually put advanced US AI technologies within China's reach.
Notably, the UAE's AI minister admitted the US concerns about the deal are valid, as they would be for "any country that has adversaries". Despite these seemingly widespread concerns, Microsoft and G42 say they are making the deal as transparent as possible, with Microsoft President Brad Smith stating that G42 won't gain access to proprietary US AI technologies such as processors, AI model design tools, and more. Moreover, Smith said the UAE's access would sit in a "vault within a vault".
Samsung to manufacture logic dies for next-gen HBM4 AI memory using 4nm node
Samsung will use its in-house 4nm foundry process to mass produce the logic dies for its next-generation HBM4 memory, directly taking on South Korean competitor SK hynix and TSMC in the race for AI memory supremacy.
In a new report from the Korea Economic Daily (KED), we're hearing that Samsung will use its 4nm foundry process for the logic die of HBM4 memory chips (sixth-generation HBM), according to KED's industry sources. The logic die sits at the base of the stacked DRAM dies and is one of the core components of an HBM chip used on AI processors.
SK hynix, Samsung, and Micron all make HBM, and all use logic dies in the latest HBM3E memory, but upcoming HBM4 memory requires a foundry process that can deliver the customized functions demanded by AI chip makers like NVIDIA, AMD, and Intel.
Intel's next-gen Falcon Shores AI chip: ordered on TSMC 3nm node, CoWoS advanced packaging
Intel has reportedly placed 3nm orders with TSMC for its next-generation AI chip, codenamed "Falcon Shores", which suggests TSMC's CoWoS advanced packaging will also be used. Falcon Shores is expected to enter production in late 2025.
In a new report from CNyes, TSMC has reportedly received yet another order from Intel, this one for its next-gen AI chip, Falcon Shores, on TSMC's newer 3nm process node with CoWoS advanced packaging technology. Falcon Shores is aimed at challenging NVIDIA's tight grip on the AI chip market.
The new Falcon Shores AI chip design has been finalized (taped out) and will reportedly enter mass production at the end of 2025. Intel acquired Habana Labs back in 2019, with Habana initially maintaining an independent operating model, but things have since changed, and Intel will now combine Habana's technology with its own GPU technology.
NVIDIA, TSMC, SK hynix form 'triangular alliance' for next-gen AI GPUs and HBM4 memory
SK hynix is gearing up to strengthen its "triangular alliance" with NVIDIA and TSMC for next-generation AI GPUs and HBM4 memory.
The South Korean company will attend SEMICON Taiwan on September 4, where SK hynix president Kim Joo-sun will deliver a keynote speech at the event's CEO Summit, marking the first time the company has taken on such a large role at the event.
Around 1,000 companies attend SEMICON, a major event in the semiconductor space. Companies like TSMC attend to show off their latest semiconductor equipment and technologies, build new relationships, and strengthen old ones.
MediaTek's in-house Arm-based AI server chip on TSMC's 3nm process expected in 2H 2025
MediaTek is pressing ahead with new server processors based on the Arm architecture, in both CPU and GPU form, using TSMC's newer 3nm process node. The new Arm-based MediaTek AI chips are expected to launch in the second half of 2025.
In a new report from UDN, we hear that mass production is targeted for the second half of 2025, with orders expected from large cloud service providers (CSPs). MediaTek hasn't responded to the rumors, of course, but industry analysis shows that the AI server market is "rising rapidly," reports UDN, and that high-end AI models require high-performance computing (HPC) chips from major manufacturers like NVIDIA and AMD.
However, HPC chips consume a great deal of power, and much AI inference work doesn't need that class of hardware, so there's little need for HPC chips in this segment. The mid-to-low-end AI server market is now growing, generating new demand, and low-power Arm-based processors have become a "new target" for major CSPs.
NVIDIA raises orders 25% for TSMC for its next-gen Blackwell AI GPUs amid strong AI demand
TSMC is reportedly preparing to start production of NVIDIA's next-gen Blackwell AI GPU platform, and amid strong customer demand, NVIDIA has reportedly increased its AI chip orders at TSMC by 25%.
In a new report from UDN, we hear that NVIDIA has amped up Blackwell AI GPU orders by 25% at TSMC. This shows that the insatiable AI demand isn't slowing down. TSMC's performance in the second half of 2024 is going to be bonkers, and 2025 will be even bigger (for both companies).
NVIDIA's next-generation Blackwell AI GPU family will usher in new performance levels, with major players like Amazon, Dell, Google, Meta, and Microsoft set to use Blackwell AI GPUs in their new AI servers, and orders already exceed capacity expectations.
Samsung Foundry wins first 2nm AI chip order, and stole a TSMC client in the process
Samsung Foundry has proudly announced that it has secured its first 2nm AI chip order from a Japanese company, taking one of TSMC's long-standing customers away.
In a new press release, the South Korean electronics giant said it secured a Japanese company to fab its new 2nm AI chip, with Taejoong Song, Corporate VP at Samsung Electronics, explaining: "This order is pivotal as it validates Samsung's 2nm GAA process technology and Advanced Package technology as an ideal solution for next-generation AI accelerators. We are committed to closely collaborating with our customers ensuring that the high performance and low power characteristics of our products are fully realized".
The Japanese company in question is Preferred Networks, a leading Japanese AI company involved in R&D focused on deep learning workloads. Preferred Networks aims to vertically integrate "the AI value chain from chips to supercomputers", giving businesses a way to run their own in-house AI clusters.
Amazon AWS shows off next-gen Graviton4 processor: 3x compute power and memory, 30% better perf
Amazon AWS has just shown off its fourth-generation general purpose processor -- the new Graviton4 -- with 3x the compute power and memory over Graviton3.
The next-gen Graviton4 chip also has 75% more memory bandwidth than Graviton3 and 30% more performance. Amazon AWS adds that the new Graviton4 processors are rented out for $0.02845 per second of compute power. Yahoo! Finance reports that the price-performance ratio is "crucial" for AWS, as it uses its proprietary chips to power its cloud infrastructure and servers.
Rahul Kulkarni, Amazon's director of product management for Compute and AI, said: "Collectively it's delivering more price performance, which means for every dollar spent, you get a lot more performance".
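For a sense of scale, the quoted per-second rate can be turned into hourly and daily figures with quick back-of-the-envelope arithmetic. This is an illustrative sketch only, assuming simple linear per-second billing; the rate is the figure quoted above, not an official AWS price-list entry:

```python
# Illustrative cost arithmetic for a per-second compute rate.
# Assumes simple linear billing; the rate below is the figure
# quoted in the article, not an official AWS price-list entry.
RATE_PER_SECOND = 0.02845  # USD per second of compute

hourly = RATE_PER_SECOND * 3600  # 3600 seconds in an hour
daily = hourly * 24

print(f"Hourly: ${hourly:,.2f}")  # $102.42
print(f"Daily:  ${daily:,.2f}")   # $2,458.08
```

At linear scaling, the "price performance" framing comes down to how much compute each of those dollars buys relative to alternative chips.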
Planning begins for world's most powerful GPU cluster with 100,000 NVIDIA GPUs
The race to create the world's most powerful artificial intelligence system has certainly heated up, and by the looks of things, it isn't stopping anytime soon, with multiple tech companies flocking to NVIDIA for the powerful data center GPUs used to train AI models.
It was only a few months ago that Musk confirmed xAI was building a massive AI factory with Dell and Supermicro, and inside these huge server farms will be NVIDIA GPUs used to train xAI's Grok model. To "catch up" to the competition and make Grok a viable AI solution that is as good as, if not better than, leading models from OpenAI and Microsoft, Musk plans on throwing 100,000 Hopper-based GPUs into a single system.
The SpaceX founder explained via a post on X that xAI contracted 24,000 H100 (Hopper) GPUs from Oracle, which are being used to train Grok 2. However, Musk said xAI will build its 100,000 H100 system itself, as that will result in the "fastest time to completion". Moreover, Musk wrote that the reason for building the 100,000 GPU system internally is that the company's "fundamental competitiveness depends on being faster than any other AI company."
Continue reading: Planning begins for world's most powerful GPU cluster with 100,000 NVIDIA GPUs (full post)