Artificial Intelligence - Page 44
Get the latest AI news, covering cutting-edge developments in artificial intelligence, generative AI, ChatGPT, OpenAI, NVIDIA, and impressive AI tech demos.
NVIDIA, TSMC, SK hynix form 'triangular alliance' for next-gen AI GPUs and HBM4 memory
SK hynix is gearing up to strengthen its "triangular alliance" with NVIDIA and TSMC for next-gen AI GPUs and HBM4 memory.
The South Korean company will attend SEMICON Taiwan on September 4, with SK hynix president Kim Joo-sun delivering a keynote speech at the event's CEO Summit, the first time the company has taken such a prominent role at the event.
Around 1,000 companies attend SEMICON, a major event in the semiconductor space. Companies like TSMC attend to show off their latest semiconductor equipment and technologies, build new relationships, and strengthen old ones.
MediaTek's in-house Arm-based AI server chip on TSMC's 3nm process expected in 2H 2025
MediaTek is pressing ahead with new server processors based on the Arm architecture, in both CPU and GPU form, built on TSMC's newer 3nm process node. The new Arm-based MediaTek AI chips are expected to launch in the second half of 2025.
In a new report from UDN, we hear that mass production is aiming for the second half of 2025, with orders expected from large cloud service providers (CSPs). MediaTek hasn't responded to the rumors, of course, but industry analysis shows that the AI server market is "rising rapidly," reports UDN, and that high-end AI models require high-performance computing (HPC) chips from major manufacturers like NVIDIA and AMD.
However, HPC chips consume far too much power for workloads that are mostly AI inference, so there's no need for them in this field. The mid-to-low-end AI server market is now growing, generating new demand, and low-power Arm architecture processors have become a "new target" for major CSP manufacturers.
NVIDIA raises orders 25% for TSMC for its next-gen Blackwell AI GPUs amid strong AI demand
TSMC is reportedly preparing to start production of NVIDIA's next-gen Blackwell AI GPU platform, and with strong customer demand, NVIDIA has reportedly increased its AI chip orders at TSMC by 25%.
In a new report from UDN, we hear that NVIDIA has amped up Blackwell AI GPU orders by 25% at TSMC. This shows that the insatiable AI demand isn't slowing down. TSMC's performance in the second half of 2024 is going to be bonkers, and 2025 will be even bigger (for both companies).
NVIDIA's next-generation Blackwell AI GPU family will usher in new performance levels, with major manufacturers like Amazon, Dell, Google, Meta, Microsoft, and more to use Blackwell AI GPUs in their new AI servers, and demand already exceeds expectations.
Samsung Foundry wins first 2nm AI chip order, and stole a TSMC client in the process
Samsung Foundry has proudly announced that it has secured its first 2nm AI chip order from a Japanese company, taking one of TSMC's long-standing customers away.
In a new press release, the South Korean electronics giant said it secured a Japanese company to fab its new 2nm AI chip, with Taejoong Song, Corporate VP at Samsung Electronics, explaining: "This order is pivotal as it validates Samsung's 2nm GAA process technology and Advanced Package technology as an ideal solution for next-generation AI accelerators. We are committed to closely collaborating with our customers ensuring that the high performance and low power characteristics of our products are fully realized."
The Japanese company in question is Preferred Networks, a leading Japanese AI firm involved in R&D focused on deep learning workloads. Preferred Networks is working to vertically integrate "the AI value chain from chips to supercomputers," giving businesses a way to run their own in-house AI clusters.
Amazon AWS shows off next-gen Graviton4 processor: 3x compute power and memory, 30% better perf
Amazon AWS has just shown off its fourth-generation general purpose processor -- the new Graviton4 -- with 3x the compute power and memory over Graviton3.
The next-gen Graviton4 chip also has 75% more memory bandwidth than Graviton3 and 30% more performance. Amazon AWS adds that the new Graviton4 processors are rented out for $0.02845 per second of compute power. Yahoo! Finance reports that the price-performance ratio is "crucial" for AWS, as it uses its proprietary chips to power its cloud infrastructure and servers.
Rahul Kulkarni, Amazon's director of product management for Compute and AI, said: "Collectively it's delivering more price performance, which means for every dollar spent, you get a lot more performance".
Planning begins for world's most powerful GPU cluster with 100,000 GPUs
The race to create the world's most powerful artificial intelligence system has certainly heated up, and by the looks of things, it isn't stopping anytime soon, with multiple tech companies flocking to NVIDIA for the powerful data center GPUs used to train AI models.
It was only a few months ago that Musk confirmed xAI was building a massive AI factory with Dell and Supermicro, and inside these huge server farms will be NVIDIA GPUs used to train xAI's Grok model. To "catch up" to the competition and make Grok a viable AI solution that is as good as, if not better than, leading models from OpenAI and Microsoft, Musk plans on throwing 100,000 Hopper-based GPUs into a single system.
The SpaceX founder explained in a post on X that xAI contracted 24,000 H100 (Hopper) GPUs from Oracle, which are being used to train Grok 2. However, Musk said xAI will build its 100,000 H100 system by itself, as that will result in the "fastest time to completion". He added that the decision to build the 100,000 GPU system internally came down to the fact that the company's "fundamental competitiveness depends on being faster than any other AI company."
Continue reading: Planning begins for world's most powerful GPU cluster with 100,000 GPUs (full post)
DOJ seizes AI-enhanced social media bot farm pretending to be American
The Department of Justice (DOJ) has discovered and seized what is likely just a fraction of the bot farms plaguing the internet.
The DOJ announced on Tuesday that it seized a bot farm linked to Russia's state-owned publication RT. US authorities claim the entire bot farm can be traced back to a single employee. The farm consisted of more than 900 social media accounts designed to masquerade as accounts owned by Americans, with the goal of posting a massive amount of content online at once.
The report states the accounts posted content about the Ukraine-Russia war, including videos of Russia's President Vladimir Putin justifying the invasion. The RT employee set up the bot farm by acquiring two domain names from Arizona-based company Namecheap. These domain names were then used to create two email servers, which in turn were used to create nearly 1,000 bot accounts across social media platforms - 968 in total.
Continue reading: DOJ seizes AI-enhanced social media bot farm pretending to be American (full post)
Google DeepMind CEO: current AI models aren't even at the IQ level of a domestic cat
The world of AI is being forced upon us whether we like it or not, and for all the bragging these tech giants do, the CEO of Google DeepMind says that current AI models are not even as smart as a domestic cat.
In a recent chat with Tony Blair, the former Prime Minister of Britain, Google DeepMind CEO Demis Hassabis compared the intelligence of current AI models to that of a cat during the Future of Britain Conference 2024, organized by the Institute for Global Change.
Hassabis talked about his work not being focused on AI but rather on AGI (AI = artificial intelligence, AGI = artificial general intelligence), and that is how he is looking at the computer versus cat comparison. Current AI models can write, paint, create music, and more in a human-like fashion, but a domestic house cat has more intelligence. Hassabis said: "At the moment, we're far from human-level intelligence across the board. But in certain areas like games playing (AI is) better than the best people in the world".
Wells Fargo predicts AI power demand to skyrocket by 8050% by 2030
Wells Fargo is projecting that AI power demand will surge by an incredible 8050% by 2030, from 8TWh in 2024 up to an incredible 652TWh in 2030.
The world of AI is exploding in more ways than one: today's high-end AI GPUs are power hungry, with NVIDIA's H100 in its SXM form factor using 700W of power on its own. NVIDIA's next-generation B200 AI GPU will push that to 1200W per GPU, so it's easy to see how these AI power demands are going to get out of control.
AMD's previous-gen Instinct MI250 AI accelerators draw a peak of 560W of power, while the company's new MI300X AI accelerator consumes 750W at peak, roughly a 34% increase gen-over-gen. Intel doesn't escape this either: its Gaudi 2 AI accelerator uses 600W, while the new Gaudi 3 AI accelerator uses 900W of power at peak (a 50% increase).
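These generational jumps, and the headline Wells Fargo projection, are all simple percentage-increase calculations. A minimal sketch that checks the figures quoted in this article (all wattage and TWh numbers come from the article itself):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage increase going from old to new."""
    return (new - old) / old * 100

# Wells Fargo projection: 8 TWh (2024) -> 652 TWh (2030)
print(round(pct_increase(8, 652)))        # 8050

# AMD Instinct MI250 (560W peak) -> MI300X (750W peak)
print(round(pct_increase(560, 750)))      # 34

# Intel Gaudi 2 (600W) -> Gaudi 3 (900W peak)
print(round(pct_increase(600, 900)))      # 50
```

Note that the MI250-to-MI300X step works out to about 34%, not 50%, given the peak wattages quoted; only the Gaudi 2-to-Gaudi 3 jump is a true 50% increase.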
Continue reading: Wells Fargo predicts AI power demand to skyrocket by 8050% by 2030 (full post)
SK hynix looking for semiconductor engineers: 48 positions in HBM, FinFET transistor experts
SK hynix is looking for semiconductor engineers to help fill 48 positions related to HBM chips, including manufacturing process engineers to help increase yields and improve testing, FinFET transistor experts, and former Samsung staffers.
A new report from The Korea Economic Daily says the battle for dominance in the AI and HBM space is heating up between South Korean rivals Samsung and SK hynix, particularly in the HBM4 memory chip business. SK hynix designs and produces most of its semiconductors in-house, including HBM memory, but for HBM4, the company is outsourcing manufacturing to a foundry or contract chipmaker, with all signs pointing to TSMC.
On top of that, SK hynix is "aggressively" looking for the top talent in the industry -- mostly in South Korea, where Samsung and SK hynix are battling it out -- to advance its HBM technology and oversee tech outsourcing. SK hynix is aiming for the throat, looking at hiring as many Samsung engineers as it can.