Artificial Intelligence - Page 25

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 25.

SK hynix looking for semiconductor engineers: 48 positions in HBM, FinFET transistor experts

Anthony Garreffa | Jul 9, 2024 12:35 AM CDT

SK hynix is looking for semiconductor engineers to help fill 48 positions related to HBM chips, including manufacturing process engineers to help increase yields and improve testing, FinFET transistor experts, and former Samsung staffers.

In a new report from The Korea Economic Daily, the battle for dominance in the AI and HBM space is heating up between South Korean rivals Samsung and SK hynix, particularly in the HBM4 memory chip business. SK hynix designs and produces most of its semiconductors in-house, including HBM memory, but for HBM4, the company is outsourcing manufacturing to a foundry or contract chipmaker, with all signs pointing to TSMC.

On top of that, SK hynix is "aggressively" looking for the top talent in the industry -- mostly in South Korea, where Samsung and SK hynix are battling it out -- to advance its HBM technology and oversee tech outsourcing. SK hynix is going for the throat, aiming to hire as many Samsung engineers as it can.

Continue reading: SK hynix looking for semiconductor engineers: 48 positions in HBM, FinFET transistor experts (full post)

SK Group leading rivals in developing glass substrates for AI chip packaging, made in the USA

Anthony Garreffa | Jul 8, 2024 11:33 PM CDT

South Korean giants Samsung Electronics and SK Group are "speeding up efforts" to secure future dominance in glass substrates, which would be a "game-changer" for the semiconductor and AI industries.

In a new report from the Korea Herald, Samsung and SK Group are reportedly speeding up the development of glass substrate semiconductor research, which could "drastically boost the data capacity and speed of semiconductors in the era of artificial intelligence."

Samsung and SK Group leaders have visited their respective business sites that are producing glass substrates, renewing their commitment to the game-changing semiconductor technology. Glass substrates will overcome the limits of conventional plastic substrates, heavily boosting the performance and power efficiency of future-gen semiconductors.

Continue reading: SK Group leading rivals in developing glass substrates for AI chip packaging, made in the USA (full post)

Apple's next-gen M5 chip will use TSMC SoIC advanced packaging for future Macs, AI servers

Anthony Garreffa | Jul 8, 2024 10:35 PM CDT

Apple is gearing up to use TSMC's latest SoIC advanced packaging technologies for its next-generation M5 chips as part of a two-pronged strategy for the company to power its future Macs and AI servers.

In a report from DigiTimes, we're learning that Apple will adopt TSMC's new SoIC (System on Integrated Chips) advanced packaging technology, which allows for 3D stacking of chips, providing improved electrical performance and thermal management versus traditional 2D chip designs.

The Economic Daily reported that Apple has expanded its cooperation with TSMC on its next-generation hybrid SoIC packaging designs, which combine thermoplastic carbon fiber composite molding technology and are reportedly in a small trial production phase. Apple hopes to have TSMC mass producing its next-gen M5 processors in 2025 and 2026 for future-gen Mac systems and AI servers.

Continue reading: Apple's next-gen M5 chip will use TSMC SoIC advanced packaging for future Macs, AI servers (full post)

Elon Musk's huge liquid-cooled Gigafactory AI supercomputers get praise from Supermicro CEO

Anthony Garreffa | Jul 7, 2024 6:46 PM CDT

Elon Musk is building a gigantic AI supercomputer at Tesla's Texas Gigafactory, with 350,000 liquid-cooled AI GPUs that Supermicro CEO Charles Liang recently praised. Check out his post on X below:

The Supermicro CEO was photographed next to Elon Musk with some AI server racks, where he said the duo will "lead the liquid cooling technology to large AI data centers". Liang estimates that Musk's move into the world of liquid-cooled AI supercomputers "may lead to preserving 20 billion trees for our planet" if more AI data centers move to liquid cooling.

Data centers consume monumental amounts of power, which Supermicro hopes to reduce through the use of liquid cooling, with the company claiming that direct liquid cooling could cut the electricity costs of cooling infrastructure by 89% compared to air cooling. In a previous tweet, the Supermicro CEO said the company's goal is "to boost DLC (direct liquid cooling) adoption from <1% to 30%+ in a year".

Continue reading: Elon Musk's huge liquid-cooled Gigafactory AI supercomputers get praise from Supermicro CEO (full post)

AMD teases 'sober people' ready to spend billions on '1.2 million GPU' AI supercomputer

Anthony Garreffa | Jul 4, 2024 8:37 PM CDT

AMD has received inquiries from "unknown clients" to build a colossal supercomputer that would house an incredible 1.2 million data center AI GPUs.

In a recent interview with The Next Platform, AMD's EVP and GM of the Data Center Solutions Group, Forrest Norrod, revealed that AMD has had inquiries from "unknown clients" that require an insane number of AI accelerators, confirming the news of the huge AI supercomputer.

1.2 million AI GPUs is a gargantuan amount of AI processing power: the world's current largest supercomputer -- Frontier -- features around 38,000 GPUs, so 1.2 million AI accelerators would be a mind-boggling ~30x that GPU count (and that's just the GPUs, let alone the CPUs).
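For a quick sanity check on that multiplier, the figures from the report can be plugged into a couple of lines of Python (both GPU counts are the approximate numbers cited above, not exact specs):

```python
# Approximate figures cited in the report
frontier_gpus = 38_000       # GPUs in Frontier, today's largest supercomputer
proposed_gpus = 1_200_000    # GPUs in the rumored AMD supercomputer

multiplier = proposed_gpus / frontier_gpus
print(f"{multiplier:.1f}x")  # roughly 31.6x, i.e. ~30x
```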

Continue reading: AMD teases 'sober people' ready to spend billions on '1.2 million GPU' AI supercomputer (full post)

There's an underground network smuggling NVIDIA AI GPUs into China, we're totally NOT surprised

Anthony Garreffa | Jul 4, 2024 4:20 AM CDT

China has had its access to the latest and greatest AI GPUs and AI accelerators limited through US restrictions and sanctions, with a network of buyers, sellers, and countries bypassing the US-led rules, smuggling the best AI chips into China anyway.

In a new report from The Wall Street Journal, we're learning that a 26-year-old Chinese student smuggled NVIDIA AI GPUs from Singapore into China last fall. The student packed six NVIDIA compute cards (or modules) into his suitcase alongside his personal belongings, each add-in board roughly the size of a portable Nintendo Switch console, and passed through the airport unnoticed.

The student declared the value of the AI cards at just $100 each, a tiny fraction of their real cost on the (growing) underground market, but he didn't raise any red flags when traveling through Singapore (which isn't worried about advanced AI chips being smuggled into China) or China, which is interested in getting said advanced AI chips, as well as collecting import duties.

Continue reading: There's an underground network smuggling NVIDIA AI GPUs into China, we're totally NOT surprised (full post)

Amazon Web Services' Trainium3 AI chips - over 1000W of power and liquid cooled

Kosta Andreadis | Jul 4, 2024 4:02 AM CDT

High-end AI hardware is power-hungry, and Amazon Web Services' (AWS) next-generation Trainium3 chip will consume over 1,000 watts of power and require liquid cooling to keep temperatures in check.

Prasad Kalyanaraman, Amazon's VP of infrastructure services, said that "the next generation will require liquid cooling," alluding to the upcoming Trainium3 AI chip. "When a chip goes above 1,000 watts, that's when they require liquid cooling," he added. For those keeping tabs on the incredible pace of AI chip development, this would put Amazon's next-gen chip on par with NVIDIA's beefiest Blackwell chip when it comes to power consumption.

AI hardware is becoming increasingly power-hungry. Although a 1kW power draw would make the Trainium3 a Blackwell competitor in that regard, word is that NVIDIA's next-gen Rubin architecture (which it's already talking about) will consume upwards of 1,500 watts on the high end.

Continue reading: Amazon Web Services' Trainium3 AI chips - over 1000W of power and liquid cooled (full post)

Apple rumored to announce partnership with Google at iPhone 16 event

Jak Connor | Jul 4, 2024 2:32 AM CDT

Reports point to Apple announcing a partnership with Google at the company's upcoming iPhone 16 event.

The information comes from Bloomberg's Mark Gurman, a known Apple insider who has previously revealed what Apple is cooking up behind the scenes ahead of time. According to Gurman, Apple is expected to announce a new partnership with Google at the iPhone 16 launch event, and the new partnership will involve introducing Google Gemini to the iPhone.

The upcoming Apple event is scheduled for September, and according to reports, Apple is looking to provide users with a variety of AI-powered tools in addition to the already announced ChatGPT integration with its creator, OpenAI, and local Apple Intelligence processing. Additionally, Gurman writes that Meta asked Apple if it wanted to adopt its Llama AI model, an offer the Cupertino company promptly refused, even declining a sit-down meeting with the Facebook and Instagram parent company.

Continue reading: Apple rumored to announce partnership with Google at iPhone 16 event (full post)

OpenAI's ChatGPT for Mac was storing your conversations exposing sensitive information

Jak Connor | Jul 4, 2024 1:01 AM CDT

OpenAI's ChatGPT for Mac was discovered to have a security flaw that made it extremely easy to find your chats with the AI-powered tool on your device: the conversations were stored in plain text, so anyone with access to the machine could read them.

The security flaw was demonstrated by Pedro José Pereira Vieito on Threads and replicated by The Verge. It appears that up until Friday last week, ChatGPT for macOS saved chat logs right after messages were sent, making them extremely easy to access by another app that Pereira Vieito built. The problem with this exploit?

If a user is having private conversations with ChatGPT that contain sensitive information, such as finances, passwords, etc, a bad actor with access to the computer would have an easy way of tracking and saving all of those conversations. OpenAI was alerted to the issue and rolled out an update that fixed the exploit, saying the conversations are now encrypted.

Continue reading: OpenAI's ChatGPT for Mac was storing your conversations exposing sensitive information (full post)

Panmnesia's new 'CXL Protocol' will have AI GPUs using memory from DRAM, SSDs with low latency

Anthony Garreffa | Jul 3, 2024 11:27 PM CDT

Panmnesia is a company you probably hadn't heard of until today, but the KAIST startup has unveiled cutting-edge IP that adds external memory to AI GPUs via the CXL protocol over PCIe, enabling new levels of memory capacity for AI workloads.

The current fleets of AI GPUs and AI accelerators use their on-board memory -- usually super-fast HBM -- but this is limited to smaller quantities like 80GB on the current NVIDIA Hopper H100 AI GPU. AMD and NVIDIA's next-gen AI chip offerings will usher in up to 141GB of HBM3E (NVIDIA's H200 AI GPU) and up to 192GB (NVIDIA's B200 AI GPU with HBM3E, and AMD's Instinct MI300X with HBM3).

But now, Panmnesia's new CXL IP will let GPUs access memory from DRAM and SSDs, expanding capacity beyond their built-in HBM... very nifty. The KAIST (Korea Advanced Institute of Science and Technology) startup bridges the connectivity with CXL over PCIe links, which means mass adoption of this new tech should be easy. Regular AI accelerators don't have the subsystems required to connect with and use CXL for memory expansion directly, relying instead on solutions like UVM (Unified Virtual Memory), which is slower and defeats the purpose completely... which is where Panmnesia's new IP comes into play.

Continue reading: Panmnesia's new 'CXL Protocol' will have AI GPUs using memory from DRAM, SSDs with low latency (full post)
