Artificial Intelligence News

All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, impressive AI demos & plenty more.

Microsoft gifts first-of-its-kind AI model to US intelligence agencies

Jak Connor | May 9, 2024 3:32 AM CDT

A new report from Bloomberg reveals Microsoft has created a new generative AI model that is designed specifically for US intelligence agencies.

The report states the main difference between this new AI model and others that power popular AI tools, such as ChatGPT, is that it's completely divorced from the internet, making it the first of its kind. Known AI models such as ChatGPT, DALL-E, and Microsoft's Copilot rely on cloud services to process prompts, train on data, and reach conclusions. However, the AI model now handed over to US intelligence agencies doesn't require any cloud services, meaning it operates with no internet access at all and is, therefore, far more secure.

Why do US intelligence agencies want an advanced AI model? According to the report, the model's isolation means top-secret information can now be input and analyzed securely, which will help intelligence agencies understand and filter through large swaths of classified information.

Continue reading: Microsoft gifts first-of-its-kind AI model to US intelligence agencies (full post)

Microsoft set to build $3.3 billion cloud campus to fuel AI growth

Jak Connor | May 9, 2024 3:01 AM CDT

Microsoft has snapped up the same location Foxconn acquired to build an LCD panel manufacturing plant, but instead of panels, Microsoft will be constructing a data center.

After years of delays, Foxconn's LCD manufacturing project at Mount Pleasant, Wisconsin, never materialized. Over the years, Microsoft snapped up more and more of the land originally set aside for Foxconn's project, eventually resulting in the Taiwanese company pulling out and Microsoft scooping up the rest of the site.

Microsoft's proposal for a data center features infrastructure and community improvements for the local area, with promises that it will upskill 100,000 residents across the state to be competent with generative AI technologies such as Microsoft Copilot, train and certify 3,000 local AI software developers, and train 1,000 cloud datacenter technicians. Moreover, Microsoft President Brad Smith backed the push for AI, saying it has the potential to revolutionize manufacturing plants, assist workers, and create more jobs.

Continue reading: Microsoft set to build $3.3 billion cloud campus to fuel AI growth (full post)

NVIDIA's next-gen R100 AI GPU: TSMC 3nm with CoWoS-L packaging, next-gen HBM4 in Q4 2025

Anthony Garreffa | May 8, 2024 10:15 PM CDT

NVIDIA is still cooking its new Blackwell GPU architecture and B200 AI GPU, and while we've had teases of the next-gen Vera Rubin GPU, we're now hearing the next-gen R100 AI GPU will enter mass production in Q4 2025.

According to a new post from industry insider Ming-Chi Kuo, NVIDIA's next-generation R-series AI chips, led by the R100 AI GPU, will enter mass production in Q4 2025, with the system/rack solution entering mass production in Q1 2026. NVIDIA's next-gen R100 will be made on TSMC's newer N3 process node, compared to the B100, which uses TSMC's N4P, with the R100 using TSMC's newer CoWoS-L packaging (the same as the B100).

NVIDIA's next-gen R100 AI GPU features around a 4x reticle design, compared to the B100's 3.3x reticle design, while the interposer size for the R100 "has yet to be finalized," with Kuo saying there are 2-3 options. The R100 will feature 8 x HBM4 units, while the GR200's new Grace CPU will use TSMC's N3 process (compared to TSMC's N5 for the GH200 and GB200's Grace CPUs).

Continue reading: NVIDIA's next-gen R100 AI GPU: TSMC 3nm with CoWoS-L packaging, next-gen HBM4 in Q4 2025 (full post)

US government plans to prevent AI software like ChatGPT getting to China

Jak Connor | May 8, 2024 8:27 AM CDT

The US government is reportedly preparing to make another move against China to prevent the nation from gaining access to the US's best artificial intelligence capabilities.

The Biden administration has already taken measures to prevent China from gaining AI supremacy by banning the exportation of specific high-end NVIDIA graphics cards, which are used to train the AI models, and proposing a rule that requires all US cloud companies to inform the government when foreign customers are using their cloud systems to train AI models.

According to reports, more guardrails are being considered by the Commerce Department, which plans to target the export of proprietary or closed-source AI models. The idea behind these new purported regulations is to prevent US-based AI giants such as the Microsoft-funded OpenAI, the company behind the popular AI tool ChatGPT, or Google DeepMind, creators of Gemini, from taking their world-leading AI models to the global market and selling them to the highest bidder.

Continue reading: US government plans to prevent AI software like ChatGPT getting to China (full post)

Meta AI boss confirms the company has purchased around $30 billion worth of NVIDIA AI GPUs

Anthony Garreffa | May 7, 2024 8:46 PM CDT

Meta has purchased 500,000 more AI GPUs, taking its total to 1 million AI GPUs, an investment valued at around $30 billion.

We heard about the gargantuan AI GPU hardware investment from Meta AI boss Yann LeCun at the AI Summit, where he also said that upcoming variations of the Llama 3 large language model are on the way. LeCun pointed to computational limitations and GPU costs as the things slowing the progress of AI.

OpenAI CEO Sam Altman plans to spend $50 billion a year on the development of artificial general intelligence (AGI), using 720,000 NVIDIA H100 AI GPUs that cost a hefty $21.6 billion. Microsoft is aiming for 1.8 million AI GPUs by the end of 2024, while OpenAI wants to have 10 million AI GPUs before the end of the year.
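
For context, a quick back-of-envelope check (our own arithmetic, not from the report) shows both dollar figures above line up with a rough price of about $30,000 per H100-class AI GPU:

```python
# Back-of-envelope check of the per-GPU price implied by the reported figures.
# The ~$30,000-per-GPU result is just an implied average, not a confirmed price.

openai_gpus = 720_000    # H100 AI GPUs reportedly planned by OpenAI
openai_spend = 21.6e9    # reported cost in USD

meta_gpus = 1_000_000    # Meta's reported total AI GPU count
meta_spend = 30e9        # reported value in USD

print(f"Implied price per GPU (OpenAI): ${openai_spend / openai_gpus:,.0f}")  # ~$30,000
print(f"Implied price per GPU (Meta):   ${meta_spend / meta_gpus:,.0f}")      # ~$30,000
```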

Continue reading: Meta AI boss confirms the company has purchased around $30 billion worth of NVIDIA AI GPUs (full post)

Sam Altman says AI will be able to 'know absolutely everything' about you

Jak Connor | May 7, 2024 5:07 AM CDT

In a new interview with MIT Technology Review, OpenAI CEO Sam Altman revealed what the future will look like as artificial intelligence-powered systems become more ingrained in our lives.

The CEO of one of the companies leading the charge in AI development began the interview by saying AI tools will replace the smartphone as the piece of technology we depend on most in our daily lives, and that the capabilities of AI will be so great that it will feel like "this thing that is off helping you." Altman went on to say the perfect AI app would be a "super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I've ever had, but doesn't feel like an extension."

Additionally, this level of AI could attempt things outside its known capabilities, fail, and then come back to the user with follow-up questions. The answers the user provides to those questions are then integrated into its second attempt at the task. So, what will power this crazy new AI? Altman believes there's a chance that we won't even need a specific piece of hardware to use this AI on the go, as the new app could simply access the cloud.

Continue reading: Sam Altman says AI will be able to 'know absolutely everything' about you (full post)

Apple is working on its own chip to run AI software in its data center servers

Anthony Garreffa | May 6, 2024 10:39 PM CDT

Apple is reportedly working on its own AI chip, according to The Wall Street Journal's sources, with the report saying the Cupertino-based giant is building AI chips for its data centers to run new AI-based features that'll be announced at WWDC 2024 next month.

The Worldwide Developers Conference (WWDC) is where Apple will unveil its plans for the future of AI across its products and services, with the WSJ reporting: "Apple has been working on its own chip designed to run artificial intelligence software in data center servers, a move that has the potential to provide the company with a key advantage in the AI arms race".

The project is called ACDC, which stands for Apple Chips in Data Center, but I can see some truly awesome marketing from Apple using the "ACDC" branding if they do it right. Come on, Apple... you know you're going to do it anyway.

Continue reading: Apple is working on its own chip to run AI software in its data center servers (full post)

Samsung establishes dream team of engineers to win AI chip orders from NVIDIA

Anthony Garreffa | May 6, 2024 9:53 PM CDT

Samsung has established a new dream team of engineers to secure HBM memory chip deals for AI GPUs from NVIDIA.

The news comes from South Korean outlet KED Global, which reports Samsung's new task force features about 100 "excellent engineers" who have been working on improving manufacturing yields and quality, with the first objective being to pass NVIDIA's tests.

According to industry insiders on Monday, NVIDIA CEO Jensen Huang asked Samsung to raise the yields and quality of the 8-layer and 12-layer HBM3E memory chips it wants to supply. HBM3E memory is the cornerstone of NVIDIA's next-gen Blackwell B200 AI GPUs, as well as the new beefed-up Hopper H200 AI GPU, with that HBM3E memory currently coming mostly from Samsung's South Korean HBM rival: SK hynix.

Continue reading: Samsung establishes dream team of engineers to win AI chip orders from NVIDIA (full post)

TSMC to expand advanced packaging capacity at 3 plants: CoWoS, SoIC, SoW for AI GPU demand

Anthony Garreffa | May 6, 2024 8:00 PM CDT

TSMC is focused on taking its advanced packaging production capacity to new levels, with the company boosting capacity at its Zhonghe, Nanka, and Jiake fabs, which are "all in the process of expanding production".

The news comes from Taiwanese media outlet Ctee, which reports that the Chiayi Science Park site was finalized this year, with TSMC expected to build two advanced packaging plants there first. The first phase of Jiake will hold its ground-breaking ceremony in the coming weeks and go into operation in the second half of the year.

The second phase of Jiake is expected to begin construction in Q2 2024 and enter operation in Q1 2027, still a few years away, which Ctee reports will help TSMC continue to expand its share of the AI and HPC market.

Continue reading: TSMC to expand advanced packaging capacity at 3 plants: CoWoS, SoIC, SoW for AI GPU demand (full post)

NVIDIA and AMD have reserved all of TSMC's CoWoS and SoIC advanced packaging for 2024 and 2025

Anthony Garreffa | May 6, 2024 6:59 PM CDT

AMD and NVIDIA are both pumping out new generations of AI hardware, with the two GPU giants eating all of TSMC's advanced packaging capacity for not just 2024, but also 2025.

A new report from UDN says TSMC's in-house CoWoS and SoIC advanced packaging production capacity has been booked out for this year and next, as AI GPU hardware orders aren't slowing down. TSMC expects revenue contributed by server AI processors to more than double this year, and once NVIDIA's new Blackwell B200 AI GPUs are being pumped out later this year, 2025 is going to be a bananas year for TSMC.

In response to the insatiable AI demand, TSMC is actively expanding its advanced packaging production capacity, with industry estimates that TSMC's monthly CoWoS production capacity will sit between 45,000 and 50,000 wafers by the end of 2024. That's a gigantic 3x increase over the 15,000 wafers per month in 2023, and capacity is expected to hit 50,000 in 2025.

Continue reading: NVIDIA and AMD have reserved all of TSMC's CoWoS and SoIC advanced packaging for 2024 and 2025 (full post)