Artificial Intelligence - Page 45
All the latest Artificial Intelligence (AI) news with plenty of coverage on new developments, AI tech, NVIDIA, OpenAI, ChatGPT, generative AI, impressive AI demos & plenty more - Page 45.
Samsung forms dedicated 'HBM team' to boost AI memory chip production to beat SK hynix
Samsung has set up a dedicated HBM (High Bandwidth Memory) team inside its memory chip division. The new HBM team will increase production yields as the South Korean giant continues developing its sixth-generation AI memory, HBM4, and its new Mach-1 AI accelerator.
In a new report from KED Global, we're hearing that the new HBM team sits within the division in charge of the development and sales of DRAM and NAND flash memory, "according to industry sources." Hwang Sang-joon, corporate executive vice president and head of DRAM Product and Technology at Samsung, will lead the new HBM team.
Kyung Kye-hyun, head of Samsung's semiconductor business, said in a note posted on social media: "Customers who want to develop customized HBM4 will work with us. HBM leadership is coming to us thanks to the dedicated team's efforts".
Microsoft and OpenAI team up for $100 billion AI supercomputer codenamed Stargate
Microsoft and OpenAI have been "drawing up plans" for a data center project to feature an AI supercomputer that is codenamed "Stargate" with millions of next-gen, specialized server chips to power OpenAI's artificial intelligence.
The news comes from The Information and "three people who have been involved in the private conversations about the proposal." According to a person who spoke to OpenAI founder and CEO Sam Altman about it and had viewed some of Microsoft's initial cost estimates, the new data center and AI supercomputer codenamed "Stargate" would cost as much as $100 billion to build.
The gigantic explosion of AI across virtually every industry has driven demand for data centers capable of handling far heavier workloads than traditional facilities, and multiple big players are now announcing and building new AI-focused data centers.
Amazon to spend $150 billion on datacenters for expected 'explosion in demand' for AI
Amazon plans to spend nearly $150 billion over the next 15 years on data centers, as the company expects an "explosion in demand" for AI applications and other digital, cloud-based services.
Microsoft is the king of data centers right now, while Amazon Web Services (AWS) has seen growth slow to record lows over the last year as business customers cut costs and delayed projects. Insatiable AI demand has injected new energy into Amazon, with AWS now looking to secure land and power systems for its new data centers.
Kevin Miller, an AWS vice president who oversees the company's data centers, said: "We're expanding capacity quite significantly. I think that just gives us the ability to get closer to customers".
Amazon AWS will join Google and Microsoft with Taiwan-based data centers in 2024
Amazon AWS has announced that it will build data centers in Taiwan, with "specific progress" expected in 2024. This will see the US cloud provider joining other American cloud companies like Google and Microsoft, which have been setting up data centers in Taiwan.
Wang Dingkai, general manager of Amazon AWS Taiwan and Hong Kong, said on March 28 that the "computer room implementation plan continues," and that it is also subject to "dynamic adjustments". Wang added that "there will be good news to share with you soon".
Microsoft first announced in 2020 that it would build a new data center in Taiwan, "quietly carrying out related projects" over the past 2 to 3 years. It's rumored that there will be "specific progress" in Microsoft's data center in Taiwan. The company says that since this is a large-scale project, it will be completed in stages and that if everything continues going to plan, there will be an announcement in the future.
YouTube is preparing an AI feature that skips the boring parts of videos
YouTube is constantly testing new features in what the company calls "experiments", and the latest experiment to surface online is a new AI-powered feature called "jump ahead".
Most YouTube users are aware that double-tapping either side of the screen rewinds or fast-forwards the video by 10 seconds, with each additional press increasing the time that is jumped forward or backward. But what if you could double-press the screen and jump straight to the next most interesting part of the video? That feature is currently being worked on over at YouTube, and it's powered by AI that analyses user watch data and picks the next most interesting part of the video.
The feature works like this: double-tapping the screen brings up a prompt that says "jump ahead". Tapping that prompt fast-forwards the clip to what YouTube considers the next best point of interest. Notably, YouTube says the feature will only work for specific eligible videos, but it hasn't specified what criteria a video needs to meet to become eligible. Furthermore, users will need a YouTube Premium account to access the feature.
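To illustrate the general idea (purely a hypothetical sketch, not YouTube's actual implementation), a "jump ahead" target could be chosen by aggregating how often each second of a video is watched or replayed and then skipping to the next peak past the current playback position. The function and data below are illustrative assumptions only.

```python
# Illustrative sketch only: NOT YouTube's implementation, just one way a
# "jump ahead" target could be picked from aggregated watch data.
from typing import Sequence

def next_interesting_point(watch_counts: Sequence[int], current_second: int,
                           min_skip: int = 10) -> int:
    """Return the timestamp (in seconds) of the most-watched moment that lies
    at least `min_skip` seconds past the current playback position."""
    start = current_second + min_skip
    if start >= len(watch_counts):
        return current_second  # nothing left to jump to, stay put
    # Pick the second with the highest aggregate watch count after `start`
    return max(range(start, len(watch_counts)), key=lambda t: watch_counts[t])

# Example: a 60-second video where seconds 42-45 are rewatched the most
heatmap = [3] * 40 + [5, 8, 20, 25, 22, 9] + [4] * 14
print(next_interesting_point(heatmap, current_second=12))  # -> 43
```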
Endless Family Guy AI stream broken with ear-bruising screaming
In June 2023, an endless livestream, "AI Peter," was put up on YouTube, which broadcasts AI-generated Family Guy "episodes" to the world. However, that stream has been hijacked by viewers attempting to push the AI powering the episodes to its absolute brink, resulting in an ear-bruising experience.
The AI Peter stream features 3D models of Family Guy characters and locations, with viewers submitting pitches for each episode that are then generated and showcased to the entire stream. The stream uses AI-generated text and speech tools to produce the content, and with viewers able to submit pitches for the episodes, it wasn't long before some of them wanted to see how far they could push the AI before it broke.
On March 25, X user "abcdent" attempted to do that very thing, writing in their post that a "few months ago" they paid $4 to submit a prompt that "single handedly halved the viewership". The prompt resulted in Brian Griffin screaming incoherently at the camera while Cleveland Brown attempted to list 50 bacterial infections. Viewers of the stream asked the host to skip the episode, but since it's all automated, "it just kept going".
Continue reading: Endless Family Guy AI stream broken with ear-bruising screaming (full post)
South Korean search giant Naver moves from NVIDIA, orders $752 million of AI chips from Samsung
Samsung will make the next-generation Mach-1 artificial intelligence (AI) chips for Naver Corporation, a deal worth up to 1 trillion won ($752 million USD).
With its new deal with Samsung, South Korean search giant Naver will significantly reduce its reliance on NVIDIA for its AI processors. Samsung's System LSI business division has agreed to supply AI chips to Naver, with the two companies in "final talks to fine-tune the exact volume and prices," according to "people familiar with the matter," reports KED Global.
Samsung expects the price of the next-gen Mach-1 AI chip to be around 5 million won (roughly $3,756 USD), with Naver wanting to receive between 150,000 and 200,000 units of the new AI accelerator, according to the same sources. Naver, a leading Korean online platform giant, will use the next-gen Mach-1 AI chips in its servers for AI inferencing, replacing the chips it currently sources from NVIDIA.
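As a rough sanity check of how those figures line up, the snippet below multiplies the reported per-chip price by the reported order range; the won-to-dollar rate is simply approximated from the article's own conversion of 1 trillion won to around $752 million.

```python
# Back-of-the-envelope check of the reported Naver/Samsung Mach-1 deal size.
# All figures come from the report above; the won-to-USD rate is approximated
# from the article's own conversion (1 trillion won ~= $752 million USD).
price_per_chip_won = 5_000_000                 # ~5 million won per Mach-1 chip
units_low, units_high = 150_000, 200_000       # reported order range
won_per_usd = 1_000_000_000_000 / 752_000_000  # ~1,330 won per USD

for units in (units_low, units_high):
    total_won = units * price_per_chip_won
    print(f"{units:,} units -> {total_won / 1e12:.2f} trillion won "
          f"(~${total_won / won_per_usd / 1e6:.0f} million USD)")
# 200,000 units lands at the deal's reported ceiling of 1 trillion won (~$752M).
```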
ZOTAC unveils new AI-powered ZBOX Mini PCs with Intel and AMD AI CPU options
ZOTAC has just announced three brand-new compact form-factor Mini AI PC systems powered by the latest processors and NPUs for AI workloads from Intel and AMD.
The new ZOTAC Mini AI PC systems feature Intel Core Ultra "Meteor Lake" and AMD Ryzen 7840HS "Hawk Point" APUs, both with integrated NPUs (Neural Processing Units) that are used for AI workloads. First, we've got ZOTAC's new ZBOX M Series PC with Intel's latest Core Ultra 7 155H and Core Ultra 5 125H "Meteor Lake" CPUs.
The ZOTAC ZBOX Edge MI672 and MI652 feature a beautiful low-profile design that looks fantastic. Thanks to the LPE cores inside the Meteor Lake CPU, it's also power efficient. Intel's integrated Arc graphics pack up to 2x the performance of previous-gen chips, so you can enjoy some light gaming on the ZBOX Edge MI672/MI652 Mini AI PC systems.
Scientists are using AI to make beer taste even better
Artificial intelligence is being used around the world to create some pretty incredible things, such as photorealistic video from text prompts, but it's also being used to make beer taste even better than it already does.
A new study published in the scientific journal Nature details Belgian researchers taking a machine learning model and feeding it 180,000 online beer reviews, along with feedback from a panel of 16 people, to create a new AI system capable of predicting how to make beer taste as good as possible. The panel sampled 250 beers for 50 attributes over three years, taking into account variables such as bitterness, sweetness, alcohol content, and malt aroma.
The newly trained model was then asked to improve the taste of beer by providing the best composition. The team of researchers then made changes to already commercially available beers before they were given to the sampling panel. The panel responded by giving the AI-altered beer a much higher score. It should be noted that creating beer is much more than just identifying the best ingredients, as the skill of the brewer is a massive factor in the end result.
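To give a flavor of the general technique (a minimal sketch under my own assumptions, not the researchers' actual pipeline or data), the idea is to train a regression model mapping measured beer attributes to an appreciation score, then search small recipe tweaks for the highest predicted score. The attribute names and data below are synthetic placeholders.

```python
# Minimal sketch of the general approach: NOT the study's actual pipeline.
# Train a regressor from measured attributes to a taste score, then ask it
# which small tweaks to an existing recipe are predicted to score best.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for measured attributes (bitterness, sweetness, ABV, malt aroma)
X = rng.uniform(0, 1, size=(500, 4))
# Synthetic stand-ins for panel/review appreciation scores
y = 5 + 2 * X[:, 1] - 1.5 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.2, 500)

model = GradientBoostingRegressor().fit(X, y)

# Perturb an existing recipe's attribute profile and keep the best-predicted tweak
base = np.array([0.6, 0.4, 0.5, 0.3])
candidates = base + rng.normal(0, 0.05, size=(200, 4))
best = candidates[model.predict(candidates).argmax()]
print("Predicted-best attribute profile:", best.round(2))
```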
Continue reading: Scientists are using AI to make beer taste even better (full post)
Quanta Computer to make NVIDIA GB200-based AI servers for Google, Amazon, and Meta
Quanta Computer is one of the largest OEM suppliers in the world, with new contracts won to build NVIDIA GB200 AI systems for the likes of Google, Amazon AWS, Meta, and some B200-based AI systems for Microsoft.
The company will have its first GB200 AI servers in testing in July or August "at the earliest," reports UDN, with mass production expected in September. Quanta holds "large OEM orders" for GB200 servers from Google, Amazon AWS, and Meta, which are provided as complete AI cabinets. Microsoft ordered some B200 servers, which means Quanta is building next-gen AI systems for all four major US cloud providers in one set of orders.
NVIDIA's new GB200 cabinet AI servers cost around $2-3 million each, so we can expect some major revenues for NVIDIA in the second half of this year once these orders are processed. UDN reports that Quanta is "optimistic" that as material shortages in the supply chain ease, AI server shipments will increase as soon as May or June, while the second half of 2024 is expected to be an "explosive period".