Artificial Intelligence - Page 44
Discover the latest in artificial intelligence - including generative AI breakthroughs, ChatGPT updates, and major advancements from OpenAI, Google DeepMind, Anthropic, and xAI. Learn how NVIDIA is driving AI innovation with cutting-edge hardware, and explore impressive real-world demos showcasing the future of AI technology.
Samsung wins advanced chip packaging order from NVIDIA for AI GPUs, TSMC isn't enough
Samsung has reportedly won a contract with NVIDIA to provide the AI GPU giant with advanced 2.5D packaging.
The news comes from TheElec, whose sources say Samsung's Advanced Package (AVP) team will provide an interposer and I-Cube, its 2.5D package, to NVIDIA. Other companies will produce the High-Bandwidth Memory (HBM) and GPU wafers, with the 2.5D packaging housing the chip dies (CPU, GPU, I/O, HBM, and others) placed horizontally onto the interposer.
Samsung calls its 2.5D packaging technology I-Cube, while TSMC calls its 2.5D packaging CoWoS (Chip-on-Wafer-on-Substrate). NVIDIA's entire fleet of A100 and H200 series AI GPUs uses 2.5D packaging, and more importantly, the monster new 208 billion transistor Blackwell B200 AI GPU uses the same advanced packaging.
OpenAI's Sam Altman and Jony Ive are teaming up on a new personal AI device, but they need cash
Jony Ive, Apple's former head of design, is reportedly working with OpenAI CEO Sam Altman on a new AI-powered personal device with the pair now seeking funding for the new project.
The news, shared by The Information, means that the pair have teamed up on what could be a new device similar to the Humane AI pin or something along those lines. Notably, Altman is also a major investor in Humane so there are clear links there.
Details on exactly what the product will be are hard to come by right now, but it apparently won't be like a smartphone, something that will surely be music to the ears of Apple's executives. This also isn't the first time we've heard that the pair are working together, after information first surfaced last fall. However, things seem to have progressed somewhat, with the two now thought to be seeking funding to the tune of $1 billion.
AI safety expert predicts a p(doom) of 99.999999%. What does that mean? Well, it isn't good
Could AI spell doom for humankind eventually? Depending on which expert you talk to, the chances vary considerably, but one researcher definitely has a gloomy (and doomy) opinion - one that Elon Musk doesn't share.
Business Insider reported on revelations made at the recent Abundance Summit (held last month), which included a 'great AI debate' where Musk estimated the risk of AI ending humanity was "about 10% or 20% or something like that."
Obviously that's something akin to wild guesswork, but the general gist of the billionaire's philosophy is that we should push ahead with AI development as the probable positive outcomes outweigh any negative scenario.
Apple's new AI system can 'see' and could be a game-changer for Siri
Apple is diving head-first into artificial intelligence-powered systems, and according to reports citing the Apple researchers behind these new systems, one in particular is designed to take on OpenAI's GPT products.
Reports indicate that Apple is developing the ReaLM system, which stands for "Reference Resolution As Language Modeling", a new system that is designed to make interacting with AI much more natural. Additionally, ReaLM is able to "see" on-screen content, with the researchers behind the project saying it outperforms OpenAI's GPT-4, the underlying technology powering ChatGPT, in determining context and interpreting linguistic expressions.
Additionally, the researchers behind the project believe ReaLM is "an ideal choice" for a context-deciphering system that could exist "on-device without compromising on performance". So, how would it work? Imagine asking Siri to show you a list of grocery stores near your location. Once Siri brings up that list, you could then say, "Call the bottom one." With ReaLM implemented, Siri would identify the bottom option and place the call. Apple researchers say ReaLM outperformed GPT-4 in this context-deciphering area.
Continue reading: Apple's new AI system can 'see' and could be a game-changer for Siri (full post)
Samsung shows off next-gen 3D DRAM tech, should hit the market after 2030
Samsung has just unveiled its next-generation 3D DRAM technology for next-generation memory solutions, with a single-chip capacity of over 100GB, a massive leap over current limitations in DRAM technology.
Samsung made the announcement at the recent Memcon 2024 conference. With DRAM line widths expected to fall below 10nm in the coming years, current memory architectures are getting close to their scaling limits. This is where 3D DRAM will come into play, boosting memory capacity and reducing its footprint.
Samsung talked about two key technologies for 3D DRAM: Vertical Channel Transistors and Stacked DRAM. Vertical Channel Transistors (VCT) represent a fundamental change in transistor design: by rotating the current flow channel from horizontal to vertical, Samsung aims to massively reduce the transistor's footprint. The caveat is that VCT requires much higher precision during the etching process.
US Government wants tech companies to go nuclear to meet the power demands of AI data centers
The AI boom is in full swing, with Microsoft and OpenAI teaming up to build a $100 billion AI supercomputer, Amazon planning to spend $150 billion on data centers, and Meta planning to install 50,000 NVIDIA H100 GPUs by the end of 2024. These are mind-boggling projects, and with powerful GPU hardware at the heart of them, you can be sure that the power bills will be astronomical.
Governments and big tech are aware of the "AI power issue" and are looking at nuclear energy as a potential solution. US Energy Secretary Jennifer Granholm spoke with Axios, confirming plans to accelerate discussions with companies like Microsoft, Google, and Amazon about hosting "small nuclear plants" next to their massive data centers.
"AI itself isn't a problem because AI could help solve the problem," Granholm said, nodding toward the AI boom as a good thing.
OpenAI makes ChatGPT extra free but with one big catch
OpenAI has decided to make ChatGPT even more accessible than it already is, as users now don't have to make an account to use the artificial intelligence-powered tool.
TechCrunch reports that, starting today, users will no longer have to make an account to access the world's most popular AI-powered chatbot. OpenAI, the company behind the tool, told the publication that the even freer version will use the same large language model that logged-in users get, but with some changes. Visiting chat.openai.com will now bring users straight into a conversation with ChatGPT, but users who aren't logged in won't get access to the AI's full range of features, such as saving chats, using custom instructions, and more.
Notably, users without an account will still be able to stop OpenAI from using their chats to train its model, which can be done by heading to the question mark in the lower right-hand corner, clicking "Settings," and then navigating to the disable data tracking toggle. Furthermore, the even freer ChatGPT will come with "slightly more restrictive content policies," which are seemingly not specified. When asked by TechCrunch, an OpenAI spokesperson responded with the below message.
Continue reading: OpenAI makes ChatGPT extra free but with one big catch (full post)
DARPA official says AI is used in 70% of its programs, AI-powered F-16 fighter jets tested
The Defense Advanced Research Projects Agency (DARPA) has announced it's developing artificial intelligence (AI) that is "trustworthy for the Defense Department" in making life-or-death recommendations to warfighters, said Matt Turek, deputy director of DARPA's Information Innovation Office.
At a Center for Strategic and International Studies event, Turek said that AI, machine learning, and autonomy are being used in "about 70%" of DARPA's programs in "some form or another." The push into advanced AI development is such a priority because it helps "prevent an unexpected breakthrough in technology" or a "strategic surprise" by adversaries that could be developing advanced AI capabilities of their own.
DARPA is looking for "transformative capabilities and ideas from industry and academia," according to Turek; to source them, DARPA runs various challenges in which teams from the private sector can win millions of dollars in prizes. DARPA recently held the Artificial Intelligence Cyber Challenge, using generative AI technologies, including large language models (LLMs), to automatically discover and fix vulnerabilities in open-source software, some of which runs critical infrastructure in the United States.
Critical security fixes issued for NVIDIA's ChatRTX AI Chatbot, so make sure you update
NVIDIA recently launched the beta for its AI-powered ChatRTX app, a generative AI chatbot that runs locally on GeForce RTX 30 and RTX 40 Series hardware with at least 8GB of VRAM. With ChatRTX, being able to run AI locally versus in the cloud is a smart move, as TensorRT-LLM optimizations and GPU AI acceleration are a big part of NVIDIA's entire lineup.
If you're an early adopter of ChatRTX, you should probably update to the latest March 2024 build. The UI contained a couple of 'Medium' and 'High' severity security vulnerabilities. According to the security bulletin, the more dangerous of the two (given an 8.2 rating) lets potential attackers gain access to system files. This exploit could lead to an "escalation of privileges, information disclosure, and data tampering."
The second security vulnerability, rated 6.5, doesn't sound much better. The exploit allows attackers to run "malicious scripts in users' browsers," which can cause denial of service, information disclosure, and even code execution.
Samsung's next-gen Mach-2 AI accelerator chip gets development sped up, coming sooner
Samsung is still working on its in-house Mach-1 AI inferencing chip. CEO Kyung Kye-hyun, who is in charge of the chip business, teased a second-gen Mach-2 AI accelerator in a new Instagram post.
Samsung Electronics CEO Kyung Kye-hyun said: "Client interest in inference-committed Mach-1 is increasing. Some of the clients want to use Mach in large-scale applications with more than 1 trillion parameters, which justifies the faster-than-expected development of Mach-2. We should get down to preparation".
The new Mach-1 AI accelerator was announced at Samsung Electronics' recent shareholder meeting, but few details were released: all we know is that it will be used for AI inferencing and will launch in early 2025. Samsung recently formed a new HBM team focused on increasing productivity and quality to ensure HBM leadership against South Korean rival SK hynix.
Ex-Google chip designers launch MatX startup: will develop AI chips specifically for LLMs
A couple of ex-Google chip designers have left the US search giant, forming a new MatX startup to build AI processors specifically designed for LLMs (Large Language Models).
Mike Gunter and Reiner Pope used to work at Google and have formed MatX with one objective: design next-generation silicon specifically for processing the data needed to fuel large language models (LLMs). LLMs are the foundation on which the generative AI world sits, powering the likes of ChatGPT from OpenAI, Gemini from Google, and other generative AI platforms.
Gunter used to focus on designing hardware, like chips, to run AI software, while Pope wrote the AI software itself for Google. Google has been working hard at building its own in-house AI processors, its Tensor Processing Units (TPUs), which were first designed before LLMs became a thing and were too generic for those tasks at the time.
Hackers using AI-powered attacks: listening to your keyboard to learn your passwords as you type
Kaspersky's in-house team of cybersecurity experts is warning about new "acoustic side-channel attacks," or ASCAs, which use sophisticated AI to listen to your keyboard and work out what you're typing... email addresses, passwords, phone numbers, private messages, and more.
Hackers using this new ASCA method rely on the sounds your keyboard makes as you type; with the right equipment, they can use AI to analyze those sounds and possibly decode the exact letters you're typing.
ASCAs are another type of side-channel attack, exploiting unintended information leakage within a system; they're dangerous because they target indirect channels like power consumption and electromagnetic emissions... or the sounds of your keyboard.
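The core idea above can be sketched in a few lines: each key press produces a slightly different sound, so a model trained on labeled recordings can classify new presses. Real attacks apply machine learning to audio spectrograms; this illustration fakes per-key "acoustic fingerprints" as number tuples, and all profiles and values are invented for the example.

```python
import math

# Hypothetical per-key acoustic fingerprints, as if learned from
# labeled training recordings of each key being pressed.
trained_profiles = {
    "a": (0.82, 0.10, 0.31),
    "s": (0.40, 0.55, 0.22),
    "d": (0.15, 0.90, 0.47),
}

def classify_keystroke(sample: tuple[float, float, float]) -> str:
    """Return the trained key whose fingerprint is nearest to the sample."""
    return min(
        trained_profiles,
        key=lambda k: math.dist(trained_profiles[k], sample),
    )

# A captured press whose sound profile lands closest to "s".
print(classify_keystroke((0.42, 0.50, 0.25)))  # s
```

The defense implications follow directly: anything that makes per-key sounds less distinguishable (randomized key feedback, quieter switches, noise) degrades the classifier.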
Samsung forms dedicated 'HBM team' to boost AI memory chip production to beat SK hynix
Samsung has set up a dedicated HBM (High Bandwidth Memory) team inside its memory chip division. The new HBM team will increase production yields as the South Korean giant continues developing its sixth-generation AI memory, HBM4, and its new Mach-1 AI accelerator.
In a new report from KED Global, we're hearing about the new HBM team that's in charge of the development and sales of DRAM and NAND flash memory "according to industry sources." Hwang Sang-joon, corporate executive vice president and head of DRAM Product and Technology at Samsung, will lead the new HBM team.
Kyung Kye-hyun, head of Samsung's semiconductor business, said in a note posted on social media: "Customers who want to develop customized HBM4 will work with us. HBM leadership is coming to us thanks to the dedicated team's efforts".
Microsoft and OpenAI team up for $100 billion AI supercomputer codenamed Stargate
Microsoft and OpenAI have been "drawing up plans" for a data center project featuring an AI supercomputer codenamed "Stargate," with millions of next-gen, specialized server chips to power OpenAI's artificial intelligence.
The news comes from The Information and "three people who have been involved in the private conversations about the proposal." According to a person who spoke to OpenAI founder and CEO Sam Altman about it and had viewed some of Microsoft's initial cost estimates, the new data center and AI supercomputer codenamed "Stargate" would cost as much as $100 billion to build.
The gigantic explosion of AI across virtually every industry has driven demand for data centers capable of handling far heavier workloads than traditional facilities, with multiple big players announcing and building new AI-focused data centers.
Amazon to spend $150 billion on datacenters for expected 'explosion in demand' for AI
Amazon plans to spend nearly $150 billion over the next 15 years on data centers, as the company expects an "explosion in demand" for AI applications and other digital, cloud-based services.
Microsoft is the king of data centers right now, but Amazon Web Services (AWS) has seen record-low growth over the last year as business customers cut costs and delayed projects. Insatiable AI demand has injected new energy into Amazon, with AWS looking to secure land and power systems for its new data centers.
Kevin Miller, an AWS vice president who oversees the company's data centers, said: "We're expanding capacity quite significantly. I think that just gives us the ability to get closer to customers".
Amazon AWS will join Google and Microsoft with Taiwan-based data centers in 2024
Amazon AWS has announced that it will build data centers in Taiwan, with "specific progress" expected in 2024. This will see the US cloud provider joining other American cloud companies like Google and Microsoft, which have been setting up data centers in Taiwan.
Wang Dingkai, general manager of Amazon AWS Taiwan and Hong Kong, said on March 28 that the "computer room implementation plan continues," and that it is also subject to "dynamic adjustments". Dingkai added that "there will be good news to share with you soon".
Microsoft first announced in 2020 that it would build a new data center in Taiwan, "quietly carrying out related projects" over the past 2 to 3 years. It's rumored that there will be "specific progress" in Microsoft's data center in Taiwan. The company says that since this is a large-scale project, it will be completed in stages and that if everything continues going to plan, there will be an announcement in the future.
YouTube is preparing an AI feature that skips the boring parts of videos
YouTube is constantly testing new features in what the company calls "experiments" and the latest experiment to surface online is a new AI feature that's called "jump ahead".
Most YouTube users are aware that double-tapping either side of the screen rewinds or fast-forwards the video by 10 seconds, with each additional press increasing the time jumped forward or backward. But what if you could double-press the screen and jump right to the next most interesting part of the video? That feature is currently being worked on over at YouTube, and it's powered by AI that analyzes user watch data and picks the next most interesting part of the video.
The feature works like this: double-tapping the screen brings up a prompt that says "jump ahead". Tapping that prompt fast-forwards the clip to what YouTube considers the next best point of interest. Notably, YouTube says the feature will only work for specific eligible videos, without specifying what criteria a video needs to meet to become eligible. Furthermore, users will need a YouTube Premium account to access the feature.
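A minimal sketch of the watch-data idea: given aggregate viewership per second of a video, "jump ahead" could skip from the current position to the later timestamp viewers watched most. YouTube hasn't published its method, so the function and data below are purely illustrative assumptions.

```python
def jump_ahead(watch_counts: list[int], position: int) -> int:
    """Return the second with peak viewership after `position` (illustrative)."""
    remaining = watch_counts[position + 1:]
    if not remaining:
        return position  # already at the end of the video
    # Index of the most-watched second among those still ahead.
    offset = max(range(len(remaining)), key=remaining.__getitem__)
    return position + 1 + offset

# Hypothetical viewership per second of a short clip; a spike at second 6.
counts = [50, 48, 30, 20, 22, 25, 90, 70, 40]
print(jump_ahead(counts, 2))  # 6
```

A production system would smooth the counts and look for local peaks rather than the global maximum, but the principle — let aggregate watch behavior mark the interesting moments — is the same.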
Endless Family Guy AI stream broken with ear-bruising screaming
In June 2023, an endless livestream, "AI Peter," was put up on YouTube, which broadcasts AI-generated Family Guy "episodes" to the world. However, that stream has been hijacked by viewers attempting to push the AI powering the episodes to its absolute brink, resulting in an ear-bruising experience.
The AI Peter stream features 3D models of Family Guy characters and locations, with viewers submitting pitches for each episode that are then generated and showcased to the entire stream. The stream uses AI-generated text and speech tools to produce the content, and with viewers able to submit pitches for the episodes, it wasn't long before some wanted to see how far they could push the AI before it broke.
On March 25, X user "abcdent" attempted to do that very thing, writing in their post that a "few months ago" they paid $4 to submit a prompt that "single handedly halved the viewership". The prompt resulted in Brian Griffin screaming incoherently at the camera while Cleveland Brown attempted to list 50 bacterial infections. Viewers of the stream asked the host to skip the episode, but since it's all automated, "it just kept going".
Continue reading: Endless Family Guy AI stream broken with ear-bruising screaming (full post)
South Korean search giant Naver moves from NVIDIA, orders $752 million of AI chips from Samsung
Samsung will make the next-generation Mach-1 artificial intelligence (AI) chips for Naver Corporation, a deal worth up to 1 trillion won ($752 million USD).
With its new deal with Samsung, South Korean search giant Naver will significantly reduce its reliance on NVIDIA for its AI processors. Samsung's System LSI business division has agreed to supply AI chips to Naver, with the two companies in "final talks to fine-tune the exact volume and prices," according to "people familiar with the matter," reports KED Global.
Samsung expects the price of the next-gen Mach-1 AI chip to be around 5 million won ($3,756 USD or so), with Naver wanting to receive between 20,000 and 150,000 units of the new AI accelerator, according to the same sources. Naver, a leading Korean online platform giant, will use the next-gen Mach-1 AI chips in its servers for AI inferencing, replacing chips it previously received from NVIDIA.
ZOTAC unveils new AI-powered ZBOX Mini PCs with Intel and AMD AI CPU options
ZOTAC has just announced three brand-new compact form-factor Mini AI PC systems powered by the latest processors and NPUs for AI workloads from Intel and AMD.
The new ZOTAC Mini AI PC systems feature Intel Core Ultra "Meteor Lake" and AMD Ryzen 7840HS "Hawk Point" APUs, both with integrated NPUs (Neural Processing Units) that are used for AI workloads. First, we've got ZOTAC's new ZBOX M Series PC with Intel's latest Core Ultra 7 155H and Core Ultra 5 125H "Meteor Lake" CPUs.
The ZOTAC ZBOX Edge MI672 and MI652 feature a beautiful low-profile design that looks fantastic. Thanks to the LPE cores inside the Meteor Lake CPU, it's also power efficient. Intel includes integrated Arc graphics that pack up to 2x the performance of previous-gen chips, so you can enjoy some light-level gaming on the ZBOX Edge MI672/MI652 Mini AI PC systems.