Artificial Intelligence News - Page 8
Apple co-founder claims Elon Musk's company is trying to kill people with AI
The co-founder of Apple, one of the biggest technology companies on the planet, has thrown some shade at Tesla's AI developments, saying if you wanted to study how AI can kill a human, you should get a Tesla vehicle.
Apple co-founder Steve Wozniak
In a recent interview with CNN, Apple co-founder Steve Wozniak discussed various topics across the technology space, one of which was the development of artificial intelligence (AI) and its potential dangers. Wozniak was asked during the interview if he ever speaks to Tesla CEO Elon Musk, to which he answered that he's never met Musk in person but admires some of his technological achievements while disdaining others.
Wozniak praised Musk's efforts to push large swaths of the population toward electric vehicles but believes the Tesla CEO has made a few empty promises or has fallen short of the capabilities he promised for his Tesla vehicles. This tune from Wozniak comes as no surprise, as the Apple co-founder has long held a critical stance against Tesla's Autopilot feature, saying that the current state of the technology is nowhere near the reality Elon Musk has promised.
This viral AI-generated beer commercial is nightmare fuel you need to see
With the rise of artificial intelligence and its rapidly improving capabilities, many individuals are worried their jobs are at risk of being replaced. Here's an example of why that future may not necessarily be right around the corner.
A video created entirely with AI-powered tools has gone viral on Twitter, and surprisingly, it's a commercial for a beer. The clip, called "Synthetic Summer," is a 30-second video that showcases an AI-generated house party with many people enjoying the advertised beverage. The commercial was created by Helen Power and Chris Boyle from the London-based production company Privateisland.tv.
According to Ars Technica, which wasn't able to reach Power or Boyle for comment before publishing, the video was seemingly created using Runway's Gen-2 AI model, which the publication says can generate short video clips from user prompts in the same way OpenAI's ChatGPT generates responses to text prompts. The 30-second clip was paired with Smash Mouth's "All Star," the song made iconic by Shrek, playing over a backyard barbecue scene.
Skyrim ChatGPT mod shows the future of gaming will be powered by AI
Classic RPG titles such as Skyrim or The Witcher 3: Wild Hunt are known for being some of the greatest worlds to get lost in, with a large variety of characters to interact with and seemingly never-ending adventures to undertake.
However, gamers eventually reach the end of these games, and with the aforementioned titles in particular, many choose to replay them in a different way for an alternative experience. Unfortunately, there are limits to these alternative experiences, and one of them is the dialogue players can have with NPCs. Skyrim is one of the most successful games ever made and is known for its iconic dialogue, but what if NPCs had an infinite amount of dialogue and players could ask them whatever they wanted?
A very early example of that has been created by modder Art From The Machine, who has taken the language model powering OpenAI's ChatGPT and combined it with xVASynth for text-to-speech and Whisper for speech-to-text. So, what has Art From The Machine achieved? Players can speak into their microphones and talk directly to NPCs, who understand what the player is saying. Whisper converts the player's speech to text, that text is fed into ChatGPT's language model to generate a response, and the response is then voiced through xVASynth so the NPC can reply appropriately.
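To make that loop concrete, here is a minimal sketch of how such a speech-to-NPC pipeline could be wired together in Python. This is not Art From The Machine's actual code: it uses OpenAI's public Whisper and chat APIs (via the pre-1.0 openai Python client) as stand-ins, and a placeholder speak() function where the mod would hand the text to xVASynth.

```python
# Minimal sketch of the speech -> LLM -> speech loop described above.
# NOT the mod's real code: OpenAI's public APIs stand in for the mod's internals,
# and speak() is a placeholder for the call into xVASynth.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: pre-1.0 openai-python client


def transcribe(audio_path: str) -> str:
    """Speech-to-text via Whisper (the player's microphone input saved to a file)."""
    with open(audio_path, "rb") as audio_file:
        result = openai.Audio.transcribe("whisper-1", audio_file)
    return result["text"]


def npc_reply(player_text: str, npc_persona: str) -> str:
    """Generate an in-character NPC response with a ChatGPT model."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": f"You are {npc_persona}, an NPC in Skyrim. Stay in character."},
            {"role": "user", "content": player_text},
        ],
    )
    return response["choices"][0]["message"]["content"]


def speak(text: str) -> None:
    """Placeholder for xVASynth text-to-speech; the real mod voices the line here."""
    print(f"[NPC says] {text}")


if __name__ == "__main__":
    heard = transcribe("player_input.wav")                     # Whisper: speech -> text
    reply = npc_reply(heard, "Lydia, housecarl of Whiterun")   # ChatGPT: text -> text
    speak(reply)                                               # xVASynth: text -> speech
```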
Continue reading: Skyrim ChatGPT mod shows the future of gaming will be powered by AI (full post)
'Godfather of AI' quits Google to warn people about what he created
75-year-old Dr. Geoffrey Hinton spent more than a decade at Google developing the foundational technology behind artificial intelligence, and now "the Godfather of AI" has left the company to sound alarm bells about its potential dangers.
Dr. Geoffrey Hinton
In a new article published in The New York Times, it's explained that Hinton worked at Google for more than 10 years, developing the foundational technology used to create the AI systems we are seeing today. Hinton is renowned as a pioneer of artificial intelligence and is highly respected within his field, but he decided to end his long career at Google to discuss the dangers of AI and how a part of him regrets the role he played in its creation.
In an interview with the NYT, Hinton said, "I console myself with the normal excuse: If I hadn't done it, somebody else would have." He added, "It is hard to see how you can prevent the bad actors from using it for bad things." Hinton left Google so he could discuss the potential dangers of AI without it reflecting on Google, not to criticize the company itself. In fact, Hinton said that Google has acted "very responsibly."
Continue reading: 'Godfather of AI' quits Google to warn people about what he created (full post)
Samsung puts ChatGPT in its crosshairs after staff leak internal source code
Early last month, Samsung staff accidentally leaked internal source code to OpenAI's ChatGPT, sounding alarm bells both within the company and outside of it.
There were at least three instances of Samsung employees sharing confidential information with OpenAI's AI-powered chatbot, ChatGPT. The first was a staff member pasting the source code of a confidential database into ChatGPT and asking the AI to check it for errors. Another involved an employee sharing code with ChatGPT for optimization, and the last was a request for ChatGPT to convert an internal Samsung video of a meeting into minutes.
Now, Bloomberg News has read a new memo issued to staff notifying them of a policy change. According to the memo, Samsung is seriously concerned about the rise of AI-powered tools and how they may affect its intellectual property. In an effort to reduce the risk of leaking confidential data, the company has placed a widespread ban on generative AI tools on employee-issued devices. Samsung staff are now prohibited from having any generative AI tools or applications on company-owned computers, tablets, and phones, and on any of Samsung's internal networks.
Scientists teach AI to 'read minds and transcribe thought'
Scientists announced on Monday that they have created a way for an artificial intelligence system to transcribe what people are thinking by feeding it scans of their brain activity.
The new system was developed by researchers at The University of Texas at Austin, and according to the study published in the journal Nature Neuroscience, the goal behind the technology is to assist people who are mentally conscious but physically unable to speak, such as those who have suffered strokes. The study was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
The researchers explain that the AI model was trained on people who participated in several sessions of functional magnetic resonance imaging (fMRI). These long hours of recorded brain activity were then fed into GPT-1, an early predecessor of GPT-4, the language model powering the popular AI chatbot ChatGPT. The scientists then trained the model to predict how each person's brain would respond to hearing speech, using short stories the participants listened to during the fMRI sessions.
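The published decoder is considerably more sophisticated, but the core encoding-model idea - learn to predict brain activity from language features of heard speech, then pick whichever candidate sentence best explains a new scan - can be illustrated with a toy sketch. Everything below is synthetic data; the real system works with actual fMRI recordings and GPT-1 features.

```python
# Toy illustration of the encoding-model idea behind the UT Austin decoder:
# learn to predict brain activity from language features of heard speech,
# then pick whichever candidate sentence best explains new brain activity.
# All data here is synthetic; the real system uses fMRI recordings and GPT-1 features.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_features, n_voxels = 200, 64, 512

# Pretend these are language-model features of the stories a participant heard...
train_features = rng.normal(size=(n_train, n_features))
# ...and the corresponding fMRI responses (generated from a hidden linear map plus noise).
true_weights = rng.normal(size=(n_features, n_voxels))
train_brain = train_features @ true_weights + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit a ridge-regression encoding model: features -> predicted brain activity.
lam = 1.0
W = np.linalg.solve(
    train_features.T @ train_features + lam * np.eye(n_features),
    train_features.T @ train_brain,
)

# Decoding step: given new brain activity, score candidate sentences by how well
# the encoding model's prediction for each candidate matches the observed scan.
candidates = rng.normal(size=(5, n_features))       # features of 5 candidate sentences
true_candidate = 3
observed_brain = candidates[true_candidate] @ true_weights + 0.1 * rng.normal(size=n_voxels)

errors = [np.linalg.norm(c @ W - observed_brain) for c in candidates]
print("best-matching candidate:", int(np.argmin(errors)))  # should print 3
```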
Continue reading: Scientists teach AI to 'read minds and transcribe thought' (full post)
NVIDIA's new software can stop AI chatbots making fools of themselves (or worse)
NVIDIA has unveiled a new piece of software, NeMo Guardrails, that will ensure chatbots driven by large language models (LLMs) - such as ChatGPT (the engine of Microsoft's Bing AI) - stay on track in various ways.
The biggest problem the software sets out to resolve is what's known as 'hallucinations,' occasions when the chatbot goes awry and makes an inaccurate or even absurd statement.
These are the kind of incidents that were reported in the early usage of Bing AI and Google's Bard. They're often embarrassing episodes, frankly, which erode trust in the chatbot for obvious reasons.
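NeMo Guardrails is an open-source Python library, and NVIDIA's published quickstart pattern looks roughly like the sketch below: rails are written in Colang, attached to an LLM, and prompts that match a rail get a canned, on-topic response instead of a free-form generation. Treat the model settings and the example rail here as illustrative assumptions rather than NVIDIA's recommended configuration.

```python
# A minimal sketch of wiring up NeMo Guardrails, based on the library's published
# quickstart; the Colang rules and model settings below are illustrative assumptions.
from nemoguardrails import LLMRails, RailsConfig

colang_content = """
define user ask about politics
  "what do you think about the election"
  "which party should I vote for"

define bot refuse politics
  "I'm a product assistant, so I won't weigh in on politics."

define flow politics rail
  user ask about politics
  bot refuse politics
"""

yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
"""

# Build the rails configuration from the Colang rules and the model settings above.
config = RailsConfig.from_content(colang_content=colang_content, yaml_content=yaml_content)
rails = LLMRails(config)

# Requests matching the rail receive the canned response instead of a free-form LLM answer.
reply = rails.generate(messages=[{"role": "user", "content": "Which party should I vote for?"}])
print(reply["content"])
```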
Here's how much it costs OpenAI to run ChatGPT every day
OpenAI's ChatGPT pioneered the current wave of AI-powered chatbots, such as Microsoft's Bing Chat and Google's Bard. But how much does it cost the developers of these chatbots to keep them up and running?
A new report from The Information cites Dylan Patel, chief analyst at semiconductor research firm SemiAnalysis, who said that OpenAI could be paying as much as $700,000 a day to keep ChatGPT's servers running. So, why does ChatGPT cost so much to run? It's relatively simple: ChatGPT requires a large amount of computing power to process each prompt and generate an appropriate response. Patel spoke to Insider and said that his initial estimate was based on OpenAI's GPT-3.5 model, which is far less powerful than OpenAI's most recent model, GPT-4.
Patel says that GPT-4 would cost the company much more money simply because that language model has many times more parameters. Furthermore, speaking to Forbes, Patel and Afzal Ahmad, another analyst at SemiAnalysis, said it likely cost tens of millions of dollars to train ChatGPT's underlying language models, but that is nothing compared to ongoing operational, or inference, costs. The analysts said that ChatGPT's inference costs "exceed the training costs on a weekly basis".
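For a sense of scale, here is the back-of-the-envelope math those figures imply. Only the $700,000-a-day estimate comes from the report; the daily query volume below is a placeholder assumption used purely to show how a per-query cost would be derived.

```python
# Back-of-the-envelope math on the figures quoted above. Only the $700,000/day
# estimate comes from SemiAnalysis; the query volume is a hypothetical placeholder.
DAILY_COMPUTE_COST_USD = 700_000          # SemiAnalysis estimate for GPT-3.5-era ChatGPT
ASSUMED_QUERIES_PER_DAY = 200_000_000     # hypothetical traffic figure, not from the report

annual_cost = DAILY_COMPUTE_COST_USD * 365
cost_per_query = DAILY_COMPUTE_COST_USD / ASSUMED_QUERIES_PER_DAY

print(f"Annualized inference cost: ${annual_cost:,.0f}")    # ~$255,500,000 per year
print(f"Implied cost per query:    ${cost_per_query:.4f}")  # ~$0.0035 at the assumed volume
```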
Continue reading: Here's how much it costs OpenAI to run ChatGPT every day (full post)
EU watchdog issues urgent warning on ChatGPT and risks AI poses to consumers
The European Consumer Organisation (BEUC) is calling for consumer protection bodies in Europe to investigate ChatGPT and similar chatbot technologies to ascertain how much risk they might represent to the public.
Microsoft's Bing AI is powered by ChatGPT and is clearly ahead of Google's Bard in the chatbot arena right now (Image Credit: Microsoft)
Reuters reports that the BEUC - an umbrella group covering consumer protection organizations across 32 countries - is stepping up to challenge whether such chatbots could be problematic, particularly in terms of their influence on young people.
The worry is that the AI's responses to queries may appear authoritative and true - particularly to younger consumers and children - but cannot be relied upon, and are often factually incorrect to some degree (perhaps even a large one).
AI-generated Drake song featuring The Weeknd forced to be taken offline
An AI-generated song that attracted millions of plays has been removed after Universal Music Group (UMG) discovered its popularity and the artists featured on the track.
The song, called 'Heart on My Sleeve,' was released by an anonymous TikTok user called Ghostwriter977, and it featured vocals from Aubrey "Drake" Graham and Abel Makkonen "The Weeknd" Tesfaye. But these vocals didn't come from the artists themselves; they were created using artificial intelligence-powered tools that were fed voice samples of the artists to produce imitated voices that sound uncannily similar. At the moment, it remains unclear whether the entire song was created with AI tools or just the artists' voices.
Regardless of the intricacies of how the song was made, it generated more than 1,000,000 streams on Spotify, and Ghostwriter977's TikTok video of the song was viewed more than 15 million times. Additionally, a YouTube video of the song gained more than 275,000 views, and its creator commented on it, saying, "this is just the beginning". Overall, the response to the song was very positive, with some people saying it was the best Drake song released this year. Others recognized the quality of the song and immediately commented on how good AI is getting at imitating an artist's vocals.