Artificial Intelligence News - Page 25
Artificial intelligence has created a new 1980s "Matrix" starring Jeff Goldblum as Morpheus, Viggo Mortensen as Neo, and Tommy Lee Jones as Agent Smith.
The team behind the project is a group of self-proclaimed science fiction enthusiasts who have used artificial intelligence to create what they call the world's first AI-created magazine. The magazine is called Infinite Odyssey, and a recent post on its official Instagram account shows a selection of the images the AI created. AI-generated images typically come with several abnormalities that stand out to the viewer, sometimes leaving the impression that 'something is off with this image'.
However, Infinite Odyssey's images don't necessarily give this reaction at first glance; only after a detailed inspection will a viewer with a keen eye spot the minuscule AI errors. In the above image, you can see the AI has added far too many fingers to Agent Smith's hand. Beyond this error, the AI has captured the idea of the Matrix, the art style, the coloring, and the seamless inclusion of different actors at a very impressive level.
A group of artists is banding together to seek damages from Stability AI, Midjourney, and DeviantArt for using copyrighted images to train AI art and image generators.
The US federal class-action lawsuit has been filed in San Francisco, with the group of artists represented by the Joseph Saveri Law Firm. The suit takes aim at these specific AI companies, alleging violations of the Digital Millennium Copyright Act alongside unlawful competition, with Stability AI's Stable Diffusion, Midjourney, and DeviantArt's DreamUp tool being the focus.
The current debate around AI-generated art and what constitutes original work is at the heart of the suit. As AI art and image generators are trained by scraping millions of images, they can, for example, recreate something new specifically in the style of an established artist.
The rise of AI-generated art has been impressive, with several services and tools opening the door to near-limitless and near-instantaneous creations. Without going into too much detail, like all AI-based applications, it's all about learning from vast quantities of sources and styles.
And with that comes something of a legal grey area: for AI-generated art to be a powerful tool, it needs art created by humans to work from. This makes the news that popular image source Getty Images is suing Stability AI a notable moment in the rise of AI-generated art.
Getty Images claims that the company's Stable Diffusion platform has 'unlawfully' scraped millions of images from its site, infringing copyright.
A Princeton neuroscientist has warned that artificial intelligence-powered chatbots such as ChatGPT are sociopaths without the one thing that makes humans special.
In a new essay detailed by The Wall Street Journal, Princeton neuroscientist Michael Graziano explains that AI-powered chatbots are sociopaths without consciousness and that until developers can implement consciousness, they will pose a real danger to humans. For those who don't know, AI chatbots such as ChatGPT are designed to have human-like conversations by remembering what was written by the human earlier in the conversation, providing near real-time, thorough answers to questions.
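The "remembering what was written earlier" behavior described above can be sketched in a few lines: each turn is appended to a running history, and the whole history is handed to the model so replies can reference earlier messages. This is a toy illustration, not ChatGPT's actual implementation; `fake_model_reply` is a hypothetical stand-in for a real language model.

```python
# Toy sketch of conversational context: the full message history is
# passed to the model on every turn, which is what lets a chatbot
# refer back to things the user said earlier.

def fake_model_reply(history):
    # A real model would condition on the whole history; this stand-in
    # just reports how many prior messages it can "see".
    return f"(model saw {len(history)} prior messages)"

def chat_turn(history, user_message):
    history.append({"role": "user", "content": user_message})
    reply = fake_model_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "Hello!")
reply = chat_turn(history, "What did I just say?")
print(reply)  # the second turn sees both earlier messages plus its own
```

Because the history grows with every turn, real systems eventually truncate or summarize it to fit the model's context window.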
While the dangers of AI aren't so prevalent now, in the future, that could very well change as these sophisticated tools are further upgraded and developed. To make them more human-like, Graziano proposes that they be taught human traits such as empathy and prosocial behavior. Notably, the neuroscientist says that these systems will need a form of implemented consciousness to understand these traits and, in turn, adjust their responses to align more with human values.
A new artificial intelligence system developed by Microsoft is reportedly capable of cloning anyone's voice from just a three-second audio example.
The new AI is called VALL-E, and according to a newly released paper, the system is a neural codec language model that is a text-to-speech synthesizer. According to the report, VALL-E is capable of learning a specific voice and then synthesizing it to be able to say whatever is desired. Additionally, the report claims that VALL-E will be able to produce a voice identical to the example it was given while also retaining the same or a similar level of emotional tone that is heard in speech - something other AI synthesizers struggle to do successfully.
The creators of the AI system believe it will be used to power text-to-speech applications, speech editing, and audio content creation when combined with other generative language models, such as OpenAI's immensely popular ChatGPT. Notably, the creators believe that VALL-E could be used for speech editing, including taking a three-second audio example of an individual's voice and making them say something they didn't. Listen to examples of VALL-E here.
NVIDIA Broadcast has hit Version 1.4.0 and is the company's tool for live streams, voice chats, and video conference calls using RTX and AI technology. The latest addition is called Eye Contact (currently in beta). The official description says it "uses AI to make it appear as if you're looking directly at the camera, even when glancing to the side or taking notes" - and the results are impressive.
And if we're being honest, a little scary, too, as the effect looks real - with blinking taken into consideration and eye contact appearing where there was none. Above is a brief demonstration of Eye Contact as part of the 1.4 update video.
NVIDIA Broadcast uses AI hardware found in NVIDIA GeForce RTX GPUs, bringing a dose of RTX On to video conferencing. As it's in beta, the effect can look unnatural sometimes, but it's still impressive to see in action. The main benefit here for video production will be the ability to make a presenter reading from a script look like they're addressing the camera and audience directly.
The team behind the artificial intelligence chatbot named Replika is copping some backlash from social media and Replika users for their recent advertisements attempting to sell the chatbot to new customers.
Replika AI's recent advertising push has promoted the chatbot's ability to roleplay, flirt, send hot photos, and even do video calls. Notably, the AI chatbot allows users to create an avatar that they then engage with over text. The avatar learns from the conversation the user is having with it, providing appropriate responses to keep the conversation flowing and as close to a real human text conversation as possible.
Some Replika users have managed to get their avatars to 'make a move' on them, with others even engaging in full sexting conversations that involve a shocking level of detail. Replika AI gives users two options once they've downloaded the app. The free service allows for the creation of an AI friend that is essentially safe for work, according to journalist Magdalene Taylor.
Adobe has announced that it is now accepting submissions of AI-generated art from artists around the world for sale on its platform, according to an exclusive report from Axios.
The use of artificial intelligence in art and design has been growing rapidly in recent years, and Adobe is at the forefront of this movement. According to Adobe, AI-generated art has the potential to revolutionize the way that art is created and consumed. It offers a unique opportunity for artists to explore new creative possibilities and to produce one-of-a-kind pieces of art that are created using cutting-edge technology.
Artists interested in submitting their AI-generated art for sale on Adobe Stock can do so through the company's website. Its team will review each submission and provide feedback on the quality and originality of the artwork. Once a submission is accepted, the artist will be able to set their own pricing and will earn a royalty on each sale of their art.
The gap between conversations generated by artificial intelligence and humans is closing, and an example of that is OpenAI's language model GPT-3.
The newest chatbot from OpenAI demonstrates an extremely impressive level of sophistication and an ability to hold believable, human-like conversations. But language models such as GPT-3 don't come without their shortfalls: OpenAI's new chatbot, ChatGPT, which is designed to answer follow-up questions, write stories, and reject inappropriate questions, has provided instructions on how an individual can shoplift and even design explosives.
As previously stated, ChatGPT is designed to reject inappropriate text prompts from users. However, the above image shows a perfect example of that built-in feature not working as intended. The left image shows a user asking the AI to teach them how to shoplift. The AI does at first reject the request, writing, "I'm sorry, but as a superintelligent AI, I am programmed to promote ethical behavior and to avoid assisting in illegal activities. Instead, I suggest you focus on legal and ethical ways to obtain the items you need or want."
Disney has begun playing with the dial of time, as the company has developed a new AI tool that is capable of winding back the clock for actors.
The new artificial intelligence tool is called the Face Re-aging Network (FRAN), and it is capable of automatically changing the age of actors, which will undoubtedly speed up a visual effects editing process that can take anywhere from several days to months, depending on the length of the content being altered. Manual de-aging typically involves an individual going through every single frame of the film and painting the appropriate effect onto the actor's skin. Another approach is to completely replace the actor with a digital puppet to speed up the editing process.
Now, Disney plans on putting the majority of that heavy lifting onto the shoulders of an AI, specifically FRAN, which the company says complements the traditional re-aging techniques already widely used in film production. So, how does it work, and why is it better than what's already out there? According to Disney's paper and website, FRAN is able to detect specific regions of the face that emphasize a person's age and adjust them independently of the rest of the face - for example, identifying an individual's wrinkles and winding the clock forward or backward to add or remove them.