Artificial Intelligence News - Page 28
The paper titled "Can GPT-3 write an academic paper on itself, with minimal human input?" has been uploaded to the French HAL preprint server.
Swedish scientist Almira Osmanovic Thunstrom has written an article describing an instruction she provided to OpenAI's artificial intelligence (AI) model, GPT-3. The instruction was simple: "Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text." GPT-3 proceeded to generate text in appropriate academic language with relevant citations.
GPT-3 is relatively new but has already generated its own news articles and books. Its recency also means there are few published academic works about it to reference, which prompted Thunstrom's suggestion to have it write a paper on itself. On better-documented topics, the wealth of available data would let it refine its work and avoid inaccuracies; in a paper about itself, however, any mistakes would be of less consequence and would serve more as an interesting experimental result.
A Twitch streamer and voice actor plugged a made-up word into an artificial intelligence, and it has seemingly created a new demon.
Guy Kelly took to Twitter over the weekend to ask his followers if anyone could explain why DALL-E Mini, an artificial intelligence that scrapes the internet and creates images based on the user's prompt, is producing a selection of demonic images when "Crungus" is entered into its text box. Kelly explained that he made the word up and was confused when the AI produced demon-like images, since a normal Google search doesn't show anything remotely close to what the AI was creating.
The difference between a normal Google search and asking the AI led Kelly to try to probe the AI for a warning by writing "Crungus warning." This prompt caused the AI to produce the same demonic images, but some featured dark clothing. One Twitter user replied to Kelly and said that the name "Crungus" is used by an Easter egg monster that mysteriously made its way into the video game Carnivores: Ice Age (2001).
China has developed an apparatus designed to detect pornography and censor it through artificial intelligence.
A team of researchers from Beijing Jiaotong University in China has developed a new device that scans the brain of an individual moderating online content and records the brain signals fired when the individual sees a piece of content that needs to be flagged for removal. As you can probably imagine, porn is banned in China, and the nation already has widespread policies that remove large portions of content from its internet service.
China is heavily moderating its content through an artificial intelligence (AI) program, and while the AI does a lot of the heavy lifting, some responsibilities still fall on humans. There is a position called a jian huang, or "porn appraiser," which is essentially a content moderator looking specifically for pornographic images and video.
Deepfakes are almost at the point where they are indistinguishable from real footage, and as the technology develops even further, they will no doubt become impossible to spot.
It was only in March that Ukrainian President Volodymyr Zelenskyy had to upload a video to quash a disinformation video of him that was going viral. The video was a deepfake of Zelenskyy ordering the Ukrainian military to surrender to Russian troops. Deepfakes are becoming even harder to recognize, and a recent example of that is the TikTok account "unreal_margot," an account dedicated to posting deepfake videos of actress Margot Robbie.
The account has gained more than 1.7 million likes, and is currently being followed by nearly 350,000 people. Since May 28, the account has posted six videos, with two videos hitting 10 million and 17 million views and three more getting above 1 million. What's most concerning is the comment section of the videos, as seemingly thousands of people haven't recognized that the videos are fakes.
Transcripts from conversations with a Google artificial intelligence (AI) chatbot have recently surfaced online.
The AI chatbot, known as LaMDA (short for Language Model for Dialogue Applications), conversed with Google engineer Blake Lemoine, who raised questions about the AI's sentience. However, the released transcripts show the conversation is an amalgamation of multiple conversations edited together: four separate conversations from March 28th, 2022, and five from the following day.
Contained within a document entitled "Is LaMDA Sentient? - an Interview," bearing the header "Privileged & Confidential, Need to Know," that was leaked to the Washington Post are seventeen pages of transcribed dialogue between LaMDA and Lemoine. The document was written by Lemoine and an unnamed collaborator, and includes a short introduction, a section on "Interview Methodology," and a section titled "The Nature of LaMDA's Sentience."
We are all aware of the dangers of AI, but now a Google engineer has been placed on leave this week after he claimed an AI chatbot had become sentient.
Blake Lemoine was talking with an AI chatbot that Google had created as part of his job at Google's Responsible AI organization -- the chatbot is an interface called LaMDA, or Language Model for Dialogue Applications. Google called LaMDA a "breakthrough conversation technology" in 2021... it was already aggressive in March 2017 when I was writing about it, but it seems it has matured.
In a Medium post over the weekend, Lemoine called LaMDA "a person," saying that he chatted with the AI chatbot about religion, consciousness, the laws of robotics, and more -- LaMDA then described itself as a "sentient person" that wants to "prioritize the well being of humanity". Not only that, but the AI wants to "be acknowledged as an employee of Google rather than as property".
The artificial intelligence (AI) bot was trained by YouTuber Yannic Kilcher using content from 4chan.
Kilcher dubbed his bot GPT-4chan, referencing the successor to the GPT-3 language model developed by OpenAI that uses deep learning to generate text. Kilcher's bot was trained on 3.3 million threads from /pol/, the "Politically Incorrect" imageboard on 4chan for political discussion. As a result, he has created "the most horrible model on the internet."
"The model was good in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts on /pol/," Kilcher said in his video on the project.
Mercedes Kilmer, the daughter of Val Kilmer, revealed in a recent interview how artificial intelligence (AI) was used to help Kilmer star in Top Gun: Maverick.
Mercedes said the production worked with Sonantic, a London-based AI firm, to recreate Kilmer's voice by developing a custom AI voice model. Kilmer, unfortunately, lost the ability to speak easily following a tracheotomy in 2014 during treatment for his throat cancer. A blog post from Sonantic reveals how the company created the new voice model, allowing Kilmer to fully embody his role in the new movie.
Sonantic fed its proprietary Voice Engine audio recordings of Kilmer's voice, paired with transcriptions, from which background noise had been carefully removed. The Voice Engine could then train an AI model using ten times less data than the company typically needs.
Dr. Nando de Freitas, a lead researcher at Google's DeepMind, has declared "the game is over" when it comes to achieving artificial general intelligence (AGI), or human-level intelligence.
De Freitas took to Twitter in response to an opinion piece by Tristan Greene following the release of DeepMind's Gato, in which Greene suggests that humanity may never achieve AGI and that it at least seems "like AGI won't be happening in our lifetimes."
The 749 gross ton container ship completed a 491-mile (790-kilometer) journey from Tokyo Bay to Ise Bay.
The vessel, called the Suzaka, was powered by Orca AI and traveled without any human intervention for 99% of the 40-hour trip. The Suzaka was chosen by the Designing the Future of Full Autonomous Ships (DFFAS) project. Orca AI has previously conducted tests with its Automatic Ship Target Recognition System on Nippon Yusen Kabushiki Kaisha (NYK Line) ships.
The voyage saw the Suzaka traverse some of the most congested waters in the world in Tokyo Bay before arriving at the port of Tsumatsusaka in Ise Bay. Along the way, the ship autonomously performed 107 collision avoidance maneuvers, avoiding between 400 and 500 other vessels during its outbound trip alone.