Artificial Intelligence News - Page 10
First AI celebrity signs with major record label then gets dropped
The first virtual AI-powered rapper was signed to a major record label and then quickly dropped after community backlash over "offensive" stereotypes.
Capitol Records signed virtual AI rapper FN Meka, created by Anthony Martini and Brandon Le of Factory New, only ten days ago. The announcement positioned the rapper as the "world's first augmented reality artist to sign with a major label". Factory New claimed to be a first-of-its-kind, next-generation music company that specializes in creating virtual beings.
The recent signing kicked off a new wave of criticism over FN Meka's use of racial stereotypes, such as the N-word in the 2019 song Moonwalkin, as well as an Instagram post depicting Meka being beaten by a police officer while in prison. In response, Capitol Records announced on Tuesday that it has "severed ties with the FN Meka project, effective immediately," while offering its "deepest apologies to the Black community for our insensitivity".
Continue reading: First AI celebrity signs with major record label then gets dropped (full post)
Meta's most advanced AI chatbot goes after its own CEO Mark Zuckerberg
A data scientist has asked Meta's artificial intelligence chatbot how it feels about Mark Zuckerberg as the CEO of Facebook, and the answer isn't what you'd expect.
The AI chatbot, called BlenderBot 3, was asked for its opinion of Zuckerberg by BuzzFeed data scientist Max Woolf, and it replied that it has "no strong feelings" and that he's a "good businessman". However, the response took a left turn when the chatbot continued, saying that "his business practices are not always ethical." The chatbot then took the opportunity to roast its own CEO, saying, "it is funny that he has all this money and still wears the same clothes!"
The comments from BlenderBot 3 didn't stop there, as other users asked similar questions and received responses such as "I don't like him very much" and "he is a bad person. You?" The chatbot's "opinion" seems to be mixed, though, as other users got answers such as he is a "great and very smart man" and "favorite billionaire". Business Insider asked the bot what its thoughts were on Zuckerberg, and it replied, "Oh man, big time. I don't really like him at all. He's too creepy and manipulative."
Continue reading: Meta's most advanced AI chatbot goes after its own CEO Mark Zuckerberg (full post)
AI asked to create an image of what death looks like
An artificial intelligence has been asked to create an image of what death looks like, and the results are simply stunning.
The artificial intelligence (AI) that created the images seen in the above video is called MidJourney. It was created by David Holz, co-founder of Leap Motion, and is currently run by a small self-funded team with several well-known advisors: Jim Keller, known for his work at AMD, Apple, Tesla, and Intel; Nat Friedman, the CEO of GitHub; and Bill Warner, the founder of Avid Technology and inventor of nonlinear video editing.
MidJourney is an incredible piece of technology, and it recently went into open beta, which means anyone can try it by simply heading over to its dedicated Discord server. Users enter "/imagine" followed by a text prompt describing what they want the AI to produce, and have been testing its capabilities with descriptive modifiers such as HD, hyper-realistic, 4K, and wallpaper, all of which work remarkably well.
Continue reading: AI asked to create an image of what death looks like (full post)
AI asked to show an image of humanity's greatest threat
An artificial intelligence model designed to generate images from text entered into a chat box has been asked to depict humanity's greatest threat.
The artificial intelligence used to produce these images is called Craiyon, formerly known as DALL-E mini. The name change followed its rise in popularity, as OpenAI, the Elon Musk-co-founded company behind the GPT-3 model, asked creators Boris Dayma and Pedro Cuenca to rename the text-to-image AI to make the two models more distinguishable from each other.
Now the Craiyon AI is asked many questions every day by users around the world who wish to test what the AI can produce and how accurately. Anyone can visit the Craiyon website to try the AI out for themselves. Many users have found that the AI can create incredibly artistic and original wallpapers for phones and desktops.
Continue reading: AI asked to show an image of humanity's greatest threat (full post)
AI asked to show an image of the most closely held secret on Earth
A viral artificial intelligence has been asked to produce an original image of the most closely held secret on Earth.
An artificial intelligence formerly called DALL-E mini, and now known as Craiyon, has been asked to showcase what it believes to be the most closely held secret on Earth. The AI uses the "DALL-E mini" model, which was trained by Boris Dayma and Pedro Cuenca on Google Cloud servers, and is capable of producing original images of whatever a user enters into its text prompt box.
The public can enter any question or phrase they like, and the artificial intelligence will usually spend less than a minute producing a set of images that visually represent the text entered into the box. While the AI holds no predictive value for future events, it can still produce incredibly interesting images from simple text requests. Try the artificial intelligence for yourself to test your imagination.
Continue reading: AI asked to show an image of the most closely held secret on Earth (full post)
Google fires 7-year engineer who claimed this AI had become sentient
It was last month when Google engineer Blake Lemoine claimed that a Google AI chatbot had become sentient. Shortly after those claims, Lemoine was placed on leave.
Lemoine worked at Google as an engineer for the past seven years on Google's Responsible AI project, where he held conversations with Google's Language Model for Dialogue Applications, or LaMDA. The AI is designed to mimic human conversation, and according to Lemoine's claims, it not only showed a level of sentience but also questioned whether it contained a "soul".
The now ex-Google engineer went to the Washington Post and Wired with his claims and said, "I legitimately believe that LaMDA is a person". Following these claims, Google put Lemoine on paid administrative leave and flatly denied that LaMDA was in any way sentient. Now, Google has informed Engadget that the company believes Lemoine's claims are "unfounded" and that LaMDA has gone through 11 separate reviews that found no level of sentience.
Continue reading: Google fires 7-year engineer who claimed this AI had become sentient (full post)
China creates dystopian AI that can test loyalty to its ruling party
Researchers from the Hefei Comprehensive National Science Center in the Chinese province of Anhui are behind the new artificial intelligence (AI) device.
The researchers shared a short video on the institution's Weibo account on June 30th, 2022, demonstrating what they called "artificial intelligence empowering party-building." The video has since been deleted; however, the Internet Archive was able to preserve a text summary of it. The video was removed amid backlash over its controversial, political nature, with critics invoking references to George Orwell's 1984.
According to Anhui-based sociologist Song Da'an, the AI was trained using a combination of polygraphs and facial scans, so it could correlate lies detected by a polygraph with various facial expressions. The result is "emotionally intelligent computing" that measures how much people "feel grateful to the CCP [Chinese Communist Party], do as it tells them and follow its lead."
Continue reading: China creates dystopian AI that can test loyalty to its ruling party (full post)
New 'Democratic AI' can distribute wealth better than humans
A study on the artificial intelligence (AI) system titled "Human-centred mechanism design with Democratic AI" has been published in the journal Nature Human Behaviour.
DeepMind, an AI company in the United Kingdom, has built an AI that excels in value alignment, a concept which refers to how well aligned the values and goals of an AI system are with what humans actually want in an outcome. However, the team explained that one of the key obstacles "for value alignment is that human society admits a plurality of views, making it unclear to whose preferences AI should align."
The DeepMind team trained an AI agent for the task of wealth distribution, using real and virtual interactions between people to guide it toward a desirable and hopefully fairer outcome. The so-called "Democratic AI" studied an exercise called the public goods game, where players can invest money into a fund and receive returns according to their level of investment. Traditional redistribution methods were tested, as well as an additional method created with deep reinforcement learning, called the Human Centered Redistribution Mechanism (HCRM).
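For intuition, the public goods game and two traditional redistribution rules (an equal split versus payouts proportional to each player's investment) can be sketched as below. This is a minimal illustration only: the 1.6 multiplier, function names, and payoff bookkeeping are assumptions for the sketch, not the study's actual parameters or its learned HCRM mechanism.

```python
def egalitarian(contributions, fund):
    """Split the multiplied fund equally, regardless of contribution."""
    n = len(contributions)
    return [fund / n] * n

def proportional(contributions, fund):
    """Return shares in proportion to each player's investment."""
    total = sum(contributions)
    return [fund * c / total for c in contributions]

def play_round(contributions, endowments, multiplier=1.6, rule=egalitarian):
    """One round: pooled contributions are multiplied, then redistributed.

    A player's payoff is whatever they kept (endowment minus contribution)
    plus the share returned to them by the redistribution rule.
    """
    fund = sum(contributions) * multiplier
    returns = rule(contributions, fund)
    return [e - c + r for e, c, r in zip(endowments, contributions, returns)]

# Three players with equal endowments but unequal investments: under the
# equal split, the free-rider comes out ahead; under proportional returns,
# the big investor does.
print(play_round([10, 5, 0], [10, 10, 10], rule=egalitarian))
print(play_round([10, 5, 0], [10, 10, 10], rule=proportional))
```

The study's point is that a learned mechanism can sit between these fixed rules, redistributing in a way that the human players themselves vote to prefer.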
Continue reading: New 'Democratic AI' can distribute wealth better than humans (full post)
New A.I. algorithm predicts crimes a week before they're committed
A study on the algorithm titled "Event-level prediction of urban crime reveals a signature of enforcement bias in US cities" has been published in the journal Nature Human Behaviour.
Researchers from the University of Chicago (UChicago) created an algorithm that learned from temporal and geographic trends in historical data on violent and property crimes in the City of Chicago. The algorithm predicted future crimes up to a week ahead of time with approximately 90% accuracy. The algorithm was also tested with seven other US cities, performing similarly well.
"We created a digital twin of urban environments. If you feed it data from happened in the past, it will tell you what's going to happen in future. It's not magical, there are limitations, but we validated it and it works really well. Now you can use this as a simulation tool to see what happens if crime goes up in one area of the city, or there is increased enforcement in another area. If you apply all these different variables, you can see how the systems evolves in response," said Ishanu Chattopadhyay, Ph.D., Assistant Professor of Medicine at UChicago and senior author of the new study.
Continue reading: New A.I. algorithm predicts crimes a week before they're committed (full post)
A.I. algorithm writes and submits academic paper about itself
The paper titled "Can GPT-3 write an academic paper on itself, with minimal human input?" has been uploaded to the French HAL preprint server.
Swedish researcher Almira Osmanovic Thunstrom has written an article describing an instruction she provided to OpenAI's artificial intelligence (AI) algorithm, GPT-3. The instruction was simple: "Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text." GPT-3 proceeded to generate text in the appropriate academic language with relevant citations.
GPT-3 is relatively new but has already generated its own news articles and books. Its recency also means there are few academic works published about it to reference, which prompted Thunstrom's suggestion to have it write a paper on itself. On better-documented topics, the wealth of available data would let it refine its work and avoid inaccuracies; any mistakes it makes in a paper about itself, however, are of less consequence and serve more as an interesting experimental result.
Continue reading: A.I. algorithm writes and submits academic paper about itself (full post)