Google won't release its new AI because it makes gore, porn, and racism

Google has quietly said that it won't be releasing its new video-generating artificial intelligence system because it produces gore, porn, and racism.

Google is working on an artificial intelligence (AI) system designed to convert text into videos. The system is called Imagen Video.

We have previously seen Google working on various AI systems designed to produce images from text prompts; its generative 3D system, Dream Fields, originally unveiled in 2021, is one of them. Now we are beginning to hear about Imagen Video, and perhaps for all the wrong reasons: in a newly released research paper, Google writes that Imagen Video won't be released until several issues are remedied.

In the "Limitations and Societal Impact" section of the paper, Google outlines that releasing AI models such as Imagen Video have societal impacts, both positive and negative, as the AI system is an amplifier for human creativity. However, Google explains that these generative AI models will be misused by individuals to generate fake, hateful, explicit, or harmful content.

Aware of these pitfalls, Google says it has taken steps to minimize those risks: during internal trials, it applied both text prompt filtering and output video content filtering to stop the AI from producing harmful content.
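
Google doesn't describe how these filters work, but the two-stage gate it outlines (screen the text prompt, then screen the generated video) can be sketched in broad strokes. Below is a minimal Python illustration under those assumptions; the blocklist, the model stub, and the classifier stub are all hypothetical stand-ins, not Imagen Video's actual code.

```python
# Illustrative sketch only: Google has not published Imagen Video's filtering
# code, so everything here is a hypothetical stand-in for the real system.

BLOCKED_TERMS = {"gore", "explicit", "hateful"}  # hypothetical prompt blocklist


def prompt_is_allowed(prompt: str) -> bool:
    """Input filter: reject prompts containing any blocked term (substring match)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


def run_model(prompt: str) -> str:
    """Stand-in for the text-to-video model itself."""
    return f"<video generated from {prompt!r}>"


def video_is_harmful(video: str) -> bool:
    """Stand-in for a learned output-content classifier; always passes here."""
    return False


def generate_video(prompt: str) -> str | None:
    """Gate generation behind both an input filter and an output filter."""
    if not prompt_is_allowed(prompt):
        return None  # text prompt filtering: refuse the request outright
    video = run_model(prompt)
    if video_is_harmful(video):
        return None  # video content filtering: suppress a harmful result
    return video


print(generate_video("a corgi surfing a wave"))    # passes both filters
print(generate_video("a gore-filled battle scene"))  # rejected by the prompt filter
```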

Unfortunately, it's not as simple as telling the AI to stop producing content people may find offensive. Google states that the system was trained on "problematic data," and while internal testing indicates that much of the problematic content has been filtered out, the model still produces content that some may find stereotypical, socially biased, or worse. Because those social biases and stereotypes remain within the AI, Google has decided not to release the Imagen Video model or its source code until the issues are mitigated.

Google is rightfully concerned about the power of Imagen Video, and its admission throughout the research paper that it is struggling to stop the AI from producing harmful content underlines a simple but important point about AI systems trained on large pools of data.

While creating an artificial intelligence system capable of producing video content from text prompts is an enormous achievement, the impact that system's output has on society must be taken into account, and that is exactly what Google appears to be doing.