YouTube has announced on its blog a new rule designed to protect viewers from being deceived by creators who pass off content as authentic when it was actually made with the help of artificial intelligence-powered tools.
With the unveiling of OpenAI's Sora, a tool designed to create photorealistic video from text prompts, and the ongoing popularity of ChatGPT, it isn't a stretch to say that AI-powered tools such as Sora will see widespread use once they are fully developed and released.
Some of the example videos provided by Sora creator OpenAI featured tell-tale signs of AI generation, but at a glance, even to an untrained eye, the vast majority wouldn't raise alarm bells as synthetic, further blurring the line between what's real and what's fake on the internet.
To combat this, YouTube will now require creators to disclose to viewers "when realistic content - content a viewer could easily mistake for a real person, place, or event - is made with altered or synthetic media, including generative AI."
These disclosures will appear as labels either on the video player itself or within the video's description. YouTube states that it won't require creators to label content that is clearly unrealistic, such as animation. Additionally, the label won't be required when generative AI was used only for production assistance.
Some examples of content that requires disclosure include:
- Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another's or synthetically generating a person's voice to narrate a video.
- Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than in reality.
- Generating realistic scenes: Showing a realistic depiction of major fictional events, like a tornado moving toward a real town.