With the emergence of AI-powered tools, genuine concerns have arisen about their misuse for nefarious purposes, such as impersonating public figures to sell a product or push a specific message online.
These AI "deepfake" videos, AI-generated content designed to impersonate an individual, are a real problem that social media platforms will need to tackle. Now YouTube has announced it is upping its defences against AI deepfakes by expanding its likeness detection technology.
YouTube has outlined in a new blog post that creators in the YouTube Partner Program can now submit a short video of themselves along with a government ID to teach the system what they look like.
That way, when a video is uploaded to YouTube, the system scans the content and attempts to detect whether the creator's likeness has been mimicked by AI. If the system detects an AI-generated impersonation, the impersonated creator can review the content and request its removal.
If a removal request is submitted, the content is evaluated against YouTube's existing privacy and moderation guidelines, meaning some deepfake content will remain on the platform if it qualifies as parody, satire, or legitimate commentary, even when it depicts a public figure.