Microsoft has publicly denounced the use of AI-generation tools to create deepfake images that are then used to commit crimes such as fraud, abuse, and manipulation.

Unfortunately, the demographics most often victimized by this form of abuse are children and the elderly. According to a recent blog post by Microsoft Vice Chair and President Brad Smith, the US government needs to step in and implement new regulations that hold the creators of malicious deepfake content accountable for their actions.
Smith explains that AI-generated deepfakes are realistic and extremely easy for anyone to make. Because of that accessibility, the technology, though built to support research and assist people's workflows and projects, is increasingly being used to commit fraud, abuse, and other crimes. Smith not only called on regulators to pass new laws protecting victims of AI deepfakes but also urged the private sector to acknowledge its responsibility to "prevent the misuse of AI."
"The greatest risk is not that the world will do too much to solve these problems. It's that the world will do too little," wrote Smith.
"AI-generated deepfakes are realistic, easy for nearly anyone to make, and increasingly being used for fraud, abuse, and manipulation - especially to target kids and seniors. While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud. In short, we need new laws to help stop bad actors from using deepfakes to defraud seniors or abuse children," Smith continued.