Google has altered its AI principles, changing its internal guidelines on what it considers acceptable uses for the AI tools it designs and deploys.

The change was spotted by The Washington Post, which reports the search engine quietly made significant revisions to its AI principles, first published in 2018. Previously, Google stated it would not "design or deploy" AI tools intended for use in weapons or surveillance. The company now appears open to its AI being used in both areas: the new guidelines drop those pledges and replace them with much vaguer promises.
The new guidelines include a section titled "responsible development and deployment," in which Google pledges to implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." The new language is far broader and vaguer than the company's previous pledge, especially considering how specific the earlier commitment was:
Google will not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
When asked about the changes, Google pointed to a recent blog post in which Google DeepMind CEO Demis Hassabis and James Manyika, senior vice president of research, labs, technology and society at Google, wrote that the emergence of AI as a "general-purpose technology" warranted a change to Google's policy.
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," wrote Hassabis and Manyika
Adding, "Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and stay consistent with widely accepted principles of international law and human rights - always evaluating specific work by carefully assessing whether the benefits substantially outweigh potential risks."