## Google Updates AI Ethics Policy, Drops Explicit Ban on Weapons and Surveillance

**Mountain View, CA** – Google has revised its AI principles, removing a previous commitment to avoid developing AI for weapons or surveillance that violates international norms. The updated policy, announced Tuesday, states that Google will pursue AI responsibly, aligning with “widely accepted principles of international law and human rights.” This change marks a significant shift from the company’s 2018 stance, which explicitly prohibited AI applications deemed likely to cause harm, including those used for weapons and intrusive surveillance.

The decision follows years of internal debate and employee activism stemming from Google’s involvement in the Pentagon’s Project Maven. After significant employee pushback, Google withdrew from the project and later declined to compete for a large Department of Defense cloud computing contract, citing concerns about alignment with its AI principles.

Google DeepMind chief Demis Hassabis and James Manyika, Google’s senior vice president for research, labs, technology and society, framed the updated policy as a commitment to responsible AI development, arguing that democracies should lead in AI development, guided by core values. They highlighted the need for collaboration among companies, governments, and organizations to create AI that benefits society and supports national security.

The revised policy comes amid a broader shift in the regulatory landscape for AI. The recent rescission of a Biden administration executive order that mandated safety testing disclosures for AI technologies may have contributed to Google’s decision. The company has not publicly explained why the explicit prohibitions were removed. Critics are likely to argue that dropping them opens the door to potentially harmful applications of AI.