
Google has removed its previous commitment not to use artificial intelligence (AI) for weapons or surveillance, revising its AI principles to prioritize serving government and national security clients.
The tech giant introduced the original policy in 2018, pledging not to pursue AI applications “likely to cause overall harm.” In dropping that pledge, Google cited the need for companies to develop AI that protects people and supports national security.
The updated AI principles page includes provisions for human oversight and testing intended to ensure that AI technology is used in line with international law and human rights. Google’s head of AI, Demis Hassabis, and its senior vice president for technology and society, James Manyika, wrote that the company wants to lead in AI development and to serve government and national security clients.
The move marks a shift in Google’s stance and echoes a growing view within the tech industry that companies should prioritize serving U.S. national interests. It also follows other tech giants’ retreat from commitments to diversity and workplace equality.
Google’s policy change has sparked concerns that the company is prioritizing profit over ethics, particularly where AI is developed for military purposes. It also comes as Google faces intensifying competition in the AI sector, notably from Chinese start-ups.