Google employees have signed a petition opposing the company’s involvement in a Pentagon AI program. More than 3,100 employees have signed, asking CEO Sundar Pichai to pull Google out of the project, which uses artificial intelligence to analyze video and improve drone targeting. The letter also urged Pichai to establish and enforce a policy that would keep the company and its subsidiaries from ever building “warfare technology.”
Such a policy could cover a wide range of activities, including simply giving government agencies access to Google’s technology. Last month, Gizmodo broke the news that the company was providing TensorFlow AI programming kits to the Pentagon’s Project Maven, and sources say employees were outraged that the government would use the technology to improve drone operations.
While Google has described its involvement in the Pentagon project as “non-offensive,” the letter to Pichai argues that the involvement alone could hurt the company’s brand and what it stands for. That is why employees are urging the company to cut ties with the project altogether.
“Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust,” the internally circulated petition reads, according to The New York Times. “The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.”
We reached out to Google for comment; here’s what we heard back:
“An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations – with employees and outside experts – are hugely important and beneficial.
Maven is a well publicized DoD project and Google is working on one part of it – specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
Any military use of machine learning naturally raises valid concerns. We’re actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies.”