About a month ago, news surfaced that Google was working with the United States Department of Defense on drone software called “Project Maven.” The project applied Google’s image-recognition techniques to the millions of hours of drone footage collected by the military, with the goal of identifying people and objects of interest. At the time, some Google employees were reportedly outraged at the news, and now The New York Times reports the situation has escalated to a formal letter addressed to Google CEO Sundar Pichai.
The letter, which The Times reports has “garnered more than 3,100 signatures,” comes right out in the first paragraph and demands the project be cancelled:
We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled and that Google draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.
The letter goes on to say that “building this technology to assist the US Government in military surveillance—and potentially lethal outcomes—is not acceptable” and that Maven will “irreparably damage Google’s brand and its ability to compete for talent.” The letter even invokes Google’s “Don’t Be Evil” motto.
A Google spokesperson sent the following response to the letter:
An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations—with employees and outside experts—are hugely important and beneficial.
Maven is a well-publicized DoD project, and Google is working on one part of it—specifically scoped to be for non-offensive purposes and using open-source object-recognition software available to any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
Any military use of machine learning naturally raises valid concerns. We’re actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine-learning technologies.
While the project is “specifically scoped to be for non-offensive purposes,” the employee letter takes issue with this assurance, saying that “the technology is being built for the military, and once it’s delivered, it could easily be used to assist in [lethal] tasks.” The project might not create an autonomous weapons system, but in many cases target identification is just the first step in some kind of offensive move toward that target. Many Google employees are clearly uncomfortable with any involvement in that process.