Should it, shouldn’t it, Google debates selling AI to military
To win in the business of cloud computing, the company tiptoes into the business of war. Some staff fear it’s a first step toward autonomous killing machines

Last July, 13 US military commanders and technology executives met at the Pentagon’s Silicon Valley outpost, 2 miles from Google headquarters. It was the second meeting of an advisory board set up in 2016 to counsel the military on ways to apply technology to the battlefield. Milo Medin, a Google vice president, turned the conversation to using artificial intelligence in war games. Eric Schmidt, Google’s former boss, proposed using that tactic to map out strategies for standoffs with China over the next 20 years. A few months later, the Defense Department hired Google’s cloud division to work on Project Maven, a sweeping effort to enhance its surveillance drones with technology that helps machines think and see.

The pact could generate millions in revenue for Alphabet Inc’s internet giant. But inside a company whose employees largely reflect the liberal sensibilities of the San Francisco Bay Area, the contract is about as popular as President Donald Trump. Not since 2010, when Google retreated from China after clashing with state censors, has an issue so roiled the rank and file. Almost 4,000 Google employees, out of an Alphabet total of 85,000, signed a letter asking Google CEO Sundar Pichai to nix the Project Maven contract and halt all work in “the business of war”.

The petition cites Google’s history of avoiding military work and its famous “Don’t be evil” slogan. One of Alphabet’s AI research labs has even distanced itself from the project. Employees against the deal see it as an unacceptable link with a US administration many oppose and an unnerving first step toward autonomous killing machines.

The internal backlash, which coincides with a broader outcry over how Silicon Valley uses data and technology, has prompted Pichai to act. He and his lieutenants are drafting ethical principles to guide the deployment of Google’s powerful AI tech, according to people familiar with the plans. That will shape its future work. Google is one of several companies vying for a Pentagon cloud contract worth at least $10 billion. A Google spokesman declined to say whether that has changed in light of the internal strife over military work.

Pichai’s challenge is to find a way of reconciling Google’s dovish roots with its future. Having spent more than a decade developing the industry’s most formidable arsenal of AI research and abilities, Google is keen to wed those advances to its fast-growing cloud-computing business. Rivals are rushing to cut deals with the government, which spends billions of dollars a year on all things cloud. No government entity spends more on such technology than the military. Medin and Alphabet director Schmidt, who both sit on the Pentagon’s Defense Innovation Board, have pushed Google to work with the government on counter-terrorism, cybersecurity, telecommunications and more. To dominate the cloud business and fulfill Pichai’s dream of becoming an “AI-first company”, Google will find it hard to avoid the business of war.

Inside the company, there is no greater advocate of working with the government than Google Cloud chief Diane Greene. In a March interview, she defended the Pentagon partnership and said it is wrong to characterize Project Maven as a turning point. “Google’s been working with the government for a long time,” she said.

The Pentagon created Project Maven about a year ago to analyse mounds of surveillance data. Greene said her division won only a “tiny piece” of the contract, without providing specifics. She described Google’s role in benign terms: scanning drone footage for landmines, say, and then flagging them to military personnel. “Saving lives kind of things,” Greene said. The software isn’t used to identify targets or to make any attack decisions, Google says.

Many employees deem her rationalisations unpersuasive. Even members of the AI team have voiced objections, saying they fear working with the Pentagon will damage relations with consumers and Google’s ability to recruit. At the company’s I/O developer conference last week, Greene told Bloomberg News the issue had absorbed much of her time over the last three months. 

Googlers’ discomfort with using AI in warfare is longstanding. AI chief Jeff Dean revealed at the I/O conference that he signed an open letter back in 2015 opposing the use of AI in autonomous weapons. Providing the military with Gmail, which has AI capabilities, is fine, but it gets more complex in other cases, Dean said. “Obviously there’s a continuum of decisions we want to make as a company,” he said. Last year, several executives—including Demis Hassabis and Mustafa Suleyman, who run Alphabet’s DeepMind AI lab, and famed AI researcher Geoffrey Hinton—signed a letter to the United Nations outlining their concerns. “Lethal autonomous weapons … [will] permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter reads. “We do not have long to act.”