Google’s Pentagon deal blindsided its own AI researchers
Some employees are speaking out over the agreement allowing “all lawful use” of Google’s AI technologies

On Monday, Google signed a deal letting the Pentagon use its AI models on classified work, for “any lawful governmental purpose” — the same terms OpenAI and xAI agreed to earlier this year. Anthropic refused in February, sparking a supply chain risk designation and a months-long legal battle.
The announcement came as a complete surprise to Google employees, especially since, according to one DeepMind researcher, senior management had repeatedly insisted that Google wouldn’t cave to the Pentagon’s demands and had urged employees to trust that leadership would make the right call.
Many employees have spent months begging company leadership to follow Anthropic’s lead. Over 100 urged Jeff Dean, chief scientist at DeepMind and Google Research, to refuse any military deal that crosses basic red lines around domestic mass surveillance and autonomous weapons. Earlier this week, more than 600 Google employees signed a letter to CEO Sundar Pichai protesting the company’s ongoing negotiations with the Department of Defense.
But Google didn’t even alert its workers when the deal was signed — they had to find out by passing news stories around group chats. The closest thing to an announcement was an innocuous note from Kent Walker, Google’s president of global affairs, buried in the middle of a standard internal newsletter sent on Monday. The note said the company “strongly support[s] the consensus that has emerged in the field regarding lawful use of AI,” but didn’t mention that a deal had been signed.
After The Information broke the news, Pichai and Dean remained silent — though both tweeted to celebrate Google Translate’s 20th anniversary the same day.
Some AI researchers did publicly express their deep frustration with the contract. While it technically states that Google’s AI models are “not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control,” it also explicitly “does not confer any right to control or veto lawful Government operational decision-making.”
The wording suggests that Google has little, if any, ability to stop the Pentagon from using its AI tools for the most controversial activities that lab employees, and many more outside AI, strongly object to. As Institute for Law and AI senior researcher Charlie Bullock put it on X: “‘Should not be used for’ is not the same as ‘shall not’ or ‘will not’ be used for. ‘Should not’ imposes no enforceable obligation on the Pentagon.”
He added: “While OpenAI claimed that they had technical safeguards that would prevent their models from being used to cross red lines, Google’s contract appears to obligate Google to remove any technical safeguards that are preventing DoW from accomplishing some lawful purpose. Domestic mass surveillance and autonomous targeting can both be lawful under some circumstances.”
Some DeepMind employees have spoken out publicly. “The contract includes some meaningless weasel words to allow for PR spin, but it seems so blatantly stupid, readers should feel insulted by it (I do),” tweeted Andreas Kirsch, a research scientist at DeepMind. In the words of another research scientist, Alex Turner: “If OpenAI offered a fig leaf, Google said ‘imagine we offered a fig leaf.’”
The lack of comment from Google management comes even though some have been publicly critical of exactly this kind of deal. At the end of February in the midst of Anthropic’s Pentagon drama, Dean tweeted: “Mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression. Surveillance systems are prone to misuse for political or discriminatory purposes.”
Google has a mixed track record of responding to employee outrage at how its products are deployed. In 2018, workers successfully convinced the company to pull out of Project Maven, a program that used AI to analyze and act on drone surveillance footage. This work was eventually picked up by Palantir and other contractors — including, until recently, Anthropic.
But in the years since, being “woke” fell out of vogue, and, like many large corporations, Google began cracking down on activism. When employees protested in 2024 against Project Nimbus, a $1.2 billion contract supporting Israeli surveillance and military operations in Gaza and the West Bank, over two dozen of them were unapologetically fired.
This latest deal with the Pentagon provides another test of how much employees at the company, and in particular those working at AI labs, can influence how their work is put to use.
“I feel sorry for our comms teams. There are no excuses to explain this away,” Kirsch tweeted. “I do not understand how this is ‘doing the right thing,’ and I think this violates ‘don’t be evil’ quite clearly on many levels. I personally feel incredibly ashamed right now to be Senior Research Scientist at Google DeepMind and I wonder how I’m supposed to do my work today.”


