OpenAI’s Pentagon red lines are a mirage
OpenAI claims its DoW deal prevents its models from being used for mass domestic surveillance. That appears to be misleading at best
On Friday — shortly after the Pentagon designated Anthropic a supply-chain risk and demanded that all military contractors cease working with it — OpenAI announced that it had agreed its own deal with the Department of War.
In its announcement, OpenAI claimed that it had the same red lines as Anthropic — no domestic mass surveillance and no lethal autonomous weapons — and that its contract with the Pentagon ensured that its models couldn’t be used for such purposes.
But as more information trickled out over the weekend, it increasingly seems like that might not be true. From what we currently know, the supposed protections in OpenAI’s DoW deal are likely to be ineffective at best — and, in some cases, they do not seem to exist at all.
To understand the OpenAI deal, we first have to understand why Anthropic didn’t sign a deal. According to reporting from The Atlantic and The New York Times, the biggest sticking point was domestic mass surveillance. Per the NYT:
[Emil] Michael, who was on a call with Anthropic executives [on Friday afternoon], said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said.
Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data. At that point, Mr. Michael asked to speak with Dr. Amodei, who was not on the call. Mr. Michael was told that Dr. Amodei was in a meeting. Shortly after, Mr. Hegseth said the talks were over.
This was a sticking point for Amodei for good reason. AI-powered mass surveillance could help the government build a true digital panopticon. And as Amodei has noted, “it would likely not be unconstitutional for the US government to conduct massively scaled recordings of all public conversations” — something that would not be of much use without AI, but with AI tools could be used “to create a picture of the attitude and loyalties of many or most citizens.”
Michael, the Under Secretary of Defense for Research and Engineering, essentially confirmed the NYT and Atlantic’s accounts in a tweet on Sunday. While calling Dario Amodei a liar, he said that Anthropic “wanted to stop DoW from using any *PUBLIC* database … When I called to discuss cutting off @DeptofWar from using publicly available information would hurt our military readiness, @DarioAmodei didn’t have the courage to answer.”
Hours later, OpenAI announced its own deal with the Pentagon — and on Monday, The Verge reported that “OpenAI’s deal is much softer than the one Anthropic was pushing for.”
Per The Verge:
If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it’s technically legal, then the US military can use OpenAI’s technology to carry it out. And over the past decades, the US government has stretched the definition of ‘technically legal’ to cover sweeping mass surveillance programs — and more.
At first, OpenAI executives insisted that the company's deal with the DoW would not enable domestic mass surveillance. Boaz Barak, who works on safety and alignment at the company, said “the DoW has not asked us to support collection or analysis of bulk data on Americans, such as geolocation data, web browsing data and personal financial information purchased from data brokers, and our agreement does not permit it.”
Katrina Mulligan, OpenAI’s head of national security partnerships, went further, claiming that “the Pentagon has no legal authority to do this” and that the “Department of War does not have any domestic surveillance authorities.”
This is simply not true. In 2021, the Defense Intelligence Agency — an agency within the Department of War — told Congress that it purchases bulk smartphone location data, and that it does not believe it needs a warrant to do so.
In 2024, the NSA — also a DoW agency, despite Mulligan’s initial and brief claim to the contrary — confirmed that it does the same with browsing data. And in 2020, Sen. Ron Wyden said that “lawyers for the data broker X-Mode Social confirmed that the company is selling data collected from phones in the United States to US military customers, via defense contractors.”
OpenAI has at times claimed that various safeguards in its contract prevent its models from being used for this widely practiced activity, pointing to a clause in its contract with the DoW: “The AI System shall not be used for unconstrained monitoring of US persons’ private information as consistent with these authorities.”
There are several problems with this clause, however. The DIA argues that it can collect commercially available data without a warrant — does that mean it is not “private information”? “Unconstrained” is also very vague — any constraint at all would arguably mean this clause does not apply. “As consistent with these authorities” simply ties the prohibition to existing legal frameworks — precisely the ones the government uses to justify commercial data purchase and analysis.
And while Mulligan told me (without evidence, as of yet) that the NSA’s work under Title 50, the intelligence statute, is not included in the contract, it is unclear whether that applies to other DoW agencies — or if it would stop the DIA from conducting domestic surveillance under Title 10, the military statute, as it currently does.
Mulligan has since conceded as much: “We can’t protect against a government agency buying commercially available data sets, but our contract incorporates a prohibition on mass domestic surveillance as a binding condition of use,” she said. She did not, however, respond when pressed for details on where the contract actually says this.
When Barak was asked a similar question, he too could not point to a specific clause. “Our legal and policy teams have worked with the DoW and this interpretation [that bulk data collection and analysis is prohibited] is shared between both sides. They will provide more details on the issue of commercially acquired datasets in the coming days.”
Given its seemingly shaky grasp of the law, OpenAI’s strongest argument may be that it has other ways to enforce its red lines. It says it has “full discretion” over the system’s safety stack, and can therefore enshrine prohibitions at the system level rather than in the contract. Its agreement to have forward-deployed engineers working with the Pentagon, meanwhile, arguably gives it more visibility into how the government is using — or misusing — its technology than Anthropic would have had.
But this argument, too, is flawed. For one thing, the Pentagon would likely have little trouble jailbreaking OpenAI’s guardrails: in a report last year, researchers at the UK’s AI Security Institute said they could find universal jailbreaks for all AI systems.
More importantly, OpenAI has agreed to an “all lawful use” clause, and it is unclear whether it is allowed to override this with its safety stack. As procurement law expert Jessica Tillipman wrote, there is a “tension at the heart of the agreement … If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework — language that has not been made public.”
And regardless of contracts, there is also the reality of hard power. If the Pentagon — which has proven itself willing to destroy companies that dare defy it — asks OpenAI to remove a safety guardrail, will OpenAI actually refuse?
Even setting aside whether OpenAI could resist Pentagon pressure, there’s the question of whether it would want to — given who’s shaping its national security posture. Those people have, as they themselves note, firsthand experience of the realities of the Pentagon’s surveillance practices: by her own account, Mulligan “managed the White House response to the Snowden disclosures and helped write the post-Snowden policies constraining surveillance during the Obama years.” One of OpenAI’s board members, meanwhile, was the director of the NSA between 2018 and 2024.
As Understanding AI editor Timothy Lee points out, “the Obama Administration’s view circa 2013 was that most of what Snowden revealed wasn’t illegal or improper. They played a lot of word games to downplay and justify what a lot of ordinary people considered intrusive mass surveillance programs.”
It is reasonable to worry, then, that OpenAI is now playing the same word games. In a statement to The Verge on Monday, an OpenAI spokesperson said that “Our agreement does not permit uses of our models for unconstrained monitoring of US persons’ private information, and all intelligence activities must comply with existing US law. In practical terms, this means the system cannot be used to collect or analyze Americans’ data in a bulk, open-ended, or generalized way.”
But as Sarah Shoker, who previously led OpenAI’s geopolitics team, told The Verge, “there are a lot of modifying words that are in the sentences that the [OpenAI] spokesperson gave … The use of the word ‘unconstrained,’ the use of the word ‘generalized,’ ‘open-ended’ manner — that’s not a complete prohibition.”
From all the details we have so far, we cannot be at all confident that OpenAI can or will actually prevent the Pentagon from using its models to conduct mass surveillance on Americans. We just have to take the company’s — and the government’s — word for it. History suggests we shouldn’t.