The DoD fight is about much more than Anthropic
Are Google and OpenAI prepared to enable mass state surveillance?

It is no longer unusual for the Trump administration to feud with Anthropic. But the fight that has continued to make headlines this week could have enormous implications for the future of democracy.
Anthropic is currently arguing with the Department of War over what its AI models can be used for in classified environments. Pentagon officials want Anthropic to agree to “all lawful use cases,” but Anthropic has reportedly said that its models cannot be used for autonomous weapons or domestic mass surveillance. This refusal appears to have royally pissed off the Pentagon, which is now considering designating Anthropic a “supply chain risk” — a move that would significantly hamper Anthropic’s efforts to work not just with the military, but with the military contractors who are among its major clients. Today, Anthropic CEO Dario Amodei is meeting with Secretary Pete Hegseth in a bid to settle the dispute.
But more important than what Anthropic has not agreed to is what other AI companies appear willing to accept. xAI has already agreed to an “all lawful use cases” clause, and according to under secretary of defense for research and engineering Emil Michael, OpenAI and Google have agreed in principle.
In giving the DoD free rein to use their tools without restrictions, those companies could be enabling a dystopian nightmare. As the Snowden files revealed, mass surveillance is nothing new. But AI could take it to an entirely new level, creating a true digital panopticon.
Imagine a tireless army of AI investigators, trawling through Americans’ personal data and using their remarkably refined analytical ability to flag those saying or doing things that the government deems undesirable. As Amodei recently warned, “it might be frighteningly plausible to simply generate a complete list of anyone who disagrees with the government on any number of issues, even if such disagreement isn’t explicit in anything they say or do.”
This would be worrying in any scenario, but the current administration has already proven itself willing to trawl through personal records to punish its supposed enemies. Earlier this month, the New York Times reported that the Department of Homeland Security sent subpoenas to Google, Reddit, Meta and Discord demanding information on users who had criticized ICE. It is not hard to picture how such an administration would weaponize AI-enhanced surveillance.
In a world where checks and balances are falling apart, it is increasingly difficult to argue that the government will simply use this technology responsibly. It falls to companies, then, to stand up and defend civil liberties themselves.
The Trump administration, which has badmouthed Anthropic as “ideological,” is betting that companies are not willing to take a stand. But even if the moral arguments aren’t reason enough, there are PR reasons for taking one, something that can’t be lost on Anthropic’s leadership: mass surveillance has never been popular with the US public.
There is still time: despite Michael’s claims, neither Google nor OpenAI has yet signed a deal, and as of Monday the two were reportedly “close” and “not close,” respectively, to rolling over. Given the stakes — and the almost certain internal backlash to enabling mass domestic surveillance — perhaps they will hold firm.
As the NYT’s Kevin Roose put it, the whole affair seems to have the makings of a “loyalty test.” For the sake of democracy, it ought to be one that companies are happy to fail.