Trump’s surprisingly OK AI Action Plan
Transformer Weekly: Chip smuggling, location verification, and Anthropic’s lobbying
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
After months of reckless accelerationist rhetoric from the Trump administration, expectations for the AI Action Plan were rock bottom. Observers — myself included — were braced for a plan that wouldn’t even pay lip service to the potential risks from AI, let alone do anything about them.
So it came as a shock when the plan, released on Wednesday, turned out to be … fine?
I wasn’t the only one impressed. Almost everyone I’ve spoken to this week has expressed some pleasant surprise at what the White House put together.
Brad Carson of Americans for Responsible Innovation told me it was “cautiously promising.” Michael Kleinman of the Future of Life Institute said certain aspects were a “step in the right direction.” Brendan Steinhauser of the Alliance for Secure AI said he was “pretty happy” with a lot of it.
Why the cheer? A few sections from the 23-page plan drew the most praise.
The plan calls for the government to “invest in AI interpretability, control, and robustness breakthroughs,” recommending that such research be prioritized in the National AI R&D Strategic Plan.
It maintains CAISI’s role in setting standards and tasks it with improving the science of AI evals.
It endorses chip export controls, and explicitly mentions “location verification features” as a way to enforce them — a shoutout that Sen. Tom Cotton is already using to bolster his Chip Security Act.
A section on biosecurity recommends requiring that all institutions receiving federal funds have “robust nucleic acid sequence screening” — a positive step that caught a lot of people by surprise.
And while Trump did talk about the need for a unified federal regulatory system in his speech on Wednesday, neither he nor the plan proposed a full-on moratorium on state regulation.
The plan does tell federal agencies to “consider a state’s AI regulatory climate” when making AI-related funding decisions, though.
Even the most controversial element — the executive order targeting “woke” AI — appears to be less insane than its inflammatory framing suggests (though it still has the potential to be abused).
Impressively, the administration appears to have crafted something that both AI safety groups and Marc Andreessen can get behind — something that includes stuff about data center permitting and energy abundance while not completely ignoring the risks.
“The reason the Action Plan has been received so well is that it reflects an awareness of the reality that safety and innovation aren’t conflicting values,” LawAI’s Charlie Bullock told me.
Does that mean the plan is adequate? Of course not.
The document conspicuously avoids discussing artificial general intelligence, which Kleinman told me was a “missed opportunity” to explicitly address loss of control risks.
The ambition to build an “evaluations ecosystem” is all well and good, but it’s only useful if companies actually evaluate their models — which they are still under no obligation to do.
And a lot of the plan's value will depend on implementation details that remain opaque. We don’t know exactly how much will be invested in interpretability and robustness research, and it’s plausible this ends up being little more than lip service.
There is still, in other words, much to be done. The plan is “at best a start, but we need to go a lot further,” Kleinman told me. But compared to the regulatory apocalypse many feared, the Action Plan is surprisingly encouraging.
The discourse
Rep. Nancy Mace is AGI-pilled:
“Some estimates say singularity is like 1,000 days away, some say 2,000 days away, but it is rapidly approaching, either way.”
Ben Buchanan sharply criticized the Trump administration’s reversal on H20 export controls:
“Permitting these chip sales threatens American dominance in AI, undermines US tech companies and risks our national security — all in favor of one chipmaker’s near-term profits.”
An anonymous House Foreign Affairs Committee aide is worried about BIS’s ability to enforce export controls:
“The IT infrastructure at the Bureau of Industry and Security is a legitimate national security threat to the US.”
Keir Starmer talked about AGI:
“I think AGI is going to be pretty amazing … [it coming in 2029 is] quite ambitious … I do think it’ll be quicker than we think.”
He then went on to discuss a bunch of milquetoast potential benefits, all of which suggest the Prime Minister does not understand what AGI would actually mean.
Harry Law has a good piece on Peter Kyle’s similar comments from last week:
“If the UK government thinks AGI is coming within the next five years, is it behaving with the seriousness we should expect to prepare for its arrival? Of course not.”
The Economist has a great package on AGI and ASI this week:
“The tech bosses of Silicon Valley say … in just a few years artificial intelligence will be better than the average human being at all cognitive tasks. You do not need to put high odds on them being right to see that their claim needs thinking through. Were it to come true, the consequences would be as great as anything in the history of the world economy.”
See also this piece on the economy, and this one on the (potentially existential) risks.
Noah Smith wrote a corrective to the AI jobs discourse:
“Stop pretending you know what AI does to the economy.”
Elon continues to be Elon:
“At times, AI existential dread is overwhelming.”
And Demis Hassabis is worried, too:
“I don’t have a P-Doom number … what I would say is it’s definitely non-zero and it’s probably non-negligible.”
Policy
The House Foreign Affairs Committee is looking to soften the Chip Security Act, Punchbowl reported, ditching continuous location verification requirements in favor of periodic tracking.
Companies still aren’t happy though, with trade groups calling the act’s requirements “burdensome” in a letter this week.
Trump said he considered breaking up Nvidia to increase competition, but was told by aides that would be “very hard.”
Rep. Marjorie Taylor Greene criticized Trump’s AI Action Plan and EOs for ignoring data centers’ “massive” water usage.
The Future Caucus launched a National Task Force on State AI Policy, co-chaired by Rep. Doug Fiefia and Rep. Monique Priestley.
Gov. Gavin Newsom signaled that he’ll resist if the federal government withholds funds over California’s AI regulations.
Anthropic said it’ll sign the EU GPAI Code of Practice.
The European Commission released a template for AI providers to disclose training data.
A European Parliament study called for strict liability rules for damages caused by high-risk AI systems.
OpenAI signed a strategic partnership with the UK to collaborate on AI security research and explore UK infrastructure investments.
The deal will establish “a new technical information sharing programme” with UK AISI.
Peter Kyle said the UK is preparing a consultation on the AI Bill. I think I first heard about this consultation over a year ago.
The UK will take over leadership of the International Network of AI Safety Institutes in November, according to the head of South Korea’s AISI.
Influence
Q2 lobbying disclosures showed that Anthropic outspent OpenAI ($910,000 vs. $620,000). Both have vastly increased spending since last year.
James Burnham, a former DOGE lawyer, launched the AI Innovation Council to promote an “America First” AI policy agenda.
A TBIJ and Politico investigation — funded by the Tarbell Center for AI Journalism — revealed how Anduril is lobbying hard in the UK.
Industry
General-purpose AI systems from Google DeepMind and OpenAI both achieved gold in this year’s International Math Olympiad — a very impressive achievement that took a lot of people by surprise.
Meta’s reportedly hired three of the GDM researchers who worked on the IMO model.
At least $1bn worth of advanced Nvidia chips were smuggled to China in recent months, an FT investigation found.
Relatedly, Reuters reports that demand is surging for repair services of banned Nvidia chips in China.
Nvidia will reportedly struggle to increase H20 supply in the short term — meaning, as Tao Burga points out, that “more chips to China means fewer for the US.”
Stargate has “sharply scaled back its near-term plans,” according to the WSJ, with SoftBank and OpenAI reportedly struggling to agree on details.
OpenAI and Oracle announced a partnership to develop an additional 4.5GW of US data center capacity.
GPT-5 appears to be imminent. The Verge reports that it’s coming in early August, and people have spotted mentions of it in Microsoft’s Copilot app already. It’s reportedly quite good at coding.
Ruoming Pang, formerly head of Apple’s foundation models team, reportedly wanted to release an open-source AI model, but was overruled by Craig Federighi. Pang’s since left Apple for Meta.
In a leaked memo, Dario Amodei said Anthropic plans to seek investments from UAE and Qatar, saying that “‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”
Founders Fund and Dragoneer have reportedly each committed over $1bn to the second tranche of OpenAI’s $40bn fundraising round.
xAI is reportedly raising another $12bn.
Cognition is reportedly in talks to raise over $300mn at a $10bn valuation.
Amazon bought AI wearable company Bee.
Moves
Laphonza Butler, a lobbyist who was previously a senator for California, is now advising OpenAI. She previously worked with Chris Lehane at Airbnb.
Steven Adler joined the Roots of Progress Institute as a fellow.
Microsoft has reportedly poached more than 20 Google DeepMind employees in the past six months.
Western companies are reportedly hiring AI engineers in India due to a shortage of domestic talent.
xAI fired Michael Druggan after he posted that it’s “OK” if AI wipes out humans.
Best of the rest
Transformer found that Hugging Face was hosting tools to make deepfake porn of teenage celebrities.
Five billionaires pledged $1bn to fund NextLadder Ventures, which will partner with Anthropic to boost economic mobility for low-income Americans.
A new study found that AI models can transmit harmful traits through seemingly “meaningless” data.
FutureHouse researchers found that approximately 29% of chemistry and biology answers in the Humanity's Last Exam benchmark “are likely wrong.”
RAND published a paper on how to verify compliance with international AI agreements.
The Horizon Fellowship opened applications for its 2026 cohort.
The FDA's AI tool reportedly hallucinates nonexistent studies and misrepresents research.
OpenAI announced a 12-month project to “assess AI’s impact on productivity and the workforce.”
Quanta has a piece on how AI systems are designing “bizarre” physics experiments — that actually work.
A new Oxford Internet Institute paper argued that worries about AI’s impact on elections are overstated.
A large-scale study from UK AISI researchers investigated the political persuasiveness of LLMs.
Netflix has reportedly begun using Runway AI’s video generation software for content production.
Thanks for reading; have a great weekend.
Corrections, 25 July: Corrected the name of Steinhauser’s organization (the Alliance for Secure AI, not the Secure AI Project) and the acquirer of Bee (Amazon, not Anthropic). We apologize for the errors.