Who really won the moratorium fight?
Transformer Weekly: Meta’s superintelligence team, Malaysia export controls, and EU AI Act implementation fights
Happy Fourth of July! And welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
The big story
As those of you who follow me on Twitter will know, I ever-so-slightly embarrassed myself this week.
On Monday, soon after Sen. Marsha Blackburn said she’d reached a deal with Sen. Ted Cruz on the AI regulation moratorium, I said that the “AI safety ecosystem maybe needs to reckon harder with how it keeps losing every single major policy fight.”
Less than 24 hours later, Sen. Blackburn had reneged on the deal, the moratorium was dead, and the AI safety ecosystem had won a major policy fight.
Oops.
But though I’ll confess to having (hilariously) misjudged this particular fight, I do want to — perhaps ill-advisedly — double down a bit. Because while this certainly was a win for the AI safety world, I’m not convinced it’s as significant a victory as it might initially seem.
First, a look at what’s actually happened since last week.
Last Friday and over the weekend, opposition to the moratorium ramped up, with Republican governors and prominent state senators speaking out against it.
On Sunday, Sens. Blackburn and Cruz agreed on a compromise: shortening the moratorium to five years and exempting child-safety and creator-likeness rules.
Blackburn agreed to this in part because she thought it’d increase the chances of Cruz advancing her Kids Online Safety Act, Politico reported.
But later that night, LawAI produced an analysis showing that the proposed text might actually not do what Blackburn wanted it to. On Monday, that analysis was picked up and amplified to Senate leadership by a range of advocacy groups.
At the same time, Steve Bannon and Mike Davis reportedly started calling Blackburn’s office to get her to change her mind.
Davis claims to have spoken to President Trump about the bill, too, encouraging him not to endorse it.
Ultimately, those efforts prevailed: late Monday, Blackburn bailed on the compromise and signed onto an amendment to remove the moratorium provision.
Cruz, seeing the way the wind was blowing, gave up and also endorsed the amendment — which passed 99-1.
AI safety groups certainly get a lot of credit for this unlikely turn of events.
Without the coalition-building and campaigning efforts of Encode, Americans for Responsible Innovation, LawAI and others, Blackburn may not have changed her mind.
But that does not necessarily mean AI safety itself is on the up.
Blackburn was willing to compromise when she thought the moratorium wouldn’t apply to child-safety and creator-likeness issues. She backtracked only when it became clear that she was wrong.
It’s also notable that many of the groups who opposed the moratorium were motivated by child-safety and other near-term AI concerns.
In other words: I think this is better modeled as a victory for anti-tech groups than as a victory for AI safety in particular.
If the moratorium had applied only to SB 1047-style regulation focused on tackling extreme risks, I bet it would have passed.
This is an important distinction. It suggests that when AI safety groups’ interests align with other, more powerful actors, there’s the possibility for success. But on many occasions, those interests won’t align — and in such scenarios, success still looks rather distant.
And such scenarios might be imminent: this week, Rep. Brett Guthrie said he’s still going to try to get federal preemption through another way, echoing previous comments from Cruz.
As the Abundance Institute’s Christopher Koopman said this week, “we’re just getting started.”
The discourse
Ford CEO Jim Farley was refreshingly honest about the potential impact of AI on jobs:
“Artificial intelligence is going to replace literally half of all white-collar workers in the US.”
Jason Hausenloy noted the “populist awakening on AI”:
“The proposed 10-year state moratorium on AI regulation may prove to be one of the big strategic missteps in the ‘politics of AGI.’ It has awakened a new, populist coalition.”
Adam Thierer did too, and he’s not happy about it:
“Things may have gotten away from even [Trump] — and especially from his Tech Right allies — because their priority of building out world-leading computational capacity to counter the China threat is now itself directly threatened by a populist movement that is hell-bent on destroying a digital technology sector.”
A group of Stanford HAI authors put out a policy brief on the need for AI incident reporting:
“If policymakers are serious about regulating AI in a way that is effective, adaptive, scalable, and sustainable, building an adverse event reporting system should be a top priority.”
Anton Leicht isn’t sure about SB 813, the California private governance bill:
“Lawmakers are invited to grow complacent, to stay off the ball—and finally risk losing touch with what the movement at the technology frontier requires. In that sense, private AI governance risks speeding up what it’s trying to fix; frustration about the law’s current capacity might lead to its further hamstringing.”
RAND looked at whether AGI increases the risk of war:
“The probability of war is low in absolute terms. But preventive war appears relatively more likely to occur in an attempt to preserve a monopoly on AGI than to prevent one.”
Policy
The reconciliation bill contains a provision to “initiate seed efforts for self-improving artificial intelligence models for science and engineering,” powered by Department of Energy data.
As Vox’s Dylan Matthews helpfully lays out, it also contains a bunch of provisions that will make it harder to power data centers.
The Commerce Department is reportedly drafting a rule which would restrict AI chip shipments to Malaysia and Thailand, in an effort to stop China from getting them.
Meanwhile, the Commerce Department lifted controls on selling chip design software to China.
Rep. John Moolenaar sent a letter to Commerce Secretary Howard Lutnick with a bunch of AI policy proposals, including requiring location verification for advanced chips and keeping model weights “physically and legally under US control”.
The House Oversight cyber subcommittee will have a roundtable next week to discuss “AI in the real world”.
“During the roundtable, members will view demonstrations from three AI companies—Anthropic, Knightscope, and Fiddler AI—and discuss how these technologies will impact the use of AI across industry.”
The White House announced an AI Education Pledge with over 60 companies, including OpenAI and Google, committing to provide AI education materials to K-12 students.
The European Commission said it wouldn’t pause implementation of the AI Act, despite increasing lobbying pressure to do so.
This week, a group of big European CEOs — including the CEOs of ASML and Airbus — urged the Commission to pause implementation for two years, and Denmark’s digitalization minister made a similar call.
But today a Commission spokesperson said “there is no stop on the clock. There is no grace period. There is no pause.”
Meanwhile, AI companies that sign the code of practice have reportedly been invited to join a task force “that will contribute to the code’s implementation and future updates.”
Peter Kyle has reportedly told the Alan Turing Institute that “further action is needed to ensure the ATI meets its full potential,” following extensive criticism.
Kyle reportedly wants the Turing to “reform itself further to prioritize defense, national security and sovereign capabilities.”
Independent MP Iqbal Mohamed joined the PauseAI protest at Google DeepMind’s offices this week.
Influence
A Federation of American Scientists memo proposed “scaling up a significantly enhanced ‘CAISI+’”, with “expanded capacity for conducting advanced model evaluations for catastrophic risks”.
The Hill profiled the Alliance for Secure AI and its new ad campaigns, targeted at both the left and right.
Industry
Meta officially announced its AI restructuring, establishing “Meta Superintelligence Labs”.
The new team will be co-led by Alexandr Wang and Nat Friedman. Meta has successfully recruited a bunch of former OpenAI staff, too.
It’s reportedly offered as much as $300m over four years to some people.
OpenAI has reportedly told employees that it’s reviewing its own compensation in response. Chief research officer Mark Chen said that he feels like “someone has broken into our home and stolen something.”
Daniel Gross is reportedly joining too; on Thursday he announced that he was leaving Ilya Sutskever’s SSI.
Meta is reportedly buying out a stake in Gross and Friedman’s venture fund.
As for what all those new folks will be working on? Leaks this week showed that Meta is training its AI chatbots to send unprompted follow-up messages to boost user engagement.
Anthropic has reportedly hit $4b in annualized revenue, almost quadruple its revenue in January.
Meanwhile, Apple is reportedly considering using Claude to power Siri, though OpenAI’s models are also in consideration.
OpenAI has reportedly agreed to rent 4.5 GW worth of data center power from Oracle for Stargate.
Oracle announced this week that it had signed a $30b annual deal with a single customer — reportedly OpenAI.
OpenAI has reportedly launched a consulting service, which will help customers spending more than $10m fine-tune its AI models.
CoreWeave became the first cloud provider to deploy Nvidia's Blackwell Ultra AI chips.
Microsoft has reportedly scaled back its AI chip ambitions after internal struggles.
Alibaba announced new data centers in Malaysia and the Philippines.
Huawei open-sourced two AI models, while Baidu’s Ernie became open-source on Monday.
Saudi Aramco has reportedly installed DeepSeek’s models in its main data center, while HSBC and Standard Chartered are testing the company’s technology.
xAI raised $10b, half in debt and half in equity.
Surge AI is reportedly seeking to raise $1b at a $15b+ valuation.
Menlo Ventures is reportedly raising about $1.5b for a new set of AI funds.
Moves
Emran Mian is the new DSIT permanent secretary.
OpenAI announced three new DC hires: Joe Larson is now VP for government, John McCarrick is head of global energy policy, and Chad Tucker is joining the government affairs team.
Boris Cherny and Cat Wu, two of the leads on Claude Code, are joining Anysphere, the company behind Cursor.
James Czerniawski joined the Consumer Choice Center as head of emerging tech policy.
Masha Winchell joined Americans for Responsible Innovation as a legislative and policy analyst.
Jeff Bercovici is the WSJ’s new deputy tech and media editor.
Best of the rest
A group of superforecasters and biosecurity experts said the risk of a human-caused pandemic would increase fivefold if and when AI systems could match PhD virologists on a difficult troubleshooting test. AI systems can now do that.
Claude 4 did worse on METR’s time-horizon benchmark than some expected.
Cloudflare said it will block AI crawlers by default, and launched a “pay per crawl” marketplace.
Microsoft said its medical AI system diagnosed patients four times more accurately than human doctors while reducing costs by 20%.
Lawfare and the University of Texas have launched a new podcast on AI, law, and policy.
Helen Toner has a good piece on “unresolved debates about the future of AI.”
OpenAI’s Boaz Barak has an interesting essay on the risks that might remain even if we solve the technical alignment problem.
The Information has a good piece asking whether chains of thought will remain legible (and the importance of them doing so).
Michael C. Horowitz and Lauren A. Kahn argued that “nuclear non-proliferation is the wrong framework for AI governance.”
Anthropic ran an experiment to see if Claude could run a vending machine. It didn’t go very well.
The Verge has an interesting piece on how adult performers are using AI tools to be more efficient.
Robinhood started offering “OpenAI tokens” to European users, as part of its new “tokenized equity” feature. OpenAI said it’s got nothing to do with it.
TIME explored the rise of “AI resurrection” tools, with people using AI to reanimate images of dead loved ones.
Thanks for reading; have a great weekend.