Washington is waking up to AGI
Transformer Weekly: A very AGI-pilled Congressional hearing; moratorium machinations; and a bunch of copyright developments.
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
It’s taken a while, but Washington seems to finally be waking up to the potential arrival of AGI — and the many risks that could accompany it.
At an AI-focused hearing of the House Select Committee on the CCP this week, lawmakers demonstrated a level of situational awareness that would have been unthinkable just months ago.
Here are a few eye-popping excerpts:
Rep. Raja Krishnamoorthi: “Whether it’s American AI or Chinese AI, it should not be released until we know it’s safe. That's why I'm working on a new bill, the AGI Safety Act, that will require AGI to be aligned with human values and require it to comply with laws that apply to humans. This is just common sense.”
Rep. Jill Tokuda: “Mr. Beall, your testimony makes clear that artificial superintelligence, ASI, is one of the largest existential threats that we face right now …. Should we also be concerned that authoritarian states like China or Russia may lose control over their own advanced systems? … And is it possible that a loss of control by any nation-state, including our own, could give rise to an independent AGI or ASI actor that globally we will need to contend with?”
Rep. Dusty Johnson: “Mr. Beall, you noted at the top that maybe we've become numb to the headlines about all of the dangers of AI. I think that might be true. And yet, honestly, what we've heard today, I suspect, has scared the hell out of many of these committee members. Anybody who doesn't feel urgency around this issue is not paying attention.”
Rep. Neal Dunn, meanwhile, brought up the recent Claude-blackmailing-its-developers experiment. Rep. Nathaniel Moran asked about the risks of automated AI R&D. Rep. Ro Khanna brought up the importance of third-party verification of safety standards, and Rep. Seth Moulton proposed a “Geneva Conventions-like agreement that has a chance, at least, at limiting what our adversaries might do with AI at the extremes.”
These are the kinds of questions asked by people who seem to actually understand the technology and appreciate its impacts. And while the hearing was unsurprisingly focused on “beat China”, there was a substantive focus on safety concerns, too.
The shift extends beyond the CCP Committee. Pete Buttigieg wrote this week that “we are still underreacting on AI,” pointing both to the “obviously enormous” “physically dangerous or potentially nefarious effects of these technologies” and to the potential effects on wealth, jobs, and democracy.
“The terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly,” Buttigieg wrote.
In many ways, it echoes Sen. Chris Murphy’s piece from last week — as well as a recent piece from Axios founders Jim VandeHei and Mike Allen.
We’ve been here before, most notably in spring 2023. But this time feels different: more grounded, more action-oriented, and more durable.
That might just be wishful thinking on my part. Congress does, after all, seem poised to pursue a decade-long moratorium on state-level AI regulation (more on that shortly) — a move that certainly does not reflect situational awareness.
But I’m cautiously optimistic that the US’s — and the world’s — decisionmakers might finally be rising to the immense challenge that faces them. The question now is whether they can act quickly enough.
But, speaking of the moratorium, here’s a recap of the past week’s developments:
On Sunday, the Senate parliamentarian said the provision was Byrd-compliant.
Since then, opposition to the moratorium (now rebranded as a “temporary pause”) has ramped up, most notably with a Washington Post op-ed from Gov. Sarah Huckabee Sanders.
The tech industry’s lobbying effort to pass the moratorium has also ramped up, with a range of industry groups expressing their support this week.
Most notably, the Meta-funded dark money group American Edge Project said it’s doing a “seven-figure ad buy” advocating for it.
On Wednesday, a group of Republican senators, including Sens. Marsha Blackburn, Josh Hawley, and Rand Paul, reportedly wrote to Majority Leader John Thune asking him to remove the provision.
And on Thursday, it was reported that the parliamentarian had reopened discussions, seemingly wanting tweaks to clarify that violating the moratorium would only bar states from receiving their portion of the new $500m in BEAD funding, rather than the entire $42.5b pot.
Sen. Ted Cruz has said the provision is supposed to only apply to the new $500m.
So what does this all mean?
It’s probably safe to assume that the provision will be tweaked in line with the parliamentarian’s requests — a win for advocates, as it makes it much less costly for states to violate the moratorium.
In particular, with less money at stake, California and New York might decide that regulating AI is worth the cost.
After that, the reconciliation bill will go to a Senate floor vote, during which Sens. Josh Hawley and Maria Cantwell reportedly plan to bring forward an amendment to kill the provision altogether.
It’s possible that such an amendment would pass: along with Blackburn, Hawley, and Paul, Sens. Rick Scott, Ron Johnson, and Kevin Cramer have expressed dissatisfaction with the provision, while Sens. Jim Banks and John Curtis have said they’re uncertain about it.
And even if it does make it through the Senate, it could still struggle in the House, where Rep. Marjorie Taylor Greene is likely to oppose it. Given that the previous version of the bill passed the House by just one vote, that could be a real problem.
Policy
Reps. John Moolenaar and Raja Krishnamoorthi introduced the No Adversarial AI Act, an attempt to ban the US government from using Chinese AI models, including DeepSeek.
AI companies have reportedly received a new draft code of practice for the EU AI Act, which contains “streamlined risk measures” — but still maintains sections on third-party evaluations.
Germany’s data protection authority asked Apple and Google to remove DeepSeek from their app stores.
Switzerland expressed interest in hosting the 2027 AI summit.
Influence
The Computer & Communications Industry Association called for pausing the EU AI Act’s implementation.
OpenAI’s global affairs team published a piece about Zhipu AI, arguing that the company is making progress in helping the CCP “lock Chinese systems and standards into emerging markets before US or European rivals can.”
New documents show that Keir Starmer and Peter Kyle met lots of AI folks in the last few months, with the latter meeting Sam Altman, Dario Amodei, and Demis Hassabis.
Industry
The Microsoft-OpenAI negotiations are reportedly getting hung up on the definition of AGI.
The two are supposedly considering replacing the contract’s mention of AGI with ASI instead.
Relatedly: Microsoft is reportedly struggling to sell its Copilot AI assistant as people prefer using ChatGPT.
Microsoft’s Maia AI chip has reportedly been delayed a year, too.
Export controls are expected to slow adoption of DeepSeek’s R2, Chinese cloud provider sources told The Information, due to a shortage of Nvidia H20s to run the model on.
Relatedly, a senior State Department official said this week that DeepSeek’s trying to use “Southeast Asian shell companies” to get around US export controls on Nvidia chips.
Meta reportedly considered buying Perplexity and Runway. The company is reportedly in advanced talks to acquire PlayAI.
The NYT reported today that as part of the quest to sort out its AI strategy, Meta “discussed ‘de-investing’” in Llama, and instead “embracing” OpenAI and Anthropic’s models.
The company’s hired OpenAI’s Lucas Beyer, Alexander Kolesnikov, Xiaohua Zhai, and Trapit Bansal for its new superintelligence team.
Meta CTO Andrew Bosworth reportedly called Sam Altman “dishonest” for claiming that Meta was offering $100m signing bonuses for top AI talent.
Apple executives have reportedly debated acquiring Perplexity.
A bunch of big copyright developments this week:
In Bartz v. Anthropic, a judge ruled that the company’s use of copyrighted books to train Claude was “fair use,” but that it wasn’t allowed to train on pirated books.
In Kadrey v. Meta, the judge ruled in Meta’s favor, but said that training on copyrighted books is likely not fair use.
In the UK, Getty Images dropped its main copyright infringement claims against Stability AI due to jurisdiction issues.
And a group of authors sued Microsoft for using pirated books to train an AI model.
Google DeepMind launched AlphaGenome, an AI model that predicts how DNA changes affect gene activity.
It also launched Gemini CLI, a Claude Code competitor with very generous free usage limits.
Mira Murati’s Thinking Machines Lab reportedly raised $2b at a $10b valuation, the biggest seed round in history.
Its business model is reportedly “RL for businesses.”
Salesforce CEO Marc Benioff claimed that AI now handles 30-50% of the company's work.
Moves
Elena Hernandez is now chief of staff at the White House Office of Science and Technology Policy.
Rebecca Bellan is now covering AI at TechCrunch.
Anthropic announced plans to open its first Asian office in Tokyo later this year.
Best of the rest
Sam Altman turned up 10 minutes early to his Hard Fork Live interview to needle the hosts about the NYT-OpenAI lawsuit.
New research from Anthropic found that in some experimental settings, AI models from all the major companies will resort to deception, blackmail, and corporate espionage to achieve their goals.
A Business Insider investigation found that Scale AI “routinely uses public Google Docs to track work for high-profile customers like Google, Meta, and xAI, leaving multiple AI training documents labeled ‘confidential’ accessible to anyone with the link”.
The NYT profiled Amazon’s new 2.2 GW data center in Indiana, being built for Anthropic.
Anthropic published new research on how users discuss emotional issues with Claude.
It also launched an “Economic Futures Program”, which will award a bunch of research grants for work on how AI is affecting the economy.
RAND launched a “Geopolitics of AGI” Substack.
The BBC reportedly threatened legal action against Perplexity.
Thanks for reading; have a great weekend.