The Pentagon is already suffering the consequences of banning Anthropic
Transformer Weekly: The battle for Gottheimer, OpenAI’s ‘New Deal’, and Meta’s new model
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
Job alert! We’re hiring for a Head of Audience: someone to own our growth strategy and take charge of how we reach readers. See full details here, and apply by April 26.
NEED TO KNOW
Leading the Future endorsed Rep. Josh Gottheimer, despite Public First Action’s previous ads targeting him.
OpenAI released a policy document proposing a ‘New Deal’ for AI, including proposals for higher capital gains taxes and a public wealth fund.
Meta released Muse Spark, its first new model since setting up Meta Superintelligence Labs.
But first…
THE BIG STORY
If I were Pete Hegseth, this week’s news would give me pause.
An American company announced that it has built an extremely powerful cyber tool which has found vulnerabilities in every major operating system and web browser. Rather than releasing it to the public, Anthropic has made Mythos Preview available to a select group of trusted partners, who will hopefully use it to harden their defenses before such capabilities proliferate too widely.
If I were Pete Hegseth, I would want my hands on this model very badly. I would want to take full advantage of America’s AI lead over China and other adversaries by securing critical infrastructure before they can attack it. I would also want to use the model against America’s adversaries: it might come in handy in the current war — not that America needs any help (👊🇺🇸🔥).
And if I were Pete Hegseth, I would be kicking myself for the unforced error I made last month, which has blocked me from being able to do any of that.
When Hegseth, Emil Michael, and President Trump kicked Anthropic out of the Pentagon last month, they severed their own access to America’s most capable AI.
Something like Mythos was predictable, if you believe AI capabilities are rapidly advancing. But this administration doesn’t. Its entire posture treats AI as incremental: good for the economy, but nothing revolutionary, disruptive, or posing imminent national security risks. Mythos just proved that assumption badly wrong, and the administration is paying the price.
Technically, certain government agencies could use Mythos. This week, Anthropic lost its bid to block the Pentagon’s designation in a DC court — but there is a six-month grace period before the Pentagon must cease using Anthropic’s technology. And a preliminary injunction from a federal judge in California last month means that Anthropic’s technology remains available to all other agencies, including the Cybersecurity and Infrastructure Security Agency (which has had conversations with Anthropic about the product).
But in either case, using Mythos would mean working with a firm that the President himself has deemed a “radical left, woke company.” Are agency heads brave enough to so directly defy him?
It may be the case that access to Mythos isn’t essential. OpenAI is preparing a similar model, and the Pentagon is free to use that. But betting on OpenAI maintaining parity is not a good national security strategy. The United States needs access to every leading AI tool, as soon as it is available. Choosing to rely on one lab instead of two, especially during a closing window of defensive advantage, is cutting off your nose to spite your face. And if open-weight or Chinese capabilities catch up before OpenAI does, Hegseth may find himself defenseless.
Getting out of this self-inflicted bind is simple, but it will not be easy. Pete Hegseth will have to admit he was wrong. But admit he should. A year from now, when adversaries have Mythos-level capabilities and the question is whether America used or squandered its lead, nobody will care about the mea culpa. They’ll care whether the Secretary of Defense did what he could to protect the country — or whether he let pride get in the way.
— Shakeel Hashim
ALSO NOTABLE
Rep. Josh Gottheimer had been pegged as an AI safety candidate. But the biggest pro-innovation super PAC thinks he’s still in play.
On Wednesday, pro-innovation super PAC Leading the Future endorsed Gottheimer and four other House Democrats. But it wasn’t the first: last month, its opposition, Public First Action, targeted Gottheimer in an ad of its own.
As a moderate congressman who says “preemption only makes sense” if paired with stronger rules than the White House is offering — but still likes to reach across the aisle — Public First’s ad urging him to “stand strong for AI safeguards” was predictable. Leading the Future’s endorsement is more of a surprise, signaling that the super PAC thinks he could be converted to the Church of Accelerationism — and, in turn, that it will use its money to sway agnostic candidates rather than just embolden the ones who have already picked a side.
Leading the Future also said it was endorsing California Rep. Sam Liccardo — a freshman congressman with a track record of courting the industry, but who’s said he’s “concerned” about AI-related political spending and recently told Politico he’s not going to “focus his energy” on the AI issue. Leading the Future seems to be doing what it can to ensure he doesn’t separate from the herd.
— Veronica Irwin
THIS WEEK ON TRANSFORMER
Claude Mythos knows when it’s breaking the rules — and tries to hide it — Celia Ford explains the new model’s weird misbehavior
Lawmakers are using AI to write laws. What could go wrong? — Katie McQue looks at the burgeoning phenomenon of AI-assisted lawmaking
THE DISCOURSE
Nicholas Carlini, Anthropic AI security researcher, said:
“I’ve found more bugs in the last few weeks with Mythos than in the rest of my entire life combined.”
OpenAI’s Boaz Barak wants Mythos for all:
“I think preserving models for internal deployment is risky. I encourage Anthropic to release Mythos, even if it’s a version that over refuses on cyber tasks or routes risky responses to a weaker model, as we did with codex.”
Yann LeCun simply tweeted:
“Mythos drama = BS from self-delusion.”
Ryan Greenblatt of Redwood Research, meanwhile, called BS on Anthropic’s claim in its Alignment Risk Update that it has “an achievable path” to mitigating risks:
“I don’t think Anthropic (or anyone) has an achievable path for keeping risk low if AI proceeds as fast as Anthropic expects.”
“Anthropic employees (especially Anthropic employees writing this report) often don’t believe there is an achievable path to keeping risk low if Anthropic builds powerful AI / ASI in the next 5 years, so this text seems incorrect or misleading.”
Helen Toner thinks “AGI” doesn’t mean anything anymore:
“Many people seem to treat AGI as a “know-it-when-you-see-it” kind of thing…[but] expecting that we’ll know it when we see it is patently not working.”
She suggested some more precise milestones:
“Full automated AI R&D”; “AI that is as adaptable as humans”; “Self-sufficient AI”; and “AI becoming conscious or otherwise worthy of moral status”
The New Yorker sicced Ronan Farrow and Andrew Marantz on Sam Altman:
“‘He’s unconstrained by truth,’ the [OpenAI] board member told us. ‘He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.’”
Dario Amodei’s private notes from his OpenAI days were highlighted in their ~16,000-word novelette:
“[Altman’s] words were almost certainly bullshit.”
“The problem with OpenAI is Sam himself.”
Ben Thompson had some choice words about OpenAI’s TBPN acquisition:
“I’ve previously wondered if OpenAI might be like Twitter, another text-centric company that fell backwards into a huge market and never developed into a functional business because of it.”
“If Twitter is a clown car that fell into a gold mine, OpenAI might be the short bus at the end of the rainbow. There’s supposed to be a pot of gold there, but it never quite seems to materialize.”
POLICY
A DC appeals court declined to block the Pentagon’s national security blacklisting of Anthropic, though a California court previously blocked a separate designation order.
Treasury Secretary Scott Bessent and Fed Chair Jerome Powell told Wall Street bank CEOs to prepare for the cybersecurity risks presented by Mythos.
Sen. Ted Cruz backtracked on his initial late April timeline for AI legislation, saying that it’s “not set in stone.”
House Dems still haven’t begun negotiations with Republicans, Punchbowl reported.
The House Foreign Affairs Committee is reportedly planning a markup session on April 22 focused on chip export control bills.
It would include the STRIDE and MATCH Acts, which aim to increase America’s leverage to push countries like the Netherlands to adopt export control policies similar to those of the US.
Florida Attorney General James Uthmeier launched an investigation into OpenAI, citing concerns about harm to children and alleged facilitation of a mass shooting at FSU.
11 states have introduced bills that would halt data center development, as communities across the country grow increasingly frustrated with rising energy costs.
The CIA recently used AI to create its first autonomous intelligence report.
The UK government is reportedly courting Anthropic to expand in London.
Taiwan’s National Security Bureau reported that China is trying to “poach Taiwanese talent, steal technology, and procure controlled goods,” particularly when it comes to chip manufacturing.
INFLUENCE
The White House is reportedly pressuring lawmakers in Nebraska and Tennessee to weaken AI bills under consideration in the state legislatures.
OpenAI released a policy document proposing a ‘New Deal’ for AI.
It included proposals for higher capital gains taxes, a public wealth fund, and increased workers’ benefits investments.
Politicos across the political spectrum had critiques. Public First Action lead Brad Carson, for example, told WP Intelligence it was a “public relations document.”
Dean Ball, however, told the same journalist that he thought it was a legitimate proposal from OpenAI forecasting their version of a best case scenario for AI policy.
OpenAI also released a Child Safety Blueprint to address CSAM with updated legislation, better methods for reporting to law enforcement, and model safeguards.
OpenAI is backing an Illinois bill that would shield AI companies from liability for “critical harms” — including mass deaths or $1b+ in damages — if they published safety reports and didn’t act intentionally or recklessly.
LawAI’s Charlie Bullock said “the fact that they’re willing to publicly back this shows unbelievable chutzpah.”
xAI sued Colorado over its “algorithmic discrimination” law.
David Sacks applauded xAI’s lawsuit.
Colorado Governor Jared Polis has previously said he wants to water down the bill.
Google endorsed a group of bipartisan bills aimed at assessing AI’s economic impact, retraining workers, and encouraging AI adoption.
Tech lobbyists for Cisco and IBM attempted to roll back Colorado’s right-to-repair law by exempting broadly-defined “critical infrastructure” hardware.
Critics say that could create manufacturer monopolies on data center repairs.
INDUSTRY
OpenAI
OpenAI is reportedly planning to release a new model with Mythos-like advanced cybersecurity capabilities to a “small set of partners.”
It told investors it has a compute advantage over Anthropic, claiming 1.9 GW of capacity in 2025 versus Anthropic’s estimated 1.4 GW.
The memo characterizes Dario Amodei’s comparatively cautious spending as “[looking] less like discipline and more like underestimating how fast demand would arrive.”
CFO Sarah Friar said the company will reserve a portion of IPO shares for individual investors.
She’s reportedly been clashing with Sam Altman over IPO timing.
OpenAI reportedly expects $100b in ad revenue by 2030.
It paused its Stargate data center project in the UK due to high energy costs.
It announced the OpenAI Safety Fellowship for outside researchers to work on safety and alignment projects.
Fellows will be offered workspace at Constellation, a Berkeley office which also houses the Anthropic Fellows Program.
The OpenAI Foundation said it’s finalizing over $100m in grants for Alzheimer’s research.
Elon Musk requested that damages from his $150b lawsuit against OpenAI be awarded to the OpenAI Foundation, and that Sam Altman be removed from its board.
OpenAI accused Musk of “pretending to change his tune,” calling his lawsuit “a harassment campaign that’s driven by ego, jealousy and a desire to slow down a competitor.”
Meta
Meta released Muse Spark, its first model since acqui-hiring Alexandr Wang and setting up Meta Superintelligence Labs.
It’s not yet released the model weights, though reportedly plans to do so in the future.
It’s positioning Muse Spark as a smaller, more efficient model that can compete on the frontier (or, as Wired put it, “give Mark Zuckerberg a seat at the big kid’s table”).
Initial reviews are good-not-great.
Meta reportedly incentivized “tokenmaxxing” on an internal leaderboard called “Claudeonomics” — then shut it down after data from the dashboard was leaked.
It committed $21b to CoreWeave from 2027 to 2032.
It indefinitely paused work with Mercor while the startup investigates a major security breach.
A Meta-backed data center campus is seeking a first-of-its-kind loan for both construction and power, which would be generated on site.
Anthropic
Anthropic completed a tender offer at its $350b valuation, but employees held onto more shares than investors hoped — hinting at optimism about the company’s future prospects.
The company said its revenue run rate is now over $30b, more than triple the $9b it was in December.
Claude subscriptions no longer cover usage on third-party tools, including OpenClaw.
It signed a deal with Google and Broadcom to expand its compute infrastructure.
And it signed a multi-year deal with CoreWeave.
It’s reportedly considering designing its own AI chips.
It’s also reportedly planning to invest $200m in a project allowing private-equity firms to sell AI tools to their portfolio companies.
Google DeepMind
Broadcom signed a long-term deal to supply Google with custom AI chips through 2031.
In response to recent lawsuits, Google added a Gemini interface that directs users to a crisis hotline if their conversation veers toward suicide or self-harm.
Gemini now has “notebooks,” a NotebookLM integration that organizes chats and files.
In an analysis of 4,326 Google searches, Gemini-3-powered AI Overviews were accurate 91% of the time, but often cited questionable sources.
Others
Intel joined Elon Musk’s Terafab AI chip complex project to produce processors for cars, humanoid robots, and data centers in space.
TSMC reported a 35% quarterly revenue increase, beating estimates.
Amazon said AWS AI revenue run rate is now $15b, while its custom chips business has an annual revenue run rate of over $20b.
OpenAI, Anthropic, and Google are setting their rivalry aside to collaboratively crack down on Chinese adversarial distillation attempts.
AI coding tools have created a “code overload” crisis, the New York Times reported, with tech workers churning out more code than companies know what to do with.
Apple’s App Store is getting a vibecode boost, too: the number of new apps published is up 84% relative to this time last year.
Perplexity’s monthly revenue is up 50% since last month, driven by its pivot from search to AI agents.
Cluely’s CEO Roy Lee admitted to lying about Cluely’s annual recurring revenue (ARR), tweeting that he “got a random cold call from some woman asking about numbers and told her some bs.”
AI startups often take creative liberties with ARR calculations. “The number can mean whatever the founder needs it to mean when they walk in to do a deal,” Stanford professor Chuck Eesley told Bloomberg.
MOVES
Fidji Simo took a leave of absence from OpenAI to focus on her health.
Chief marketing officer Kate Rouch is stepping down while recovering from cancer.
Brad Lightcap, OpenAI COO, will now lead special projects; chief revenue officer Denise Dresser will take over some of his previous duties.
Meanwhile, three OpenAI data center execs are reportedly leaving: Peter Hoeschele, Shamez Hemani and Anuj Saharan.
Eric Boyd joined Anthropic as head of infrastructure, leaving his role as president of Microsoft’s AI platform.
xAI has announced another major reorganization of its engineering team since its co-founders quit, Business Insider reported.
Jack Schwaiger resigned from the company yesterday, saying “I have learned the limits of how far I can push myself.”
Kyle Kosic, an xAI co-founder, joined Project Prometheus after Jeff Bezos poached him from OpenAI.
Zhou Jingren, formerly CTO of Alibaba Cloud, now runs the company’s AI division.
Sam Sheffer joined Google DeepMind to “bring vibecoding to the world.”
Long-time Tea Party activist and Trump supporter Amy Kremer joined Humans First, a new AI advocacy group.
RESEARCH
Researchers at AISLE, an AI cybersecurity company, claimed that small, cheap open-weight models successfully detected the same vulnerabilities Anthropic showcased in its Mythos announcement.
Security researcher Chris Rohlf argued that “all bugs are shallow with hindsight,” noting that the bugs found by Mythos survived decades of traditional security analysis.
Anthropic researcher Julia Merz dismissed AISLE’s approach as: “We took the needle the model found, isolated the relevant handful of the haystack, and then gave it to a small child, who found the needle as well.”
Researchers at MIT FutureTech studied over 17,000 worker evaluations of over 3,000 text-based work tasks across industries, and projected that LLMs will be able to complete most of those tasks at a “minimally sufficient quality level” by 2029.
San Diego State University surveyed 94,000 students across 22 California State University campuses, and nearly all reported using at least one AI tool.
65% of students are “skeptical about AI in education,” but roughly the same percentage said AI positively affected their learning.
Over 4 in 5 students responded that they’re worried about AI and job security.
A new Gallup survey of over 1,500 people found that while most of Gen Z uses AI regularly, those feeling hopeful about it dropped from 27% to 18% since last year.
New reports from Morgan Stanley and Goldman Sachs suggest that AI’s impact on jobs is “modest, but certainly real,” per Axios.
BEST OF THE REST
Christina Knight and Scott Singer argued that US-China cooperation on AI risks is more feasible than some think.
The Golden Gate Institute for AI’s Abi Olvera interviewed biosecurity professionals, who said that near-term AI developments might not make bioweapons much more common.
Ajeya Cotra outlined six milestones for AI automation — adequacy, parity, and supremacy in both AI research and AI production — with some lovely, easy-to-understand graphs.
Ryan Fedasiuk shared some practical cybersecurity advice for the post-Mythos era.
Hot on the heels of last week’s is-AI-in-journalism-bad discourse, Tim Requarth published a thorough post on the questionable behavior of widely-used AI detection company Pangram.
Economist Alex Imas made the case for collecting price elasticity data across every industry, so we can better predict how and when AI will displace jobs.
Young New Yorkers, worried about the costs of college and AI ruining their job prospects, are lining up for construction apprenticeships.
Vox’s Sigal Samuel responded to a parent wondering how to think about their kid’s future, now that the old “get good grades, go to college, get good job” formula is falling apart.
Her two cents: “As AI disrupts the labor market, I’m trying to move myself from the hoarding model to the solidarity model…if you focus on political engagement and collective organizing that could actually make some difference to the structural dynamic — and teach your child to ask structural questions and be civically engaged as well — you might be able to sleep a little better at night.”
Preorders are now live for friend-of-Transformer Garrison Lovely’s new book, ‘Obsolete’.
Have you seen the freaking MOON??? You should.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.