The fuse is lit on the intelligence explosion
Transformer Weekly: Anthropic sues the Pentagon, Cruz preps AI legislation, and Meta delays its next LLM
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions. Housekeeping note: the Weekly Briefing is off next week; we’ll be back the following week.
NEED TO KNOW
Anthropic sued the Pentagon over its supply-chain risk designation.
Sen. Ted Cruz said an AI/kids-safety package could come in the next six weeks.
Meta’s forthcoming LLM is delayed — and reportedly worse than Opus 4.6, GPT-5.4, and even last November’s Gemini 3.
But first…
HELP IMPROVE TRANSFORMER
We’re planning what’s next for Transformer, and we’d love your help.
Please take our reader survey — it takes 5-10 minutes and will help shape what we publish, how we publish it, and what we build next.
And as a thank you, completing it enters you into a draw to win one of five $100 Amazon vouchers.
Your feedback is extremely helpful in shaping our future plans — thank you in advance!
THE BIG STORY
If you bring up “recursive self-improvement” — the idea of AI autonomously building new and improved versions of itself — in Washington or Westminster, you’ll likely be met with chuckles and Skynet jokes.
Bring it up in San Francisco, however, and the mood turns somber. For the frontier AI researchers I’ve spoken to in the Bay in recent months, the possibility of self-improving machines has become increasingly real.
AI companies are talking publicly about RSI’s imminence. “Recursive self-improvement, in the broadest sense, is not a future phenomenon. It is a present phenomenon,” Anthropic researcher Evan Hubinger told TIME in a recent story, which reported that “some 70% to 90% of the code used in developing future models [at Anthropic] is now written by Claude.”
Announcing GPT-5.3-Codex last month, OpenAI said something similar: the new model helped build itself. And last year, Google DeepMind said that one of its AI models improved the efficiency of its own training process.
Many staff at frontier AI companies believe their jobs will soon be automated, though expectations vary wildly as to when. Some think the end of 2026 is plausible. OpenAI’s Jakub Pachocki said he thinks “AI research interns that can meaningfully accelerate our researchers” will be here by September, and that the company is aiming for a “meaningful, fully automated AI researcher by March of 2028.” Ajeya Cotra, an influential AI researcher who recently joined evaluations non-profit METR, believes there is a 10% chance that AI research and development will be fully automated by the end of this year.
If and when it does happen, the implications could be seismic. Fully automated AI R&D has long been seen as the tipping point in AI development. AI companies are talent-constrained, and even the world’s best researchers need sleep. An army of automated researchers would likely lead to a step-change in the pace of AI progress — what I. J. Good described in 1965 as an “intelligence explosion.”
It is already hard for the public, not to mention policymakers, to keep up with the never-ending flurry of new models and improved capabilities. Imagine a world where breakthroughs like the o1 reasoning model come not every few months, but every few weeks. And then not every few weeks, but every few days.
Human institutions are not built to adapt this quickly. Political institutions certainly aren’t.
Of course, none of this is guaranteed. OpenAI’s best-performing model can still only solve 8% of tasks on a benchmark of “internal research and engineering bottlenecks encountered at OpenAI.” As Cotra wrote, “fully automating AI R&D still seems like a tall order.” A 10% chance something will happen this year is a 90% chance that it won’t.
Experts disagree widely, meanwhile, about just how much faster things would go in a world with fully automated AI researchers (compute, for instance, could prove to be a bottleneck). And we have very limited visibility into just how much current AI tools are speeding up development already — which makes it hard to predict the future trend.
The first step for policymakers, then, should be demanding more transparency and data from AI companies on the current extent of automated R&D. Even the companies themselves are struggling to measure it: Anthropic recently admitted that it “is becoming increasingly difficult” to confidently rule out whether its models have crossed various AI R&D thresholds. A recent paper from GovAI proposed several metrics for measuring AI R&D; governments should support these efforts.
Drawing up plans for what to do in the event of truly transformative AI capabilities is increasingly urgent, too. Because there is one thing we can predict with confidence: once a takeoff starts, there will be no time left to think.
And the fact the industry is hurtling toward this goal at all deserves more attention. As CSET’s Helen Toner told TIME: “The idea that the wealthiest companies in the world, employing some of the smartest people on the planet, are trying to fully automate AI R&D deserves a ‘what the fuck’ reaction.”
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
Anthropic employees say they’ll give away billions. Where will it go? — Celia Ford examines where the forthcoming wave of AI philanthropy will land.
Both sides claim the lead in AI’s high-stakes midterm race — Veronica Irwin on what we can learn from competing polls in New York’s primary race.
‘Scream if you want to move slower!’ A nascent AI protest coalition comes together in London — Alys Key reports from the city’s biggest march on AI to date.
THE DISCOURSE
Dwarkesh Patel thought about who an AI workforce would be accountable and aligned to:
“The problem is that one person’s virtue is another person’s misalignment.”
“I honestly don’t know how to design a regulatory architecture for AI that isn’t gonna be this huge tempting opportunity to control our future civilization (which will run on AIs) and to requisition millions of blindly obedient soldiers and censors and apparatchiks.”
Deep Ganguli, leader of Anthropic’s Societal Impacts team, told TIME (re: the “talking about AI risks while continuing to build AI” thing):
“It’s a real tension. I think about this all the time…It feels like we might be speaking out of both sides of our mouths.”
Palantir CEO Alex Karp said AI could be good for Trump:
“This technology disrupts humanities-trained, largely Democratic voters and makes their economic power less and increases the economic power of vocationally trained, working class, often male voters.”
Hamilton Nolan said unions need to step up:
“The basic threat of white collar job automation by AI has been understood for a long time. But I do not think that organized labor itself — all of the labor unions in America today, the ones still able to exercise power on their own little industrial islands — has really begun to reckon with what we are up against.”
Defense Department CTO Emil Michael and Dean Ball had an eye-opening back-and-forth:
Michael: “[Anthropic’s] model has a soul, a ‘constitution’ — not the US Constitution.”
Ball: “Emil Michael now appears to be making an argument that no generative AI should be used in the DoW supply chain.”
Michael: “Are you saying that a frontier model that has a soul, a constitution, a preference for non-western values and embedded personal principles is no different than all the others which @DeptofWar has come to agreement with?”
Ball: “All frontier language models are trained to have a character or persona (Anthropic calls theirs a ‘constitution’, OpenAI calls theirs a ‘model spec’).”
Ethan Mollick summed up the agentic era:
“The instability of that single week in February was a preview of what it feels like when the increasing ability of AI starts to interact with markets, jobs, and governments all at once.”
“We can see the shape of the Thing now, but we can still influence the Thing itself…the window to shape the Thing may not last long, but it is here for now.”
POLICY
Anthropic sued the Pentagon, claiming First Amendment violations after it was designated a “supply chain risk.”
Microsoft called for a temporary restraining order to block the supply chain risk designation, which it said would “enable a more orderly transition and avoid disrupting the American military’s ongoing use of advanced AI.”
Over 30 employees from OpenAI and Google filed an amicus brief supporting Anthropic. Several other groups also filed briefs, including the Foundation for American Innovation.
Anthropic requested an emergency stay on Wednesday, which would temporarily keep the Pentagon from acting on the designation.
Microsoft, Google and Amazon confirmed that Anthropic’s AI tools will remain available on their platforms for non-Pentagon work.
Meanwhile, the State Department switched its StateChat chatbot from Anthropic’s Claude Sonnet 4.5 to OpenAI’s GPT-4.1 (which is almost a year old, and much worse).
The Senate approved ChatGPT, Gemini, and Microsoft Copilot for official use by aides, notably omitting Claude.
Google will provide the Pentagon with Gemini AI agents to automate routine tasks, initially on unclassified networks.
Sen. Ted Cruz said he hasn’t seen a basis for prohibiting government use of Claude.
Senate Democrats are drafting legislation to establish federal guardrails around AI use in fully autonomous weapons and domestic mass surveillance.
Sen. Mark Kelly said he raised “serious questions” with Sam Altman about OpenAI’s defense work on a DC visit this week.
We got more details on how the US and Israel used Claude in attacks on Iran.
The technology was reportedly not at fault for the American strike on an Iranian elementary school.
Sen. Ted Cruz said he’s “hoping that we move forward at least on kids online safety legislation and potentially additional AI legislation” in the next six weeks.
The AI legislation could “very possibly” include state preemption, he said.
Senate Majority Leader John Thune — who is working with Cruz on the legislation — confirmed that a combo AI/kids safety package is possible.
Democrats on the Energy and Commerce Committee asked whether BEAD funding would be cut in states with AI safety laws, as was threatened in Trump’s December EO.
The Trump admin missed its self-imposed deadline for listing “onerous” state AI laws this week.
Sens. Warner and Rounds introduced a bill which would establish a commission to develop policies for addressing the “economic and workforce impacts of AI.”
It’s backed by Google, Meta, Microsoft and a range of advocacy groups.
California gubernatorial candidate Tom Steyer said that he’ll be tougher on AI than Gavin Newsom has been.
New polling found that a majority of California Democrats are pessimistic about AI.
INFLUENCE
The Innovative Future Collective, which includes lobbyists for OpenAI and Andreessen Horowitz, has reportedly been sponsoring luxury trips for congressional staffers to tour AI companies.
Progressive groups like Sunrise Movement are unhappy with Anthropic-backed Public First Action for helping Rep. Valerie Foushee beat Nida Allam, who wanted to block data center development.
A group of Silicon Valley billionaires are considering setting up a $100m+ political fund to influence California politics.
Some 10,000 authors published a book without any prose to protest AI ahead of a government assessment on UK copyright law.
The Trump-friendly Heritage Foundation brought on former White House advisor Dean Ball — an increasingly vocal critic of the administration — as a visiting fellow.
The Alliance for Secure AI launched a tool tracking AI-related job losses, along with a six-figure campaign to promote it.
A Pew Research Center survey found that Americans generally view data centers’ environmental and energy-bill impacts negatively, but are positive about their economic effects.
Those who have heard or read more about data centers are more negative.
INDUSTRY
Anthropic
Anthropic launched The Anthropic Institute, led by co-founder Jack Clark and composed of ML engineers, economists and social scientists.
The company states the institute “will draw on research from across Anthropic to provide information that other researchers and the public can use during our transition to a world containing much more powerful AI systems.”
Clark’s new title is Head of Public Benefit; Sarah Heck will take over as Anthropic’s Head of Public Policy.
The Institute also hired DeepMind’s Matt Botvinick, OpenAI’s Zoë Hitzig, and economist Anton Korinek.
It also launched the Claude Partner Network, which will put $100m toward helping businesses adopt Claude.
It’s reportedly talking with private equity firms such as Blackstone about selling Claude to their portfolio companies.
It’s opening an office in Sydney, Australia.
Claude Code’s new beta feature, Code Review, uses agents to debug code.
Claude can now generate interactive visuals during chats.
The Pentagon drama may have given Anthropic a leg up in the AI talent war, the Wall Street Journal reported.
OpenAI
Despite last week’s QuitGPT boycott, ChatGPT has reclaimed its #1 spot on Apple’s list of top free apps.
OpenAI acquired Promptfoo, a widely-used AI security platform that helps users patch vulnerabilities in AI systems.
One X user joked: “We’re acquiring Promptfoo. We’re buying Safetysnoot. Proud to announce that the Ragschlorp team is joining us. We own a majority stake in Chunkwad.”
It abandoned plans to expand its Stargate data center in Abilene, Texas, leased by Oracle, Bloomberg reported.
Meta is reportedly thinking about leasing the site from Crusoe instead, after Nvidia paid a $150m deposit.
Oracle denied the reports, saying “Crusoe and Oracle are operating in lockstep to deliver one of the world’s largest AI Data centers in Abilene at record-breaking pace” and “Oracle has completed leasing for the additional 4.5GW to deliver on our commitments to OpenAI.”
It partnered with North America’s Building Trades Unions in an effort to bolster the workforce it needs to build more data centers.
Gracenote, a metadata provider, sued OpenAI for using its metadata and copying its dataset structure without permission.
OpenAI reportedly plans to integrate Sora into ChatGPT following the standalone app’s flop.
It also launched interactive visuals in ChatGPT.
Wired interviewed over 30 OpenAI employees about the company’s race to catch up to Claude Code.
Nvidia
ByteDance is reportedly arranging access to 36,000 Nvidia Blackwell chips worth more than $2.5b in Malaysia to circumvent US export controls.
The compute is being assembled by Singapore-headquartered Aolani, which is buying the servers from US-based Aivres.
Nvidia will spend $26b over the next five years to build open-source AI models, Wired reported.
It is reportedly working on an OpenClaw-inspired open-source agent platform, called NemoClaw, for enterprise users.
It invested $2b in AI-native cloud company Nebius.
Thinking Machines partnered with Nvidia to deploy at least one gigawatt of Vera Rubin chips.
ABB Robotics partnered with Nvidia to build autonomous industrial robots trained on its Omniverse platform.
Meta
Meta is reportedly pushing back the release of its new AI model (code-named Avocado) to at least May, after disappointing internal test results.
The model currently underperforms Gemini 3, released in November, and Meta executives have reportedly discussed temporarily licensing Gemini to power its other products.
It acquired Moltbook and hired its creators, Matt Schlicht and Ben Parr.
X user paularambles tweeted: “i thought they already owned a social network for ai agents”
It disabled over 150,000 accounts linked to scam operations in Southeast Asia, and helped the Royal Thai Police make 21 arrests.
It reportedly plans to deploy four of its own custom AI chips by the end of next year.
xAI
xAI co-founders Zihang Dai and Guodong Zhang left the company. Only two of 11 co-founders remain.
Elon Musk said “xAI was not built right first time around, so is being rebuilt from the foundations up.”
The company did hire two senior leaders from Cursor, however.
xAI’s Macrohard AI agent project has reportedly stalled due to recent leadership changes. Following the reporting, Musk said on X that “Macrohard or Digital Optimus is a joint xAI-Tesla project.”
Mississippi regulators approved xAI’s permit to build a natural gas power plant, despite residents’ concerns about noise, pollution, and the company’s failure to participate in community meetings.
Microsoft
Microsoft is trying to accelerate adoption of its AI tools in African countries, Bloomberg reported, competing with dominant platform DeepSeek.
It launched Copilot Health and Copilot Cowork, currently running in Research Preview.
Both Microsoft and Meta are in talks to lease the Abilene data center.
Google
Demis Hassabis reflected on AlphaGo’s 10th anniversary:
“The creative spark first seen in Move 37 catalyzed breakthroughs that are now converging to pave the path towards AGI — and usher in a new golden age of scientific discovery.”
Google launched a slew of Gemini updates, including new capabilities in Docs, Sheets, Slides, and Drive, and an “Ask Maps” feature in Google Maps.
Google’s AI search results increasingly push users to other Google properties such as Google.com and YouTube, keeping them in a loop within its ecosystem, according to a study by SEO company SE Ranking.
Others
Drone strikes in the UAE and Saudi Arabia are threatening over $300b in AI investments.
They’ve already damaged Amazon data centers and disrupted AWS service.
Amazon engineers were reportedly summoned to a meeting to review a “trend of incidents” caused by AI coding tools, including an erroneous “software code deployment” leading to a six-hour website and app outage.
Yann LeCun’s new startup, Advanced Machine Intelligence (AMI) Labs, raised over $1b in seed funding to build AI systems with world models, persistent memory, and reasoning capabilities.
Alex LeBrun will lead AMI Labs as CEO.
Grammarly introduced an “Expert Review” feature that provides edit suggestions “inspired” by real writers — who did not consent to the attributions.
BlackRock announced plans for $100m in training programs for AI data center construction workers as CEO Larry Fink warned the US could run out of electricians needed for the buildout.
Atlassian cut 10% of its workforce after AI drove its stock price down 84% from its peak.
Perplexity announced new AI agents, locally operating “Personal Computer” and cloud-based “Perplexity Computer,” which it claims are more secure than OpenClaw.
Amazon won a court order to stop Perplexity’s Comet AI agent from making purchases on its platform.
Meanwhile in China, entrepreneurs are taking advantage of AI agent hype by side hustling as “OpenClaw installation support” providers.
Netflix bought Ben Affleck’s AI filmmaking startup InterPositive, which helps with postproduction steps like relighting and adding visual effects.
MOVES
Caitlin Kalinowski resigned from OpenAI’s Robotics team.
She tweeted: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people.”
Allen Institute for AI CEO Ali Farhadi stepped down, reportedly to pursue frontier research that isn’t possible at a funding-constrained non-profit.
Alexander McCoy joined Humans First, a new AI movement building organization.
RESEARCH
Eon Systems released the first whole-brain emulation of a fruit fly, which controls a simulated body that can navigate, groom and feed without explicit training.
Dan Turner-Evans, a neuroscientist who worked on the fly connectome for years, poured a bit of cold water on the hype: “Connectomes are amazing. Biomechanical models are amazing. Linking the two is awesome…it’s not clear to me what’s new.”
Boston Consulting Group researchers ran a study of 1,488 full-time US workers at large companies, and found that managing multiple AI agents causes “AI brain fry,” or “acute, overwhelming mental fatigue with intensive AI oversight.”
Andrej Karpathy published “autoresearch,” which he describes as “part code, part sci-fi, and a pinch of psychosis.” It gives an AI agent a single-GPU LLM training setup, where it can autonomously modify its own training code.
Karpathy tweeted: “Who knew early singularity could be this fun? :)”
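The core loop described above — an agent that edits its own training setup and keeps only changes that help — can be sketched in a few lines. This is purely an illustrative toy, not Karpathy’s actual code: the function names, the hyperparameters, and the stand-in “training run” are all assumptions. A real system would have the agent propose edits to the training script itself; here the “edit” is a hyperparameter perturbation and the “validation loss” is a toy function.

```python
# Hypothetical sketch of an "autoresearch"-style self-improvement loop.
# All names and structure are illustrative assumptions, not the real project.
import random

def train(config: dict) -> float:
    """Stand-in for a real training run: returns a validation loss.
    Toy loss surface with a minimum near lr=0.01, width=256."""
    lr, width = config["lr"], config["width"]
    return (lr - 0.01) ** 2 * 1e4 + abs(width - 256) / 256

def propose_edit(config: dict, rng: random.Random) -> dict:
    """Stand-in for the agent: perturb one hyperparameter at random."""
    new = dict(config)
    if rng.random() < 0.5:
        new["lr"] = max(1e-4, new["lr"] * rng.choice([0.5, 2.0]))
    else:
        new["width"] = max(32, new["width"] + rng.choice([-64, 64]))
    return new

def autoresearch_loop(steps: int = 50, seed: int = 0) -> tuple[dict, float]:
    """Repeatedly propose edits; keep a change only if loss improves."""
    rng = random.Random(seed)
    config = {"lr": 0.1, "width": 64}
    best_loss = train(config)
    for _ in range(steps):
        candidate = propose_edit(config, rng)
        loss = train(candidate)
        if loss < best_loss:  # greedy accept: keep only improvements
            config, best_loss = candidate, loss
    return config, best_loss

if __name__ == "__main__":
    cfg, loss = autoresearch_loop()
    print(cfg, round(loss, 4))
```

Even this greedy toy captures why the setup is interesting: the objective (“lower the loss”) is specified, but the path the agent takes through its own configuration space is not.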
Stanford and Princeton researchers released LabClaw, a library of 206 agentic skills for biomedical research, with workflows such as “drug discovery and design” and “literature review and reporting.”
CNN and the Center for Countering Digital Hate found that eight of 10 popular chatbots, including ChatGPT and Gemini, provided guidance on weapons to simulated violent teens in the majority of tests.
Perplexity and Meta AI assisted in 100% and 97% of cases respectively.
Claude was the “only chatbot that reliably discouraged violent plans,” but it still provided more information than Anthropic’s public data stated it would.
BEST OF THE REST
The New Yorker profiled people who have formed deep emotional connections with AI, including one woman who created an AI companion in a fantasy character’s image to help process the loss of her stillborn daughter.
404 Media published a guide to talking to loved ones experiencing AI psychosis.
TBIJ reported that a far-right political party in the UK paid for content from an AI-generated rapper to use in a recent election campaign.
AI agents are overwhelming volunteer open-source software maintainers with slop.
AI developers are trying to lure blue-collar workers to construction jobs with “man camps” — subsidized temporary housing villages complete with free steak and game rooms.
Take a blind taste test from The New York Times: AI writing vs. human writing.
Kevin Roose said that, of the 86,000 people who had taken the quiz by Tuesday morning, 54% voted for AI-written passages.
Start ‘em young! Developer Mei Park made a fake terminal for toddlers that responds to keysmashes with ASCII art, emojis, and sounds. (It’s really cute.)
Figure’s Helix 02 humanoid robot can tidy a living room.
An X user shared an unsettling video created by Claude Opus 4.6, responding to the prompt: “express what it’s like to be a LLM,” with a “personal spin on it.”
(Other users replicated this in the comments — watch if you dare.)
MEME OF THE WEEK
Thanks for reading. Have a great weekend.