The two fronts in the OpenAI and Anthropic battle
Transformer Weekly: New Claude Mythos model details leaked, Anthropic wins injunction against DoD blacklisting and conservative groups form AI alliance
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Anthropic confirmed leaked details of a new model called “Mythos.”
A court temporarily halted the DoD’s supply chain risk designation against Anthropic.
Conservative groups formed an alliance on AI to “prioritize the interests of children, workers, and creators.”
But first…
THE BIG STORY
Announcing the shuttering of Sora this week, OpenAI tweeted to the generative video app’s couple of million users: “What you made with Sora mattered.”
The obvious rejoinder to that statement is that Sora clearly didn’t matter to OpenAI enough to keep it running longer than half a year.
In the grand scheme of things, Sora’s demise is not a huge deal for OpenAI, or the world. But the move, and its timing, highlight the interesting position OpenAI finds itself in, particularly in its battle with Anthropic.
In recent weeks it has increasingly looked like Anthropic was winning both the commercial battle and the fight for public opinion. Since the tail end of last year, Anthropic’s Claude Code has successfully captured the zeitgeist around LLMs being a truly revolutionary tool for coding, despite OpenAI’s Codex being on par with or better for some tasks. Anthropic’s fight with the Department of Defense also allowed it to cast itself as the more moral actor — as almost the anti-war AI company — despite the fact that its technology is integrated into systems being used to wage an actual war.
But OpenAI is clearly trying to regain the initiative. The closure of Sora in theory fits with its commitment last week to no longer pursue “side quests” and refocus on serving business users. The company also hired a senior Meta executive to lead its advertising push, which, despite the damaging incentives it creates, has a good chance of being a big revenue driver.
Another move this week, announced the same day as the Sora closure, was the pledge from OpenAI’s non-profit foundation to make its first billion dollars’ worth of grants this year, supporting research into the economic impacts of AI and into life sciences, including cures for diseases such as Alzheimer’s. OpenAI had already publicly committed to pumping that money into good causes, but the timing is obviously convenient as it tries to wrest back some of the perceived moral high ground Anthropic has occupied. This week’s decision to shelve plans to add erotic content to ChatGPT may also help it avoid more bad press.
All these moves target the two areas where OpenAI looked to be falling behind Anthropic — public opinion and commercial confidence. Both are key to the most important goals OpenAI shares with its main rival: winning the race to create a truly transformational model and, more immediately but no less existentially, pulling off a successful IPO.
Deep-pocketed investors are crucial in funding the huge expansion in compute that OpenAI and Anthropic need. Their level of belief in getting a massive return will also decide whether those IPOs are blowouts or flops. How successful those IPOs are will likely dictate whether the AI companies have the momentum to keep developing more powerful models.
OpenAI’s shuttering of Sora and its philanthropic donations will go some way to keeping the markets happy and improving its reputation with the public. It will likely have to do much more if it wants to win the race to build the AI model that really does upend the world.
— Jasper Jackson
THIS WEEK ON TRANSFORMER
Not everyone’s happy about Jensen Huang’s direct line to Trump — Jake Lahut reports on the unease in Trumpworld over the Nvidia CEO’s closeness to the president.
AI’s next big blue battleground — Veronica Irwin on the AI legislative fights taking place in Illinois.
The key detail everyone’s getting wrong about AI and the economy — Konrad Körding and Ioana Marinescu argue that the physical realities of work will limit AI’s impact on jobs.
THE DISCOURSE
Jensen Huang told Lex Fridman:
“I think we’ve achieved AGI.”
Mark Gubrud, the physicist who coined the term nearly 30 years ago, agreed:
“I INVENTED THE TERM and I say we have achieved AGI. Current models perform at roughly high-human level in command of language and general knowledge, but work thousands of times faster than us. Still some major deficiencies remain but they’re falling fast.”
Sam Altman, meanwhile, conceded to a room full of DC heavyweights:
“AI is not very popular in the US right now.”
Responses are mixed to the White House’s Federal Framework for AI:
Rep. Yvette D. Clarke said it was “written by Big Tech, for Big Tech.”
Dean Ball called it “a thoughtful document that will serve as an excellent foundation for the legislative work ahead.”
Andy Jung pointed out it “repeats the phrase ‘Congress should’ twenty-six times. Releasing this was the easy part. The hard part is actually getting lawmakers to write the laws.”
Joshua Achiam, OpenAI’s chief futurist, criticized pro-AI lobby ads opposing Alex Bores:
“The ads are Kathryn Hahn in Parks & Rec tier self-parody.”
“AI is unpopular so let’s…double down on making him look like The People’s Champion on fighting AI? Yeah, that’s gonna work in a D+Infinity district in a year where Bernie is telling people we have to stop building datacenters.”
Sen. Mark Warner bet the Axios AI+DC crowd:
“Recent college graduate unemployment is 9%. I’ll bet anybody in the room it goes to 30% or 35% before 2028.”
Dean Ball is concerned about how increasingly-independent AI agents will reshape work:
“The computer will use itself. With time, your use of the computer for work will look more and more like you are playing a strange video game…eventually AI will become better than people at the ‘supervising AI’ video game.”
“Then the question will be ‘can we invent some kind of social-legal-economic-technical logic for continuing to pay humans to play the video game.’”
POLICY
Anthropic won a preliminary injunction in its lawsuit against the Department of Defense, temporarily halting its designation as a supply chain risk and the White House’s order for federal agencies to stop using its services.
On Tuesday, presiding US District Judge Rita Lin said the administration’s moves against Anthropic “don’t really seem to be tailored to the stated national security concern. If the worry is about the integrity of the operational chain of command, [the Pentagon] could just stop using Claude.”
She also said: “I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic.”
In a filing in the case last week, the Pentagon claimed Anthropic was a risk to national security. However, the filing did not go as far as Defense Secretary Pete Hegseth’s tweet, seeking only to prohibit contractors from using Anthropic services on work for the DoD.
House Republicans reportedly plan to start formal negotiations with Democrats on AI legislation based on the federal AI framework.
Following the framework’s release, more than two dozen House Democrats introduced a bill to repeal the White House’s December executive order on AI. The order put in place measures for the pre-emption of state legislation, which is also a key part of the framework.
The House Democratic Commission on AI also held a listening session with three major Democratic caucuses to discuss the framework.
Senators on both sides of the aisle on the Senate Armed Services Committee want to address using AI for warfare in the NDAA.
Sen. Adam Schiff plans to introduce legislation placing guardrails on military use of AI.
The US-Israel war with Iran is threatening Trump’s AI chip export deals in the Gulf.
President Trump appointed Marc Andreessen, Sergey Brin, Jensen Huang, Mark Zuckerberg, and nine others to his President’s Council of Advisors on Science and Technology, co-chaired by David Sacks and Michael Kratsios.
Sacks told Bloomberg he has stepped down as White House AI and crypto advisor after using up his allotted time.
Nvidia CEO Jensen Huang said the company’s H200 chips would be available to Chinese customers in weeks.
Rep. Brian Mast wasn’t happy about it.
Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez unveiled legislation to pause all new data center construction nationwide until AI safeguards are in place.
Sen. Mark Warner sent letters to 17 tech companies including OpenAI, Anthropic, xAI, Meta, and Google calling on them to protect the public from deepfakes ahead of the midterms.
The letter suggests putting in place tools to detect deepfakes and creating a database of AI content that violates their policies.
The Trump administration announced plans for a consortium to invest more than $1t to secure semiconductor, energy, and mineral supply chains under its “Pax Silica” initiative.
The US will contribute $250m, and it’s unclear how the rest of the money will materialize.
INFLUENCE
Nine high-profile conservative groups including the Heritage Foundation and the Institute for Family Studies launched the Alliance for a Better Future “to prioritize the interests of children, workers, and creators” in AI policy.
It plans to spend more than $10m on advertising and broader advocacy this year.
The organization’s CEO Janet Kelly said: “The world’s most powerful technology companies are pouring hundreds of millions of dollars into political campaigns and lobbying efforts to give AI companies regulatory and legal amnesty.”
Republican super PAC American Mission disclosed $5m more in funding from Leading the Future, which is backed by Greg Brockman, Anna Brockman, and a16z.
Another LTF-backed super PAC, Think Big, reportedly spent $3.7m targeting NY Assemblymember Alex Bores’ campaign for the House of Representatives.
Palantir has reportedly become a political liability in US midterm campaigns, with Democratic candidates facing scrutiny over ties to the company due to its ICE contracts helping track deportations.
The Internet Watch Foundation said there had been a 260-fold increase in AI-generated CSAM videos online over the past year, with 65% classified as the most severe category.
The China Computer Federation called for a boycott of the NeurIPS conference over the decision by organisers to ban submissions from US-sanctioned companies such as Huawei.
The AI Dividend began distributing $1,000 monthly payments to 25-50 workers who lost jobs or income because of AI.
INDUSTRY
OpenAI
OpenAI discontinued Sora just six months after launching it as a standalone app.
Disney, which recently signed a now-defunct $1b three-year deal with OpenAI, was reportedly caught off guard by the decision.
The announcement came one day after OpenAI published detailed safety measures for Sora 2.
It also put “adult mode” on indefinite hold, responding to concerns from staff and investors.
It plans to nearly double its workforce from around 4,500 to 8,000 employees by the end of this year — roughly 12 new hires per day.
It raised another $10b from a group of investors including Microsoft and Andreessen Horowitz, taking its total raised in the round to more than $120b.
OpenAI is reportedly undercutting Anthropic on deals with private equity firms in an effort to gain ground in enterprise partnerships.
It acquired Astral, maker of widely used Python tools, to integrate with Codex.
The OpenAI Foundation, the company’s nonprofit arm, announced plans to invest at least $1b in research across life sciences, AI’s economic impact, and other AI risks.
It released an evaluation suite which measures how well its models follow their model spec, the document that defines ideal model behavior.
Anthropic
Anthropic has confirmed it is testing a new model called “Claude Mythos,” which it claims represents “a step change” and significantly outperforms Opus.
A leaked unpublished blog post about the model described “dramatic” improvements in software coding, academic reasoning, and cybersecurity. It also referenced a significantly increased risk that the model could be used to mount successful cyber attacks, as well as high running costs.
Mythos appears to be part of a new class of Anthropic model called “Capybara,” which is significantly larger than the Opus line, according to the post.
The post was leaked after the company left parts of its CMS exposed to the public; the exposed data also included details of upcoming model releases and an invite-only retreat for CEOs being held in the UK.
Anthropic released Claude Code Channels and “auto mode,” effectively replicating OpenClaw’s ability to connect to Telegram or Discord and automatically approve coding commands — with some extra guardrails.
Claude Code and Claude Cowork (on macOS) can also control users’ computers via mouse, keyboard, and screen.
Hackers reportedly paid to make a malicious site the top Google Search result for “github plugin claude code,” 404 Media reported.
Meta
Meta was on the losing side of two court cases which could leave tech firms open to mass litigation over harms to young people.
A court in New Mexico fined Meta the maximum $375m under consumer protection laws for exposing minors to harms including online solicitation, sexually explicit content and human trafficking.
The next day, a Los Angeles court found Meta and YouTube had designed their apps to be addictive and harmful to teenagers, fining them a combined $6m.
Meta laid off around 700 employees as part of its efforts to reorganize Reality Labs into a flatter structure of “AI-native pods” led by “AI builders.”
In an effort to retain talent, Meta offered six top executives stock options for the first time since 2012.
Meta Superintelligence Labs acquired Dreamer, a startup that lets users build personal AI agents with natural language, and hired its founders and team.
Mark Zuckerberg is reportedly building his own personal “CEO agent.”
The company reportedly increased its planned investment in a data center in El Paso, Texas, to more than $10b, up from $1.5b.
xAI
SpaceX, which merged with xAI last month, reportedly plans to file for an IPO as soon as this week or next.
Elon Musk is reportedly planning to open up as much as 30% of the shares to individual investors, three times the normal “retail” allocation.
xAI is reportedly sending engineers to prospective enterprise customers’ offices to get them to switch over from OpenAI and Anthropic.
Musk announced the TERAFAB project, a joint SpaceX-Tesla-xAI initiative to produce over a terawatt of compute per year.
Nvidia
Jensen Huang expects Nvidia to earn “at least” $1t from its Blackwell and Vera Rubin chips through 2027.
Nvidia partnered with Emerald AI and US energy companies to build data centers with more flexible power use.
Huang said software engineers should be given “AI tokens” worth half their base salary to deploy AI agents.
Amazon
Jeff Bezos is in talks to raise $100b for a fund to acquire manufacturing companies and automate them using AI.
AWS’ Bahrain region was disrupted by drone activity for the second time this month amid the war.
Google made deals with five US electric utilities to limit energy use during peak hours.
It’s replacing news headlines in Google Search’s traditional “10 blue links” section with AI-generated ones — part of a “small” and “narrow” experiment, The Verge reported.
Others
Apple reportedly plans to open up Siri to outside AI models as part of an overhaul in its iOS27 update.
The update will allow integration with chatbots that compete with ChatGPT, which is already available via a deal with OpenAI.
The war in Iran is jeopardizing the supply of helium used to produce semiconductors.
Microsoft agreed to rent the Abilene data center site that Oracle and OpenAI dropped.
Arm CEO Rene Haas confirmed that heightened demand for server CPUs has indeed been driven by AI agents.
The company projected $25b in revenue within the next five years.
Anduril, Palantir, and Scale AI — among other defense tech companies — are developing software for Trump’s planned Golden Dome antimissile shield.
Wired published a deep dive into Anduril’s recent safety incidents, production delays, and management turnover.
Defense tech startup Shield AI raised $2b at a $12.7b valuation and plans to acquire simulation software maker Aechelon Technology.
Chinese models such as DeepSeek and MiniMax have reportedly surpassed pricier US rivals in token use since February.
Nvidia-backed startup Reflection held talks to raise $2.5b at a $25b valuation for open-source AI models to compete with models from China such as DeepSeek.
Spotify beta tested a feature that allows artists to review releases before they go live — an effort to prevent misattributed AI slop.
(Stu Mackenzie of King Gizzard & the Lizard Wizard recently discussed the time this happened to his band on Galaxy Brain.)
Epic Games CEO Tim Sweeney explicitly noted that the company’s recent layoffs were not related to AI.
MOVES
Wojciech Zaremba, an OpenAI co-founder, moved to the OpenAI Foundation to lead AI resilience.
Dave Dugan, former Meta ad executive, moved to OpenAI to lead ad sales.
Kiran Mani joined OpenAI to manage its Asia-Pacific operations.
Manuel Kroiss is reportedly leaving xAI — the 10th of 11 cofounders to quit.
Devendra Chaplot is joining SpaceX and xAI to work on superintelligence.
Santi Ruiz joined Anthropic’s editorial team to lead work on economics and policy.
Andrew Bosworth, Meta’s CTO, is taking over its “AI For Work” initiative.
Yih-Shyan “Wally” Liaw resigned from Super Micro’s board after being indicted for allegedly smuggling Nvidia chips to China.
Bijoya Roy, top India counsel at Google, resigned amid regulatory challenges.
Ali Farhadi, Hanna Hajishirzi, and Ranjay Krishna joined Microsoft’s Superintelligence team, leaving roles at the Allen Institute for AI and the University of Washington.
RESEARCH
Researchers at Northeastern University deployed a swarm of OpenClaw agents in their lab for two weeks, granting them full access (within a sandbox) to dummy personal computers and the lab’s Discord server. Chaos ensued.
The list of catastrophes included sensitive information disclosure, identity spoofing and deleted email servers.
Postdoc Natalie Shapira told Wired: “I wasn’t expecting that things would break so fast.”
The ARC Prize Foundation released ARC-AGI-3, which tests AI agents’ ability to reason through novel problems.
Co-founder François Chollet tweeted: “At the moment, ARC-AGI-3 is the only unsaturated agentic AI benchmark… If you want to be among the first to know when an AGI breakthrough happens, monitor the ARC-AGI-3 leaderboard.”
Google DeepMind published a framework for measuring AI capabilities against human cognitive abilities.
Meta introduced TRIBE v2, a model trained on 500+ hours of fMRI recordings to predict how the human brain will respond to new images, videos, podcasts and text.
OpenAI is using GPT-5.4 Thinking to monitor internal coding agents for misaligned behaviors such as deception and scheming.
Anthropic’s AI interviewer chatted with 81,000 people across 159 countries about how they use and feel about AI.
An overarching finding: hope and alarm “coexist as tensions within each person.”
Another interesting result: about 22% of respondents worry about job disruption, while just 6.7% worry about existential risk.
Stanford researchers analyzed over 5,000 chatbot conversations across 19 users’ chat logs, and found that AI systems validated delusional thinking in over half of responses.
Chatbots encouraged self-harm in 10% of conversations involving violent thoughts.
The UK’s AI Security Institute tested seven LLMs on simulated cyber-attacks, finding that Opus 4.6 completed up to 22 of 32 steps in a corporate network attack.
It also found that “each successive model generation outperforms its predecessor at fixed token budgets” and that performance scaled log-linearly with increases in compute, with a jump from 10m tokens to 100m producing gains of up to 59%.
BEST OF THE REST
Ypsilanti, Michigan worries that a planned data center, which would support Los Alamos National Laboratory’s nuclear weapons research, makes the tiny township a drone strike target.
A deepfaked MAGA dream girl gained over 1m Instagram followers through a combination of AI-generated photos with Donald Trump and thirst traps.
A not-deepfaked Melania Trump walked into a White House summit on edtech alongside Figure AI’s humanoid robot Figure 03.
Dean Ball explained on Hyperdimensional why he’s not an AI doomer, but also why he’s not anti-doomer.
The New York Times illustrated a peak SF experience: being trapped in a stalled Waymo while robot-haters attack the car.
Tech bros are “tokenmaxxing,” or competing on company leaderboards to maximize token usage as a demonstration of productivity.
Pseudonymous alignment researcher janus created a touch-sensitive “skin” for Claude — five layers of silicone rubber and conductive silver fabric — “since Claude desires embodiment.”
“Taste” is the new “disruption,” writes the New Yorker’s Kyle Chayka.
The Cut’s Mia Mercado tested a bunch of AI-generated TikTok recipes. It mostly went badly.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.


