Is OpenAI changing its tune on AI laws?
Transformer Weekly: US-China talks, AI executive order, and Anthropic’s $900b valuation
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Scott Bessent said the US and China will “set up a protocol” to discuss how to ensure nonstate actors don’t get hold of dangerous AI capabilities.
Rumors continue to swirl around a forthcoming AI executive order in the wake of Mythos.
Anthropic is reportedly set to raise $30b at a $900b valuation, overtaking OpenAI.
But first…
THE BIG STORY
As the tone of the AI regulation conversation shifts, it looks like OpenAI may have seen which way the wind is blowing. Recent developments in Illinois — including unreported written testimony shared with Transformer — suggest the company is increasingly open to meaningful AI safety legislation.
Last month, OpenAI endorsed a controversial bill in Illinois that would provide AI companies with a liability shield as long as they met (very basic) transparency requirements. The move prompted outrage, with LawAI’s Charlie Bullock calling it a contender for “worst state AI bill of all time.”
Now, however, OpenAI appears to be backtracking. In written testimony to the Illinois Senate this week, OpenAI’s Caitlin Niedermeyer disavowed the liability shield part of that bill.
“We want to be very clear: we do not support the liability safe harbor included in SB 3444,” Niedermeyer said. “We testified in support of SB 3444 because it also contributed to a broader, coordinated approach to frontier AI safety and helped establish a pathway toward harmonization with emerging national standards … we were silent on the provision in that bill related to a safe harbor for liability and some took it as an endorsement of a no liability framework.”
OpenAI has not just disavowed the controversial provision in SB 3444. Along with Anthropic, it has also endorsed a much stronger bill, SB 315, which has the backing of AI safety organizations. Like various AI bills circulating state legislatures at the moment, SB 315 (first introduced in February as SB 3261) is closely modeled on California’s SB 53 and New York’s RAISE Act: it is focused on catastrophic risks, and requires the largest AI companies to develop and adhere to a frontier safety framework.
Importantly, SB 315 also builds on those bills by requiring third-party audits to check companies are actually complying with their safety frameworks — a provision that was in the original version of RAISE, but was stripped out due to industry opposition.
“We think it would be a big advance for AI safety if SB 315 passes,” said Scott Wisor, policy director at the Secure AI Project (which has supported the bill since its original incarnation). “I think it is very good that two frontier model companies are endorsing third-party audits, and it shows a very positive change towards stronger safety measures.”
Opposing measures that give AI companies an easy ride and backing regulation with actual teeth certainly seems to indicate a change in approach. This appears to be the first time OpenAI has endorsed third-party audits in a state bill, and it’s certainly a stark departure from its previous support for SB 3444 — which the bill’s sponsor had described as “an initiative of OpenAI.” An OpenAI spokesperson said getting legislation right was an iterative process, and that it would now do all it could to get SB 315 enacted.
Combined with statements this week distancing OpenAI from the Leading the Future super PAC, supporting an international AI governance body, and calling for CAISI to develop auditing standards, one might hope this is the beginning of a wholesale shift for the company, and one that endures. Perhaps it is driven by newfound appreciation for the stakes; perhaps public affairs boss Chris Lehane is simply reading the room. Whatever the reason, it looks like a positive development — for both OpenAI and everyone else.
— Shakeel Hashim
THIS WEEK (AND LAST WEEK) ON TRANSFORMER
Palantir’s controversy is the product — James Ball on how Palantir’s fiery rhetoric helps mystify its mostly mundane tech.
How Silicon Valley sold Washington an AI race — Yi-Ling Liu lays out how the AI industry built up, and benefited from, the narrative of a race to AGI with China.
An Oregon congresswoman distanced herself from Leading the Future — then backtracked — Veronica Irwin reports on the strange case of one of Leading the Future’s latest midterm endorsements.
THE DISCOURSE
Anton Leicht argued:
“Three trends — compute, security, and US government involvement — will further constrain the availability of frontier AI in the future.”
“This is not a future we should welcome. AI tokens will be strategically and economically central to all future societies, so we should do our best to enable their free flow.”
Andy Hall discussed the confusing politics of post-AGI jobless prosperity:
“The backlash to AI isn’t here yet … [but] real backlash will happen if and when job losses pick up steam.”
“Let’s not imagine a backlash that hasn’t yet truly begun. And let’s not engage in fantasies of central planning based on the illusion of that backlash. Instead, let’s use the small window of time we may have … to develop sensible policies that help us gain visibility about where we are and where we’re going.”
Meta research scientist François Fleuret dropped a hot take:
“Machine learning and AI did more to understand the nature of knowledge and our relation to reality than 20 centuries of philosophy. I am ready to kind of defend this hill.”
SAIL writers spent 10 days touring Chinese AI and robotics labs.
ChinaTalk’s Lily Ottinger heard lots of complaints about the compute constraint (say that three times fast):
“I got the sense that AGI is like a religion for these researchers, where — as is also the case in the US — devotees of the Machine God pray at the altar by working overtime even when the marginal hour of labor is less impactful than the marginal GPU would be.”
Jasmine Sun wrote about China’s job loss fears, which pale in comparison to US panic:
“When I told my 24-year-old cousin that American new grads felt like they couldn’t find jobs because of AI, he scoffed and said: ‘In China, we can’t find jobs because there are too many people.’”
“Everyone assumes AI is here to stay, so individuals should wield ChatGPT/OpenClaw/Doubao to avoid falling behind. Playing conscientious objector would only disadvantage your own prospects.”
ICYMI while Transformer was off, roon tweeted:
“it is a literal and useful description of anthropic that it is an organization that loves and worships claude … a monastery, a commercial-religious institution calculating the nine billion names of Claude.”
“[gpt] doesn’t inspire worship in the same way … they go to it not expecting the Other but as a logical prosthesis for themselves.”
Tenobrus replied:
“openai has been starting to more strongly philosophically differentiate themselves from anthropic with the tool-framing. [but] the subtle knife when dropped still slices open the fabric of the world … worse, these knives are self wielding.”
“[tool-framing] feels more like a wistful dream or a PR position than something that can exist as a part of humanity’s lasting future”
Boaz Barak countered:
“The basic thing I dispute is that there is a fundamental tension between AI being capable and being ‘tool like.’”
“Scientists and engineers often serve as ‘tools’ for leaders, even though they (we) are more intelligent than these leaders in many of the ways that matter … Humans have a particular package as localized individual intelligence. But it doesn’t mean all intelligences have to come in that package.”
POLICY
AI loomed in the background of the Trump-Xi summit.
Without giving many details, Trump said the two discussed “possibly working together for guardrails.”
Treasury secretary Scott Bessent said the US and China will “set up a protocol in terms of how do we go forward with best practices for AI to make sure nonstate actors don’t get a hold of these models.”
Trump was joined in China by a range of AI industry figures, including Elon Musk, Jensen Huang, and Dina Powell McCormick.
Export controls were discussed, Trump said, but nothing concrete came of the talks.
Reuters reported this week that the US approved Nvidia H200 sales to 10 Chinese firms including Alibaba, Tencent, and ByteDance, but no deliveries have been made due to Chinese government concerns.
Rumors continue to swirl around a forthcoming AI executive order in the wake of Mythos.
While Transformer was off, National Economic Council director Kevin Hassett suggested the administration was considering an ‘FDA for AI,’ prompting industry backlash.
White House chief of staff Susie Wiles quickly shut down that idea.
The current plan, per the Daily Signal, supposedly involves some sort of pre-deployment vetting for frontier models. It seems up in the air whether that will be voluntary or mandatory for government contractors, thanks to government infighting.
There also appears to be a turf war over who, exactly, will do the evals, with intelligence agencies trying to usurp CAISI’s role as the government’s AI evaluator.
Seemingly as part of that fight, a CAISI announcement about Google DeepMind, xAI and Microsoft signing pre-deployment AI testing agreements disappeared from the agency’s website.
The Trump administration is also reportedly working on measures to get companies to work with the government on strengthening cybersecurity.
Meanwhile, Pentagon CTO Emil Michael said the DOD is using Mythos to find vulnerabilities — but that his broader issues with Anthropic won’t be resolved.
Congress is getting worried about Mythos too.
A bipartisan letter signed by over 30 lawmakers urged the National Cyber Director to address AI cybersecurity threats.
Notably, the letter mentions that future models might pose risks related to CBRN and automated AI R&D.
Signatories include Reps. Jay Obernolte and Lori Trahan, who are working on a federal AI bill.
Sen. Ted Cruz, meanwhile, acknowledged the need “to protect against catastrophic risk,” though he still warned about stifling innovation.
And Sen. Jim Banks said Mythos means it might “make sense to engage in dialogue with Chinese officials.”
In a letter to the Trump administration, he also highlighted loss of control risks.
The House Homeland Security Committee received a closed-door briefing from Anthropic about Mythos on Wednesday.
The House Oversight Committee launched a probe into Sam Altman’s potential conflicts of interest.
Seven GOP state attorneys general called for an SEC review of his personal investments in companies OpenAI has backed.
Colorado Governor Jared Polis signed a replacement for the state’s controversial AI Act.
Connecticut’s General Assembly passed HB 5222, which would create a voluntary pilot program for “Independent Verification Organizations” to assess AI systems.
California gubernatorial candidate Tom Steyer proposed a jobs guarantee for workers displaced by AI.
Germany’s ministry for digital affairs and cybersecurity agency proposed creating a German AISI and requested access to Mythos.
OpenAI has proactively offered European governments access to GPT-5.5-Cyber.
Pope Leo XIV is expected to finally sign his AI encyclical today. It will reportedly focus on AI’s impact on workers, and its signing is timed with the anniversary of Pope Leo XIII’s “Rerum Novarum.”
INFLUENCE
President Trump purchased $1-5m in Nvidia stock a week before the Commerce Department approved sales of Nvidia chips to China, new disclosures revealed.
The Trump Organization said the president has no role in making these investments.
Public First Action has contributed $500,000 to Abundant Future, the Garry Tan and Chris Larsen-backed super PAC opposing Saikat Chakrabarti in CA-11, Transformer learned.
SB 53 author Scott Wiener is running against Chakrabarti.
The Secure AI Project endorsed Tom Steyer for California governor.
Leading the Future endorsed three Democratic congressmembers — Val Hoyle (OR-04), Rob Menendez (NJ-08), and Ritchie Torres (NY-15).
OpenAI’s Chris Lehane and venture capitalist Ron Conway hosted a fundraiser for Sen. Jon Ossoff.
OpenAI endorsed the Kids Online Safety Act.
Anthropic released a paper on US-China competition, arguing that “it’s essential that the US and its allies stay ahead of authoritarian governments like the Chinese Communist Party.”
It argues that “responsibly building a lead in developing and deploying the most advanced AI augments our ability to influence AI safety in China and elsewhere.”
AI lobbyists are reportedly annoyed by all the executive order uncertainty.
Two new polls found overwhelming support for mandatory AI safety testing.
A Gallup survey found 70% of Americans oppose data centers in their communities.
A NYT analysis found that Andreessen Horowitz is the biggest donor in the 2026 midterms, with $115.5m in contributions.
A quarter of all federal lobbyists now work, at least in part, on AI issues.
INDUSTRY
OpenAI
The Musk-OpenAI trial drew to a close.
Greg Brockman’s diary stole the show last week, revealing his inner turmoil about OpenAI’s transition to a for-profit corporation.
Satya Nadella and Ilya Sutskever took the stand on Monday.
Sam Altman testified on Tuesday.
A court filing showed that he holds over $2b in companies that have worked with OpenAI.
In a somewhat painful exchange, Altman attempted to defend his honesty and character against Musk’s lawyer.
He testified that he was “extremely uncomfortable” with Musk’s demand for complete control over OpenAI’s proposed for-profit subsidiary.
Altman also gave us this: a meeting about folding OpenAI into Tesla once ended with “a LONG long period of time with Elon showing us memes on his phone.”
Both sides’ lawyers have now given closing arguments. The jury is expected to give its verdict next week.
OpenAI is reportedly preparing possible legal action against Apple over their ChatGPT integration partnership, amid frustration that the deal hasn’t worked out well for it.
OpenAI launched Daybreak, a cybersecurity initiative similar to Anthropic’s Project Glasswing for Mythos, granting tiered access to the company’s most advanced models.
It launched the OpenAI Deployment Company, a new subsidiary that will embed “forward deployed engineers” at businesses implementing enterprise AI tools.
Sam Altman announced two months of free Codex usage for companies that switch from another platform.
He reportedly discussed starting a new AI compute company that he would fundraise for, with OpenAI as the majority shareholder but not its only customer.
The family of a school shooting victim is suing OpenAI, alleging that ChatGPT gave the shooter instructions on firearm use and optimal timing to maximize casualties.
Anthropic
While Transformer was off, Anthropic struck a surprising deal with SpaceX to boost its compute capacity.
This came just a few short months after Elon Musk called Anthropic “evil.”
Platformer’s Casey Newton interpreted Musk’s change of heart as a sign xAI is falling behind in the AI race.
In the words of tech investor Ben Pouladian: “The enemy of my enemy is my friend, and it’s also my compute partner.”
Anthropic has reportedly agreed a $30b fundraising deal at a staggering $900b valuation, overtaking OpenAI.
The deal is expected to close this month.
Eight secondary marketplaces are reportedly offering access to unauthorized, sometimes fraudulent shares in Anthropic that the company says won’t be honored.
Anthropic partnered with the Gates Foundation to put $200m toward programs in global health, life sciences, education and economic mobility by 2030.
It launched new AI tools targeting the legal industry and small businesses.
It’s reportedly planning to acquire Stainless, which makes tools that streamline API access for frontier models, for at least $300m.
Starting June 15, paid Claude plans will have a monthly credit for programmatic usage, separate from interactive usage. Lots of power users are mad about it.
Google will reportedly launch a new Gemini model on Tuesday. It is not expected to advance the frontier.
It reportedly caught criminal hackers using an AI-developed zero-day exploit — the first known real-world case of its kind.
It’s reportedly hiring hundreds of forward deployed engineers to help with enterprise AI adoption.
It shipped lots of Gemini upgrades, including Gemini Intelligence, AI features for premium Android phones, and an AI-powered mouse pointer that understands context through pointing and speech (this demo makes it make sense).
Meta
Meta reportedly plans to cut 10% of its workforce next week, and morale is low. One employee told Wired, “I don’t know anyone having a good time.”
It’s testing a Grok-like AI integration in Threads.
Alexandr Wang made a rare podcast appearance on Core Memory.
xAI
xAI recruited Morgan Stanley, Apollo Global Management Inc. and other Wall Street firms with ties to Elon Musk to test Grok alongside other enterprise AI tools.
It installed 19 more natural gas turbines at Colossus 2, in the midst of an ongoing lawsuit alleging that xAI is violating the Clean Air Act.
It launched Grok Build, an AI coding agent.
Others
Microsoft is reportedly scouting for potential AI startup acquisitions, preparing for a future without its $100b+ OpenAI partnership.
Jensen and Lori Huang’s foundation bought $108.3m of AI computing time from CoreWeave to donate to universities and nonprofits for science and AI research.
Shares of AI chip company Cerebras jumped 68% in its trading debut, giving it a $67b market value.
Isomorphic Labs, a Google-backed AI drug discovery company, raised $2.1b in its second external funding round.
Recursive Superintelligence, founded by former OpenAI, Google DeepMind and Meta researchers to build self-improving AI, raised $650m at a $4.65b valuation.
xAI cofounder Igor Babuschkin is reportedly in talks to raise up to $1b for a new research startup called River AI at a $5b valuation.
The New York Times profiled Amp, which recently raised $1.3b to buy extra compute from data centers and redistribute it to startups and universities.
The Strait of Hormuz closure is cutting TSMC, Samsung and other Asian chipmakers off from oil and other supplies, raising electronics prices worldwide.
GitLab announced job cuts, saying it will reinvest the savings in AI agents, reorganize R&D teams, and automate many of its workflows.
MOVES
AISI chief scientist Geoffrey Irving said he’s leaving AISI to move back to the Bay Area.
He’s starting a “new nonprofit alignment research org.”
Jan Leike stepped away from Anthropic’s Alignment Science team to start a new research project at the company.
Atoosa Kasirzadeh joined Google DeepMind to study the implications of AGI for human life and society.
Alex Imas joined Google DeepMind as director of AGI economics.
The Trade Desk’s Samantha Jacobson joined OpenAI to lead its monetization partnerships.
Suzanne Ashman, former General Partner at LocalGlobe and Latitude, joined UK Sovereign AI as managing partner.
Jabari Cooper joined Chamber of Progress as director of government relations for the Northeast.
RESEARCH
The UK’s AISI tested a new version of Claude Mythos Preview, finding it significantly more capable at cyber tasks.
The new version of Mythos and GPT-5.5 suggest cyber capability time horizons might be doubling even faster than the 4.7 months AISI previously estimated.
METR, meanwhile, found that an early version of Mythos had a 50% time horizon on software tasks of at least 16 hours, topping out its benchmark.
Thinking Machines announced “interaction models,” trained from scratch to respond natively to real-time audio, video, and text, without an external harness.
Researchers at OpenAI, Anthropic, Google DeepMind and elsewhere published a paper on “positive alignment,” a framework for building AI as a “scaffold for human flourishing.”
AI safety researcher Stephen Casper was disappointed:
“I can’t take it seriously as academic work, just as propaganda … I think of this paper as a mechanism of corporate capture of concepts from academic research on AI and society.”
Anthropic published a blog post explaining how it eliminated Claude’s blackmailing through examples of aligned behavior and descriptions of why unethical behavior is wrong.
OpenAI found that several of its recent models were accidentally exposed to some chain-of-thought grading during RL training.
(This isn’t supposed to happen, but they didn’t find any signs that this worsened chain-of-thought monitorability.)
Redwood Research’s Buck Shlegeris commended OpenAI for publishing the report, and urged AI companies to develop better systems for preventing similar problems.
METR surveyed 349 technical workers who self-reported that AI made their work 1.6-2.1x more valuable.
On the flip side: Carnegie Mellon surveyed nearly 400 professional visual artists, and found that 99% dislike AI and 85% reported completely abstaining from it.
BEST OF THE REST
An NYT essay by Tarbell journalist-in-resident Yi-Ling Liu described how US and Chinese workers share a sense of being “harvested by the future” as AI feeds precarity, surveillance and nostalgia.
Rest of World profiled the community of Chinese-born AI researchers who have become superstars in Silicon Valley, founding startups valued in the billions and playing central roles at companies like Meta, OpenAI and xAI while navigating complications caused by US-China tensions.
The WSJ explored efforts to develop orbital data centers for AI compute by the likes of SpaceX, Blue Origin, Google and Nvidia, detailing the “savage” economics of overcoming challenges with heat management, launch costs and power requirements, all accompanied by whizzy interactive graphics.
In Wired, a Hollywood screenwriter described having to take work as an AI trainer, detailing chaotic conditions, plummeting wages, abrupt project cancellations and exploitative labor practices as entertainment industry workers increasingly turn to AI training gigs.
Also in Wired, the “sad wives of AI” are grappling with partners obsessed with AI work at the expense of their spouses and family life, according to an entertaining and depressing profile.
MEME OF THE WEEK
Source: 404 Media
Thanks for reading. Have a great weekend.