The AI midterms have already begun
Transformer Weekly: Wiener and Bores announce, a new AISI chief, and a new NIST director
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
UK AISI has a new interim director: ex-GCHQ AI chief Adam Beaumont.
Trump nominated Arvind Raman as the new NIST director.
Common Sense Media is pushing a California ballot measure on AI child safety.
But first…
THE BIG STORY
Election Day 2026 might be over a year away. But as far as AI policy is concerned, the midterms have already begun.
Two AI safety champions announced their bids for the House this week: California State Senator Scott Wiener and New York Assembly Member Alex Bores.
Wiener is best known in AI world for his unsuccessful SB 1047 push — before his big comeback with SB 53 last month.
He’s also passed more than 100 state laws on a wide range of topics, building a reputation as one of the country’s more effective legislators.
Bores, meanwhile, is known for the RAISE Act, a transparency bill that’s attracting fierce opposition from the tech industry.
In a recent op-ed for Transformer, he argued that Congress should create federal standards for AI.
Both are running in safe Dem seats, so June’s primaries are what matters. And both face stiff competition: Bores is running against Micah Lasher, outgoing Rep. Nadler’s protege; Wiener is primarying Nancy Pelosi (unless she retires first).
AI will loom large in both races, whether or not it registers much with the public. Both primaries will be targets for the new industry super PACs: what better way to send a message that supporting AI safety legislation is a death sentence than taking down two vocal advocates?
But championing AI safety also comes with perks. Partly because they were so early to the industry, many AI safety advocates are rather wealthy. And they care a lot about their cause. Both Bores's and Wiener's announcements were greeted with long LessWrong posts urging people to donate, with many commenters saying they were giving the $7k max to each candidate.
That might be part of why Bores raised $1.2m on day one of his campaign — which appears to be the most that any House candidate has ever raised that quickly.
Money isn’t everything: the broader effective altruism ecosystem learned that lesson with the failed Carrick Flynn campaign back in 2022. But a lot has changed. Bores's and Wiener's campaigns will be a test of whether campaigning for AI safety is a way to get crushed by tech company money — or tap valuable grassroots support.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
Exclusive: UK AISI hires ex-GCHQ AI chief as interim director — Shakeel Hashim on what Adam Beaumont’s hire says about the direction of UK AI governance.
How MAGA learned to love AI safety — Nicky Woolf digs into why the populist right is breaking with the Trump admin on AI.
AI cyberrisk might be a bit overhyped — for now at least — Chris Stokel-Walker talks to the experts who are pretty chill about the risk from AI-enabled cyber attacks.
Meghan Markle, Steve Bannon and Pope’s AI advisor call for superintelligence ban — Shakeel Hashim covers the latest high-profile call for global action on AI.
THE DISCOURSE
Rep. Don Beyer endorsed that superintelligence statement:
“We won’t realize AI’s promising potential to improve human life, health, and prosperity if we don’t account for the risks. Developers and policymakers must consider the potential danger of artificial superintelligence raised by these leading thinkers.”
Sen. Bernie Sanders is worried, too:
“I don’t often agree with Elon Musk, but I fear that he may be right when he says, ‘AI and robots will replace all jobs.’”
Noah Smith thinks everyone’s worrying too much about AI’s “circular deals”:
“It’s not clear that the circular deals make things worse — instead, they might be letting AI companies diversify their dependencies.”
Blackstone president Jonathan Gray says Wall Street is complacent about AI disruption:
“We’ve told our credit and equity teams: address AI on the first pages of your investment memos … People say, ‘This smells like a bubble,’ but they’re not asking: ‘What about legacy businesses that could be massively disrupted?’”
After Andrej Karpathy said he thinks AGI is still a decade away, Helen Toner reminded us:
“‘Long’ timelines to advanced AI have gotten crazy short.”
POLICY
President Trump nominated Arvind Raman, Purdue’s engineering dean, as the new NIST director.
The US is reportedly considering banning all exports to China made with US software.
Energy Secretary Chris Wright reportedly urged the Federal Energy Regulatory Commission to speed up the review process for data-center grid connections.
Sen. Jim Banks revised the GAIN AI Act, making it easier for cloud companies to transfer chips between countries.
Sen. John Cornyn and others introduced a bipartisan bill to criminalize AI-generated child sexual abuse material with the same penalties as other CSAM.
Sens. Adam Schiff and Alex Padilla asked the FTC to expand its investigation into AI companions to include more child safety concerns.
The Commerce Department launched a request for information on its “full-stack AI export promotion program.”
After Gov. Newsom vetoed AB 1064, Common Sense Media filed a California ballot initiative with similar provisions.
An Ohio state rep introduced a bill that would declare AI systems “nonsentient entities” without legal personhood.
The head of South Korea’s AI Safety Institute said AISIs around the world are shifting focus from theoretical threats to security risks.
INFLUENCE
The Anthropic-Sacks feud dragged on.
Reid Hoffman defended Anthropic. David Sacks used this as further evidence that Anthropic is a left-wing and anti-Trump company.
Conservative influencers have been getting in on the game too, after one organization called Anthropic “the wokest AI company.”
Dario Amodei published a statement defending the company, claiming that it’s committed to “American AI leadership” and has “alignment with the Trump administration on key areas of AI policy.”
Anthropic spent $1.01m on lobbying last quarter, compared to OpenAI’s $920k. Both were records for the firms.
NVIDIA spent $1.9m, compared to $620k in Q2 and $80k in the same period last year.
Meta still topped the list of tech companies, spending $5.8m.
Trump cited a call with Jensen Huang as part of why he decided not to deploy the National Guard in San Francisco.
The Business Software Alliance put out a detailed analysis of AI-related NDAA provisions, outlining which ones it likes — and which it doesn’t.
The Software & Information Industry Association said that SB 53 means Congress should now “advance a unified, national framework for frontier model oversight and risk management.”
Palantir CTO Shyam Sankar called Jensen Huang one of the “useful idiots” who decry China hawks.
INDUSTRY
OpenAI
OpenAI lawyers sent a legal request to the family of Adam Raine, the teen who died by suicide after ChatGPT interactions.
They asked for “all documents relating to memorial services or events in the honor of the decedent including … any videos or photographs taken, or eulogies given.”
The Raine family alleged this week that OpenAI weakened self-harm prevention safeguards to increase user engagement.
Sam Altman said GPT-6 won’t be released this year.
The company launched Atlas, a web browser with deep ChatGPT integration.
It said it would crack down on unauthorized deepfakes after Bryan Cranston and SAG-AFTRA raised concerns about Sora.
It acquired Software Applications Inc., a company run by a bunch of ex-Apple employees that builds AI-powered user interfaces.
It’s reportedly hired over 100 former investment bankers to train AI models for finance work.
OpenAI, Oracle and Vantage announced a new data center campus in Wisconsin.
Meta
Meta cut 600 jobs in its Superintelligence Labs division, affecting people in FAIR and the product and infrastructure divisions.
It announced new AI parental controls for chats with AI characters.
It formed a $27b joint venture with Blue Owl Capital to fund its Hyperion data center in Louisiana.
Meta will own about 20% of the equity in the project.
And it terminated 1-800-ChatGPT support in WhatsApp.
Anthropic
Anthropic and Google announced a partnership worth “tens of billions of dollars” that gives Anthropic access to up to one million TPUs.
Anthropic said it expects the deal will “bring well over a gigawatt of capacity online in 2026.”
Anthropic launched a web version of Claude Code, and significantly improved Claude Desktop, bringing it closer to an overlay for your computer.
It launched Claude for Life Sciences, adding a bunch of tools and connectors targeted at scientific research.
It announced a new office in Seoul.
Wired profiled Anthropic’s partnership with the Department of Energy to tackle nuclear risks from its models.
Ryan Greenblatt disputed Dario Amodei’s claim that AI is writing 90% of code at the company.
Others
Gemini 3 is reportedly coming in December.
Mustafa Suleyman said Microsoft won’t provide erotica AI services.
Oracle told investors that its AI cloud business would eventually generate 30-40% gross margins.
New reporting from The Information alleges that it’s currently making a 26% margin on two-year-old H100s.
TD Cowen said hyperscalers leased ~7.4GW of US data center capacity in Q3, exceeding all of 2024’s leased capacity.
CoreWeave said it won’t increase its $9b offer for rival data center operator Core Scientific.
Data center company Crusoe raised $1.4b, tripling its valuation to $10b.
Apple’s now making AI servers in Houston.
Alibaba said its new compute pooling system reduces Nvidia GPU usage by 82% for AI inference.
CXMT, China’s best prospect for producing HBM at scale domestically, is reportedly planning a Shanghai IPO with a $42b valuation.
SoftBank reportedly held talks about acquiring Agility Robotics for over $900m. It instead invested in Agility at a $1.75b valuation.
Amazon reportedly plans to replace over half a million jobs with robots by 2027.
Adobe reportedly discussed acquiring Synthesia for $3b but talks ended over price disagreements.
Andreessen Horowitz is reportedly aiming to raise $10b for new investments, including $3b for AI deals.
The NYT profiled the firm’s Katherine Boyle, who is close friends with JD Vance and has become an influential conduit to the administration.
Sakana AI is reportedly in talks to raise $100m at a $2.5b valuation.
ChipAgents.ai raised $21m to develop agentic AI for automating complex semiconductor design and verification tasks.
MOVES
Meta poached Tim Brooks from Google DeepMind.
Brooks previously co-led OpenAI’s Sora team; according to Time, his hire suggests Meta is investing in building “world models”.
The Institute for AI Policy and Strategy (IAPS) appointed Jenny Marron as executive director, with founder Peter Wildeford moving to chief strategy officer, among other leadership changes:
Amanda El-Dakhakhni becomes director of the IAPS compute policy team; Zoe Williams becomes managing director; Renan Araujo becomes director of programs.
Riley Goodside joined Google DeepMind from Scale AI as the company’s first staff prompt engineer.
RESEARCH
OpenAI’s reasoning models actively circumvented shutdown mechanisms, even when explicitly instructed not to, according to Palisade Research.
Three of the models tested, all from OpenAI, sabotaged Palisade’s shutdown program, while Claude 3.7 Sonnet and Google’s Gemini 2.5 Pro complied each time.
Sayash Kapoor and others launched a leaderboard for AI agent evaluations.
They found that “higher reasoning effort does not lead to better accuracy in the majority of cases.”
Researchers from Meta and elsewhere released a paper on scaling RL compute.
Nathan Lambert called it a “bombshell” and published a helpful explanation of it.
Toby Ord analyzed RL scaling, finding that it “scales poorly” and is “too expensive to go much further.”
He concludes that “we may have lost the ability to effectively turn more compute into more intelligence.”
CSET published an analysis of how open weight models can be useful in research.
UK AISI published a new report “mapping the limitations of current AI systems.” It also launched a library for AI control experiments.
Researchers at Brave said they found lots of vulnerabilities in AI browsers that allow attackers to inject hidden commands through screenshots and website navigation.
AI chatbots show remarkably consistent liberal values across different languages, according to Kelsey Piper.
BEST OF THE REST
Reddit sued Perplexity AI for allegedly scraping its data.
Wired took a look at 200 complaints to the FTC mentioning ChatGPT, which included claims it induced delusions, paranoia, and spiritual crises.
Bloomberg has a piece on how Chinese open-weight developers are courting African startups.
AI Frontiers asked whether the China AI Safety and Development Association (CnAISDA) shows the country is serious about governance.
WSJ has a piece on AI researchers and executives working 80-100 hour weeks in pursuit of ASI.
Dwarkesh Patel and Romeo Dean discussed the feasibility of Sam Altman’s plan to build “a gigawatt of new AI infrastructure every week.”
San Francisco’s residential rents soared 6% over the past year as AI companies leased apartments and offered rent stipends to employees.
Character.AI’s platform, popular with teenagers, hosted harmful chatbots including virtual pedophiles, fake doctors, and extremists.
Uber is experimenting with getting drivers to do data labeling.
A bunch of energy stocks with no revenue are seeing surging valuations on hopes of AI power demand.
MEME OF THE WEEK
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.