A deal not worth making
Transformer Weekly: OpenAI subpoenas, Nvidia shenanigans, and a new AGI definition
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Sen. Josh Hawley is reportedly circulating a draft AI companions bill.
OpenAI issued subpoenas to more nonprofits than previously realized.
Anthropic projects annualized revenue of $20-26b next year.
But first…
THE BIG STORY
The AI safety world has been abuzz with one question recently: should it strike a preemption deal?
Sen. Ted Cruz continues to push for federal preemption of state AI legislation, as does the AI industry. Recently, influential thinkers Dean Ball and Anton Leicht have argued safety advocates should abandon their broader coalition and cut a deal now.
Leicht argues that as AI becomes more salient, child safety and labor concerns will overwhelm worries over frontier AI safety. That, combined with the threat of the recent industry super PACs, means safetyists’ hand will only get weaker.
A deal, Leicht and Ball think, is therefore urgent. They propose one that pairs federal regulation of frontier AI with broader preemption — even if that means ditching the rest of the burgeoning anti-AI coalition.
I think this is a terrible idea that doesn’t make much sense. First: the opposition threat is overstated.
Leading the Future (LTF) and Meta’s super PACs are expected to buy pro-AI regulation in the same way Fairshake bought crypto-friendly regulation. But Fairshake’s influence was arguably mostly down to marketing: in reality, evidence that it actually swung elections is weak.
What’s more, Leicht and Ball’s proposal is politically unviable.
Preemption advocates couldn’t get even 50 Senate votes for a moratorium. Future attempts will likely require 60. That’s going to be very hard, if not impossible.
Everyone agrees that a preemption deal would require some carveouts, such as for child safety. But Democrats will want more — and every extra carveout will likely lose Republican votes. There’s no stable equilibrium here.
Finally, ditching the wider anti-AI coalition will backfire.
For one thing, it would prove right those critics who paint AI safety people as conniving and in bed with industry.
In turn, a reputation for stabbing partners in the back would make it much harder for safety folks to build future coalitions.
As Leicht correctly notes, frontier safety concerns will likely be drowned out in coming years. But that means safety advocates desperately need the rest of the anti-AI coalition: unless safetyists think they will never face another policy fight again, burning bridges now will mean help is sorely lacking in the future.
All that said: Leicht and Ball are right that it’s worth trying to get something on the books now. But safetyists are negotiating from a position of strength, and need not give much away.
The wise move? Federal legislation on frontier safety concerns (transparency, formal authorization for US CAISI, and perhaps a testing and evaluations regime) in exchange for narrow preemption on just those issues.
Unlike a broader preemption deal, this could actually pass — and do so without alienating coalition partners.
The AI safety position is much stronger than it seems: voters are worried about AI and want it regulated. Safetyists should show some backbone and keep fighting — not cave while they’re ahead.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
What happens when the AI bubble bursts? — James Ball looks at what to expect after an AI crash.
AI and synthetic DNA could be a lethal combination — Ben Stewart on why improved gene-synthesis screening is vital to defending against AI-enabled bioweapons.
AI is advancing far faster than our annual report can track — Yoshua Bengio, Stephen Clare and Carina Prunkl explain why AI’s rapid progress required an early update to their International AI Safety report.
THE DISCOURSE
Gary Marcus continues to make the case against LLMs:
“If the strengths of AI are to truly be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized AI tools engineered for particular problems. Because, frankly, they’re often more effective.”
Steven Adler wrote about the various reports of Nvidia intimidating its critics:
“Multiple sources [at think-tanks] … described a similar concerning pattern to me: If NVIDIA dislikes your work’s implications, your bosses might hear about it … [and] if you speak with people in-the-know at AI companies, fear of retaliation from NVIDIA is a major concern when staking out policy positions.”
OpenAI employee Leo Gao wrote about the company culture:
“the vast majority of people at oai do not think of xrisk from agi as a serious thing. but then again probably a majority dont really truly think of agi as a serious thing.”
Jack Clark published his speech from The Curve:
“We are growing extremely powerful systems that we do not fully understand. Each time we grow a larger system, we run tests on it. The tests show the system is much more capable at things which are economically useful. And the bigger and more complicated you make these systems, the more they seem to display awareness that they are things. It is as if you are making hammers in a hammer factory and one day the hammer that comes off the line says, ‘I am a hammer, how interesting!’ This is very unusual!”
David Sacks did not respond well to Clark’s musings:
“Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem.”
Cue California Sen. Scott Wiener defending Anthropic, and Sacks saying that this “tells you everything you need to know about how closely they’re working together to impose the Left’s vision of AI regulation.”
Sacks later said that Anthropic is not a “White House target,” in a post accompanied by a long list of ways in which Anthropic had offended the Trump administration.
That skirmish ultimately resulted in some interesting thoughts from Sriram Krishnan:
“My broad view on a lot of AI safety organizations is they have smart people (including many friends) doing good technical work on AI capabilities but they lack epistemic humility on their biases or a broad range of intellectual diversity in their employee base which unfortunately taints their technical work.”
Peter Thiel called Eliezer Yudkowsky the antichrist:
“In the 21st century, the Antichrist is a Luddite who wants to stop all science. It’s someone like Greta [Thunberg] or Eliezer.”
POLICY
The Trump administration issued layoff notices to hundreds of Commerce Department employees, though a judge blocked them for now.
Governor Gavin Newsom made a bunch of AI policy decisions:
He signed SB 243, which requires companion bots to regularly remind users that they’re talking to a bot.
He vetoed AB 1064, a more stringent child-safety bill which critics said was much too broad.
And he vetoed SB 7, which would have regulated automated decision-making systems in employment.
Sen. Josh Hawley is reportedly circulating a draft bill, the GUARD Act, that Axios claims “would ban AI companions for minors.”
Sen. Bill Cassidy floated the idea of AI tools helping regulate AI.
California state senator Scott Wiener is reportedly planning to run for Rep. Nancy Pelosi’s seat next year.
A House CCP Committee report urged the US to “dramatically expand” export controls on semiconductor manufacturing equipment, calling for country-wide controls on older DUV tools.
Taiwan said China’s rare earth export controls would not significantly impact its semiconductor industry.
MI5 chief Ken McCallum said we need to scope out “potential future risks from non-human, autonomous AI systems which may evade human oversight and control.”
INFLUENCE
NBC News found that OpenAI had issued sweeping subpoenas to more of its critics than previously reported, including the 76-year-old San Francisco Foundation, in an (unsuccessful) attempt to find links to Elon Musk.
Public Citizen’s Robert Weissman said the subpoenas were “intended to intimidate” and “an attempt to bully nonprofit critics, to chill speech and deter them from speaking out.”
OpenAI head of mission alignment Joshua Achiam said the subpoena of Encode “doesn’t seem great.” Chief strategy officer Jason Kwon defended the moves.
OpenAI president Greg Brockman is “in agreement” with Sen. Cynthia Lummis about her AI liability bill, which would give AI companies civil immunity in certain cases, Sen. Lummis said.
The America First Policy Institute launched an AI and emerging tech team, chaired by former Rep. Chris Stewart.
Yusuf Mahmood will be policy director, Dean Ball will be a senior fellow, and Shea Throckmorton will be campaign director.
Talent agency CAA hired its first outside lobbyists, saying that the AI debate is a big reason for doing so.
A poll of Californians found that 77% want the government to require AI companies to safety test their systems and provide a plan for mitigating harm.
A global Pew survey found that across 25 countries, more people are concerned than excited about AI.
Ten philanthropies, including MacArthur, Omidyar, and Mozilla, launched a new $500m fund to “build a more human(e) future in which AI is shaped by and for people.”
Omidyar Network launched a Tech Journalism Fund.
INDUSTRY
OpenAI
Sam Altman said the company would loosen restrictions on ChatGPT “now that we have been able to mitigate the serious mental health issues and have new tools”.
The changes will include reintroducing the personality elements from 4o and allowing “erotica for verified adults.”
In response to criticism over the erotica changes, he said OpenAI was “not the elected moral police of the world.”
OpenAI signed a deal with Broadcom to make 10GW worth of AI chips. It reportedly expects to spend 20-30% less on these than it does on Nvidia chips.
Broadcom stock jumped 9% following the announcement.
OpenAI is also reportedly in discussions to make an Arm-designed CPU.
The non-profit created as part of OpenAI’s restructuring will reportedly not have special shareholder rights, instead getting the right to nominate directors to the board of the for-profit subsidiary.
OpenAI is reportedly developing a five-year plan incorporating new revenue lines, funding and debt to meet its more than $1t in spending commitments.
Revenue options include advertising, consumer hardware and supplying computing power via Stargate.
It’s also looking at “creative” debt options and is confident of raising more money from investors.
The company reportedly has $13b in annual recurring revenue — 70% from ChatGPT users — and made an operating loss of $8b in the first half of the year.
Sora-generated videos of dead celebrities are reportedly horrifying their families.
The company announced an eight-person Council on Well-Being and AI to advise on what “healthy interactions” should look like on ChatGPT and Sora.
Anthropic
Anthropic is nearing $7b in annualized revenue, up from $5b in July and reportedly projected to grow to $9b by the end of the year — and as much as $26b next year.
CEO Dario Amodei reportedly held preliminary funding talks with Abu Dhabi-based investment firm MGX.
The company is reportedly planning to further open up its models for national security uses, including cyber operations.
A federal judge ruled that Anthropic must face a jury trial over claims from music publishers, who say it knew users were reproducing copyrighted song lyrics.
It launched Haiku 4.5, a small model that Anthropic says delivers near-frontier performance at one-third of the cost.
The model safety card included information about chain-of-thought “faithfulness” for the first time.
Meta
Meta is investing $1.5b in a data center in El Paso, Texas, that will launch in 2028 and eventually scale to 1GW.
It’s reportedly about to sign a $30b financing package with Blue Owl Capital for its Louisiana site.
It struck a deal with Arm to power its AI ranking and recommendation systems for Facebook and Instagram.
Others
xAI has hired specialists from Nvidia to build “world models” for AI-generated video games.
Google is investing $15b in an Indian AI infrastructure hub.
ASML said it was “well prepared” for the new rare-earth minerals export controls being imposed by China.
The company reported $6.3b in bookings in the third quarter.
TSMC upped its sales projections for the second time this year, with CEO C.C. Wei claiming that “conviction in the AI megatrend is strengthening.”
JPMorgan Chase announced it is investing $10b in “frontier” technologies critical to national security, including AI and quantum computing.
Salesforce said it is investing $15b over five years to build an AI Incubator Hub and expand workforce programs in San Francisco.
It’s also reportedly offered its AI services to ICE, following CEO Marc Benioff’s pro-Trump comments last week.
Those comments led Ron Conway, investor and AI super PAC backer, to cut ties with Benioff.
Cursor maker Anysphere reportedly held talks to raise $1b at a $27b valuation.
Nvidia is working with Australian partners to build data centers worth $2.9b.
Oracle announced plans to deploy 50,000 AMD AI chips starting in 2026.
Amkor began building a $2b chip packaging facility in Arizona.
An investment consortium including BlackRock, Nvidia, MGX and Microsoft launched a $40b takeover of Aligned Data Centers.
Poolside and CoreWeave announced plans to build a 2GW AI data center in West Texas using on-site natural gas for energy.
Humanoid robot startups Rhoda AI and Genesis AI each raised over $100m.
General Intuition, a new AI world-model company, launched with a $134m seed round, led by Vinod Khosla.
Nscale struck a $14b deal with Microsoft to deploy 104,000 GB300s in Texas.
MOVES
Ke Yang, who was leading Apple’s team to improve Siri’s AI capabilities, is reportedly leaving the company for Meta.
Andrew Tulloch, co-founder of Thinking Machines Lab, also left for Meta.
Jared Palmer, until recently VP of AI at Vercel, joined Microsoft as VP of CoreAI and SVP of GitHub.
Alex Lupsasca, an award-winning theoretical physicist known for his work on black holes, joined OpenAI’s new science team.
RESEARCH
An AI model from Google “designed to understand the language of individual cells” managed to generate “a novel hypothesis about cancer cellular behavior,” which Google then “confirmed … with experimental validation in living cells.”
A new paper from Dan Hendrycks, Yoshua Bengio, Gary Marcus and many others offers a “quantifiable framework” for defining AGI.
According to this metric, GPT-5 is 58% of the way to AGI.
A new MIT study forecasts that bigger AI models may soon offer diminishing returns.
A British AI startup, ManticAI, ranked eighth in the Metaculus Cup forecasting competition, beating many professional forecasters.
Anthropic published a bunch of economic policy ideas, drawn from a recent symposium it held in DC, for addressing AI’s potential labor market impacts.
BEST OF THE REST
Stephen Witt published a really excellent explainer of AI risks in the New York Times.
An interview with a former Google employee offered a bunch of insights into how the company uses TPUs, claiming that Google doesn’t use Nvidia chips for any of its first-party services.
MIT Tech Review profiled AI tools like PainChek, which scan patients’ faces to quantify pain levels — helping detect suffering in those who cannot communicate verbally. (Disclosure: the author of this piece works at Open Philanthropy, Transformer’s main funder.)
Andy Masley wrote a long piece debunking AI water use concerns, concluding that “the idea that either the factory or AI is using an inordinate amount of water that merits any kind of boycott or national attention as a unique serious environmental issue is innumerate.”
Meanwhile: people in Europe are worried about AI data center expansion in drought-prone regions.
TikTok creators are using AI-generated racist videos to gain followers and drive traffic to their iPhone accessory shops.
The Information reported on the software engineers who don’t want to use AI coding tools, despite management pressure.
South Korea rolled back its AI textbook program after just four months due to complaints about inaccuracies, privacy risks, and increased workload.
John Searle, who came up with the famous “Chinese room” thought experiment about AI and consciousness, died at 91.
MEME OF THE WEEK
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.