We’re all behind The Curve
Transformer Weekly: GAIN AI Act, China’s rare earth crackdown, and AI bubble talk
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
The Senate included the GAIN AI Act in the NDAA.
China is targeting chip imports and rare earth exports.
And Transformer is hiring.
But first…
THE BIG STORY
Stepping out of The Curve — a conference of the most AGI-focused people around — felt like coming back from the Moon.
While the rest of the world worries about whether AI is a bubble about to pop, I spent last weekend with many of the people working at the frontier of AI, discussing what happens if the technology fulfills its potential.
One group is talking about the circularity of AI investment deals. The other is talking about the circularity of recursive self-improvement.
One group is asking whether the market can survive an AI crash. The other is asking whether democracy can survive an AI boom.
It’s not hard to see why the prospect of a crash is capturing all the attention. We are still, after all, living with the impact of the last one, more than 15 years on. Those laser-focused on the catastrophic risks of AI should take the potential economic and political fallout of a crash extremely seriously, not least because it would make preparing for and addressing those risks all the more difficult.
But if the people at The Curve are right, even another Great Recession would pale in comparison to the challenges just over the horizon, crash or not.
At the conference, OpenAI executive Josh Achiam led an all-too-serious discussion about the impending culture war over AI personhood, the risks of an economy dominated by AI agents, and even the urgency of figuring out space governance before Elon tries to colonize other planets. Elsewhere, a debate took place about whether AI advances would lead to regime change in the US.
These conversations sound hypothetical, like science fiction even. (And there were some skeptics at the event.) But the pace of progress so far still suggests that we might end up confronting many of these scenarios all too soon. And as Achiam repeatedly said, as a society we’re not spending nearly enough time trying to solve them.
Indeed, some of those taking these scenarios the most seriously are the companies building the technology that will lead to them, and their response has been to ramp up political spending in a way that risks fundamentally distorting the information and governance environment — something that concerned many at The Curve.
The conference closed with a talk (moderated by yours truly) from Anthropic co-founder Jack Clark, titled “Technological Optimism and Appropriate Fear.” Clark’s argument was simple: he’s optimistic about how far the technology can advance — which is precisely why he’s so scared.
But the distance between The Curve and the rest of the world watching and waiting for a crash remains vast. A year and a half ago, Leopold Aschenbrenner warned that “there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness.” Those people are having an even harder time being heard over the economic alarm bells now ringing back on Earth.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
What the GAIN AI Act could mean for chip exports — Celia Ford explains how the US government could require American chipmakers to prioritize US buyers.
An intense battle over the RAISE Act is entering its final stretch — Marty Swant and Shakeel Hashim on New York’s plan to regulate AI, and the industry campaign to kill it.
The RAISE Act can stop the AI industry’s race to the bottom — Assembly Member Alex Bores on why the act he sponsored will help counter incentives to release dangerous models.
NOTICES
We’re looking for a social media contractor to amplify Transformer’s journalism across social platforms. If you read this newsletter, you are already a top candidate. More details and application form here.
Tarbell, Transformer’s publisher, is hiring for two roles on its AI reporting fellowship team — a Program Manager / Director and (Senior) Program Associate. More details at those links.
Tarbell and FAR.AI are hosting an exclusive event for journalists in Washington, DC next month featuring talks from top AI policy and industry insiders. We’re accepting a limited number of applications — apply here.
THE DISCOURSE
Rep. Nathaniel Moran is worried about automated AI R&D:
“If these models can learn from their mistakes, adapt autonomously, and eventually design and train their own successors, then who controls that process matters. A lot.”
For a deep-dive into why this is so important, see our recent explainer on automated AI R&D.
AI bubble chatter continues:
Sam Altman: “There are many parts of AI that I think are kind of bubbly right now.”
Jeff Bezos: “This is kind of an industrial bubble.”
Harvard economist Jason Furman calculated that early 2025 GDP growth would have been next to zero without AI-related investments, Fortune reported.
Shunyu Yao left Anthropic for DeepMind, publishing a spicy blog on his way out:
“I decided to leave due to two main reasons … ~40% of the reason: I strongly disagree with the anti-china statements Anthropic has made. Especially from the recent public announcement, where China has been called “adversarial nation”. Although to be clear, I believe most of the people at anthropic will disagree with such a statement, yet, I don’t think there is a way for me to stay … The remaining 60% is more complicated. Most of them contains internal anthropic informations thus I can’t tell.”
Nathan Benaich and co. published the 2025 State of AI Report:
“2025 was the year reasoning got real … The business of AI finally caught up with the hype … Power and land are now as important as GPUs … The existential risk debate has cooled, giving way to concrete questions about reliability, cyber resilience, and the long-term governance of increasingly autonomous systems.”
John Mac Ghlionn said the question is not whether AI will topple governments, but how many will fall:
“Starving peasants didn’t spark the French Revolution. It exploded when ambitious lawyers, merchants and intellectuals found their path to power blocked by aristocratic privilege.”
Mechanize explained why it’s trying to automate all work:
“Full automation is inevitable, whether we choose to participate or not. The only real choice is whether to hasten the inevitable, or to sit it out.”
People immediately, and correctly, dunked on this line of thinking: “Those who do bad things want to believe their actions don’t matter because the outcome is inevitable,” Rudolf Laine said.
POLICY
The Senate passed the National Defense Authorization Act, including the GAIN AI Act, which would require American chipmakers to prioritize US customers.
ICYMI: Read our explainer on the GAIN Act, and why Nvidia hates it.
The US reportedly approved billions of dollars’ worth of Nvidia chip exports to American customers in the UAE.
A similar deal for Saudi Arabia is reportedly imminent.
The Commerce Department is reportedly investigating Megaspeed, a Singapore-based data center company, for helping smuggle Nvidia chips to China.
The NYT has an excellent investigation into the company.
China announced sweeping export controls on rare earth materials, which will have big impacts on the chip supply chain (among other things).
Dean Ball thinks this is a “very big deal” which “could mean ‘lights out’ for the US AI boom.”
Chinese customs officials are also reportedly carrying out “stringent checks” on chip imports in an effort to stop Chinese companies buying H20s.
Both moves come ahead of US-China trade negotiations at APEC later this month.
China blacklisted research firm TechInsights after it exposed Huawei’s reliance on foreign hardware.
A new Senate Democrats report warned that AI-related automation could destroy 100m jobs in the next 10 years.
It was deservedly mocked for its methodology, which was basically “ask ChatGPT to guess.”
The EU outlined strategies to boost AI adoption and research in the bloc, including a €600m investment in AI-for-science research.
The European Commission once again delayed its decision on whether to delay the AI Act’s high-risk rules.
INFLUENCE
Sam Altman still thinks superhuman machine intelligence is the biggest threat to humanity’s existence.
Jensen Huang came under fire for his recent comments that being a China hawk is a “badge of shame” and “not patriotic.”
Politico has a great piece on how SB 53 made its way into law.
One notable detail: Chris Lehane’s public letter to Gavin Newsom was reportedly “not well received by some in the Legislature.”
While playing Fortnite last week (don’t ask), Newsom said the new bill “does a little, not enough.” (Wonder why!)
Encode’s Nathan Calvin wrote about how OpenAI used its legal fight with Elon Musk to send a sheriff’s deputy to his house to demand his private texts and emails concerning SB 53.
Consumer advocates are pressing Newsom to sign AB 489, a bill that would prevent AI chatbots from posing as licensed health providers.
The California Federation of Labor Unions wrote to OpenAI saying it does “not want a handout from [OpenAI’s] foundation,” urging the company to “stand down from advocating against AI regulation” and “to divest from any PACs funded to stop AI regulation.”
Sam Altman reportedly plans to meet MGX and Mubadala on a Middle East visit, as part of a broader effort to cozy up to tech leaders in Asia.
OpenAI griped to EU antitrust enforcers about “difficulties” it faces “in competing with entrenched companies” like Google, Microsoft, and Apple.
OpenAI said its GPT-5 models are its least politically biased yet.
AGI politics are brewing, and people have thoughts.
Bloomberg’s Joe Weisenthal predicted that, “when and if this becomes a real national topic of debate, Big AI will find itself fairly friendless in DC,” facing opposition from both parties.
Anton Leicht argued that AI safety advocates should make a preemption deal before their concerns get drowned out by other issues.
Jason Hausenloy at the Center for AI Safety argued that people need not be so afraid of the AI super PACs, noting that lobbying has sharply diminishing returns.
INDUSTRY
OpenAI
OpenAI and Nvidia have formed a web of “circular” deals, taking in companies such as Oracle and CoreWeave, worth potentially $1 trillion, Bloomberg reported. (This story is where that diagram of gray circles you’ve likely seen on X came from.)
Adding to that web, OpenAI struck a deal on Monday to buy tens of billions of dollars’ worth of chips from AMD, and will also get a 10% stake in the Nvidia rival if various milestones are met. AMD’s stock rose 24% after the partnership was announced.
Around half of the value of the increase will effectively go to OpenAI, writes Bloomberg’s Matt Levine (see the rough numbers below): “Schematically, OpenAI could buy AMD stock to predictably profit from the stock-price bump it created … it might look like insider trading … but buying the stock from AMD is fine. So that’s what they did.”
Nvidia CEO Jensen Huang told CNBC: “I’m surprised that [AMD] would give away 10% of the company before they even built it … it’s clever, I guess.”
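For a rough sense of Levine’s “around half,” here’s a back-of-the-envelope sketch. All figures are approximations from public reporting, not deal documents: warrants over roughly 160m AMD shares at a $0.01 strike, about 1.62b shares outstanding, and a share price of roughly $164 before the announcement and $203 after.

```python
# Back-of-the-envelope on the OpenAI-AMD warrant deal (all figures approximate).
shares_outstanding = 1.62e9   # AMD shares before the deal (~1.62b)
warrant_shares = 160e6        # warrants reportedly granted to OpenAI (~10%)
strike = 0.01                 # reported strike price per share, in USD
price_before = 164.0          # approximate close before the announcement
price_after = 203.0           # approximate close after the ~24% jump

market_cap_gain = shares_outstanding * (price_after - price_before)
warrant_value = warrant_shares * (price_after - strike)

print(f"Market-cap gain: ~${market_cap_gain / 1e9:.0f}b")  # ~$63b
print(f"Warrant value:   ~${warrant_value / 1e9:.0f}b")     # ~$32b, about half
```

On those numbers, the warrants capture roughly half of the day’s market-cap gain, which is the effect Levine is describing.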
Monday was also DevDay for OpenAI:
Sam Altman announced that ChatGPT reached 800m weekly active users.
The company launched yet another attempt at an app store.
Former Apple designer Jony Ive and Altman vaguely discussed the company’s hardware efforts.
The FT learned that OpenAI has struggled to solve technical issues with the product — reportedly, a screenless, palm-sized device that responds to verbal requests.
Sora hit 1m downloads less than five days after its launch, becoming the most downloaded iPhone app.
OpenAI announced two upcoming changes to Sora: giving rightsholders more control over character generation, and a revenue model that shares income with rightsholders.
Codex is reportedly pulling ahead of Anthropic’s Claude Code in certain coding capabilities, although it still lags in usage.
OpenAI is expanding its affordable ChatGPT Go plan to 16 new countries across Asia.
OpenAI acquired Roi, an AI-powered personal finance app.
xAI
In more AI ouroboros news, xAI reportedly doubled its funding round to $20b.
That includes a $2b investment from Nvidia, which will lease chips to Elon Musk’s company for its Colossus 2 data center.
Colossus continues to cause controversy over power consumption and environmental impact in Memphis, the WSJ reports.
Anthropic
Anthropic announced plans to open an office in Bengaluru, India in early 2026.
It’s also reportedly trying to set up a deal with Reliance Industries, India’s most valuable company, owned by the controversial Ambani family.
Thousands reportedly attended a pop-up in NYC for Anthropic’s “Keep Thinking” campaign designed to position it as a counterweight to “AI slop.”
Claude models are being integrated into IBM’s software targeting enterprise customers.
Anthropic signed its biggest ever enterprise deal, rolling out Claude to 470,000 Deloitte employees.
Google
Google DeepMind released Gemini 2.5 Computer Use, which lets AI agents interact with web browsers by clicking, typing and scrolling.
Google Cloud launched Gemini Enterprise, a $30-a-month AI platform for workplace tasks which competes with Microsoft’s Copilot and OpenAI’s ChatGPT Enterprise.
Google launched a new AI bug bounty program offering up to $30k for finding security vulnerabilities in its AI products.
Google DeepMind showed off CodeMender, an AI agent that autonomously detects and patches software vulnerabilities, though it’s still in the “research phase”.
A federal judge heard arguments on what remedies are appropriate following Google’s loss in the US government’s ad tech monopoly case, with the DOJ proposing a breakup.
Others
OpenAI and Anthropic are reportedly struggling to get insurance matching the potential claims they face: OpenAI supposedly has only $300m in coverage (and some think much less).
Relatedly: Read our recent op-ed on how insurance can help make AI more secure.
VC firms poured $192.7b into AI startups in 2025, representing over half of all VC investment globally. That’s despite overall VC investment being down in the year.
AI companies accounted for 46% of global VC funding in the third quarter, with Anthropic alone accounting for 29%.
SoftBank agreed to buy the robotics business of Swiss engineering group ABB for $5.4b, its latest investment in physical AI.
Qualcomm reportedly acquired Italian electronics company Arduino to boost its position in the robotics industry.
Figure launched its third-generation AI-driven humanoid robot designed for home use and commercial applications.
Tesla reportedly walked back its overambitious 2025 production targets for Optimus robots due to technical challenges, particularly with the hands.
Reflection AI raised $2b to build an American open-source parallel to DeepSeek.
Prime Intellect, a decentralized AI startup, has another idea for competing with DeepSeek: let anyone run reinforcement learning.
MOVES
Rishi Sunak, former UK Prime Minister, joined Microsoft and Anthropic as a senior adviser.
Anthony Armstrong was appointed xAI’s CFO.
Robert Hoffman joined AMD as senior vice president, head of global government relations & regulatory affairs.
Maryam Cope joined Arm as its head of government affairs and innovation policy.
NetChoice promoted Amy Bos to vice president of government affairs and Zach Lilly to director of government affairs.
Jeff Alstott is stepping down as director of RAND’s Center for Technology and Security Program.
Former VMware CEO Raghu Raghuram joined a16z as a managing partner and GP for AI infrastructure investments.
Justine Moore has moved to lead a16z’s investments in AI creative tools and companionship.
Tom Westgarth left the Tony Blair Institute.
Eric Zhang (ex-Modal) joined Thinking Machines.
Tommy Collison (ex-Retool, and brother of Stripe’s Patrick and John) joined Anysphere, the team behind Cursor.
RESEARCH
GPT-5 Pro achieved a new record for frontier LLMs on ARC-AGI-2 — designed to test for progress towards AGI — scoring 18.3%.
It still trails the never-released o3-preview on ARC-AGI-1, though.
Epoch AI estimated that OpenAI has the compute capacity to run approximately 7m “digital workers” for tasks that GPT-5 can perform.
It takes as few as 250 malicious documents to poison an LLM of any size, according to research by Anthropic, the UK AI Security Institute, and the Alan Turing Institute.
“Our results challenge the common assumption that attackers need to control a percentage of training data; instead, they may just need a small, fixed amount… We’re sharing these findings to show that data-poisoning attacks might be more practical than believed.”
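To see why a fixed count is so striking, here’s an illustrative scale check. The setup is our assumption for illustration, not the paper’s exact configuration: roughly Chinchilla-style corpora of ~20 training tokens per parameter, and a few hundred tokens per poisoned document.

```python
# Illustrative only: how big a slice of the training data is 250 documents?
# Assumes ~20 tokens per parameter (Chinchilla-style) and ~500 tokens per
# poisoned document; the paper's exact corpus sizes may differ.
POISON_DOCS = 250
TOKENS_PER_DOC = 500

for params in [600e6, 2e9, 7e9, 13e9]:      # model sizes like those studied
    corpus_tokens = 20 * params              # assumed training corpus size
    fraction = POISON_DOCS * TOKENS_PER_DOC / corpus_tokens
    print(f"{params / 1e9:>5.1f}b params: poison = {fraction:.6%} of tokens")
```

The absolute number of poisoned documents stays fixed while the corpus grows, so the poisoned fraction shrinks with model scale — which is why a percentage-based threat model understates the risk.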
METR’s latest evaluation gave Claude Sonnet 4.5 a time horizon on software engineering tasks of 1hr 53min, below the highest estimate for any current model of 2hr 15min.
That’s an 8.8% improvement over Opus 4.1, which isn’t statistically significant, but a statistically significant 66% increase over Sonnet 4.
The result is squarely on METR’s exponential trendline.
SemiAnalysis launched InferenceMAX, a new benchmark to compare how well different chips run AI models.
A Samsung researcher developed a tiny open-source model which outperforms models 10,000x larger on specific reasoning tasks.
Data centers built for AI may demand 10 times more power by 2030, according to DNV.
The report says AI will account for 3% of global electricity demand by 2040 and 11% by 2060.
BEST OF THE REST
Young people in China are increasingly turning to AI chatbots for mental health support, Yi-Ling Liu reported for Rest of World.
A new survey found that nearly one in five high schoolers report they or someone they know has had a romantic relationship with AI. Over 40% used AI for friendship.
ChinaTalk explored a fascinating difference between US and Chinese AI companion apps — while most American products are designed for the heterosexual male gaze, female users and male characters dominate Chinese apps.
Researchers are working on diamond-based cooling systems for AI chips.
Chinese ghost city Ordos has transformed into a real-world lab for self-driving vehicles.
At his COLM keynote speech, Anthropic’s Nicholas Carlini asked if, given the harms, LLMs are worth it.
The Oatmeal made a comic about AI art, and how “nobody seems to want it…and yet it thrives, like an Arby’s built inside a protected forest.”
MEME OF THE WEEK
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.