AI CEOs want to slow down. The world’s too busy to help
Transformer Weekly: Obernolte’s big AI bill, Altman’s fundraising efforts, and Mast v Sacks
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Rep. Jay Obernolte is getting ready to introduce his Great American AI Act.
Sam Altman has been meeting with Middle Eastern investors for a giant OpenAI fundraising round.
Rep. Brian Mast got in a fight with David Sacks over his AI Overwatch Act.
But first…
THE BIG STORY
At Davos this week, the AI leaders racing to build what they believe will be the most powerful technology in history said it would probably be better for everyone if they slowed down.
“If we can, maybe it would be good to have a slightly slower pace than we’re currently predicting,” Google DeepMind boss Demis Hassabis said, “so that we can get this right societally.” Anthropic’s Dario Amodei agreed: “I would prefer that. I think that would be better for the world.” Even JPMorgan Chase CEO Jamie Dimon — hardly an AI doomer — suggested that the rollout of AI might need to be slowed down to “save society” from huge civil unrest.
In another panel, Hassabis went even further. Asked whether he would advocate for a pause in AI development if every company and country joined in, his response was simple: “I think so.”
But while Hassabis and Amodei admitted that a slowdown would be preferable, they also insisted that they can’t do it alone. Such a pause would require “international collaboration,” Hassabis said, with Amodei noting that “it’s very hard to have an enforceable agreement where they slow down and we slow down.”
The message was one of helplessness. The executives freely admit to being trapped in a prisoner’s dilemma: each would prefer to slow down, but none will do so unilaterally. They are crying out for someone — anyone — to step in and help them.
And yet the other big story of this year’s Davos meeting was the collapse of the international order — the only thing that might have given the CEOs the out they crave.
Canadian prime minister Mark Carney put it plainly. “We are in the midst of a rupture,” he said. “The multilateral institutions on which the middle powers have relied … the very architecture of collective problem solving, are under threat.”
Countries must “stop invoking [the] rules-based international order as though it still functions as advertised,” he warned. Instead, they must “call it what it is — a system of intensifying great power rivalry, where the most powerful pursue their interests.”
Given President Trump’s actions and rhetoric, Carney is surely right. Any agreement based on trust in the US to uphold its side of the bargain looks out of reach. But the timing is awful. If the AI CEOs are to be believed, they are on the cusp of building immensely powerful — and destabilizing — technology. Only governments working together can rein them in. Yet the prospect of international coordination feels increasingly like a pipe dream.
The gap between what’s needed and what’s achievable has rarely been so visible. The AI CEOs are crying out for help. If only someone were in a position to answer.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
Against the METR graph — Nathan Witkin argues that the bellwether of AI capability is fatally flawed.
Teaching AI to learn — Celia Ford unpacks why everyone is talking about continual learning.
THE DISCOURSE
Elon Musk shared an excerpt from the diary of OpenAI’s Greg Brockman, tweeting:
“They openly discuss their conspiracy to commit fraud and steal the charity.”
From Greg’s diary:
“This is the only chance we have to get out from Elon. Is he the ‘glorious leader’ that I would pick? We truly have a chance to make this happen. Financially, what will take me to $1B?”
Brockman responded:
“I have great respect for Elon, but the way he cherry-picked from my personal journal is beyond dishonest. Elon and we had agreed a for-profit was the next step for OpenAI’s mission. The context shows these snippets were actually about whether to accept Elon’s draconian terms.”
OpenAI defended Brockman:
“Elon is trying everything he can to slow down OpenAI for his personal benefit.”
Sam Altman threw Elon under the bus:
“Elon said he wanted to accumulate $80b for a self-sustaining city on Mars…and when we discussed succession he surprised us by talking about his children controlling AGI.”
Chris Painter subtweeted people grinding to “escape the permanent underclass”:
“It’s important to emphasize how selfish orienting one’s life around that goal is … a person who sincerely has this worldview would be better off working to democratize access to AI’s benefits.”
ICE supplier-in-chief Alex Karp thinks AI could make mass immigration obsolete:
“There will be more than enough jobs for the citizens of your nation, especially those with vocational training…these trends really do make it hard to imagine why we should have large-scale immigration.”
POLICY
Rep. Jay Obernolte said his Great American AI Act could be “weeks” away from introduction.
He said he’s currently in talks with the administration over technical details.
The bill will supposedly preempt state laws with a federal framework that “gives people some comfort that Congress is capable of acting on the issue of AI regulation,” Obernolte told Punchbowl.
The House Foreign Affairs Committee advanced the AI Overwatch Act.
Rep. Brian Mast’s bill would give Congress oversight over chip export controls. This week, Rep. Greg Meeks became the first Democrat to support the bill.
The HFAC markup was preceded by a fight between Mast and David Sacks, while a dozen right-wing influencers posted suspiciously similar tweets attacking the act.
Nvidia suppliers have reportedly paused production on H200 components while everyone waits to see if China will allow imports of the chips.
California AG Rob Bonta sent xAI a cease and desist, stating that Grok’s creation of nonconsensual sexual deepfakes is illegal.
PJM Interconnection outlined a plan to require data centers to generate their own power or accept restrictions on grid usage.
Katie Porter and Sam Liccardo, two of California’s top Democrats, opposed the state’s proposed billionaire tax.
California state Sen. Steve Padilla introduced SB 903, which would ban AI tools from providing therapy.
Dean Ball called it “raw rent-seeking by occupationally licensed therapists.”
The FTC is investigating Silicon Valley acqui-hires to make sure big companies aren’t skirting the merger review process.
South Korea introduced the “AI Basic Act,” which it bills as the world’s first comprehensive set of AI laws.
The laws include rules requiring human oversight in “high-impact” areas, and AI labeling requirements.
Startups in the country aren’t happy about it.
The UK reportedly wants AI training data transparency requirements, which it hopes will push companies to strike licensing deals and let it avoid broader copyright reforms.
INFLUENCE
Elon Musk reportedly donated $10m to pro-Trump Senate candidate Nate Morris, a sign he’s willing to spend big in the midterms.
Pete Hegseth was reportedly specifically criticizing Anthropic when he complained about AI models that “won’t allow you to fight wars.”
Dario Amodei still isn’t buttering up the administration, though: at Davos he said exporting advanced AI chips to China is like “selling nuclear weapons to North Korea.”
Big tech and AI companies spent $109m on lobbying last year.
Meta spent $26.29m, Amazon $17.78m, and Google $13.1m.
Nvidia spent $4.9m, a16z $3.53m, and OpenAI $3m.
OpenAI’s Chris Lehane published advice for candidates in the 2026 midterms.
He proposed democratizing AI access, better aligning corporations with the public interest, and investing in infrastructure.
“Voters don’t believe the accelerationists and don’t want to believe the doomers,” Lehane wrote.
The White House Council of Economic Advisers released a report on “Artificial Intelligence and the Great Divergence.”
It argues that AI has parallels to the Industrial Revolution and could lead to an economic divergence between countries that embrace it — and those that don’t.
Paris Hilton appeared alongside Rep. Alexandria Ocasio-Cortez to endorse the DEFIANCE Act.
Around 800 celebrities, including Cate Blanchett and Scarlett Johansson, launched a campaign accusing AI companies of “theft at a grand scale.”
The National Artificial Intelligence Association is hosting its first AI Policy Summit on February 10.
UK MPs warned that the nation’s “wait-and-see” approach to AI risks in the financial sector exposes consumers to “serious harm.”
Over 200 Silicon Valley tech workers signed a petition asking CEOs to pressure Trump to pull ICE from Minneapolis and other affected cities — a small number, but “one of the first mass demonstrations of opposition in Trump’s second term” from the industry, according to the Washington Post.
INDUSTRY
OpenAI
Sam Altman has reportedly been meeting with Middle Eastern investors as part of a $50b+ fundraising round.
The company’s reportedly hoping to reach a valuation of $750-830b.
Altman said that OpenAI added more than $1b in ARR in the past month from its API business.
OpenAI officially introduced advertising to ChatGPT.
Ads will appear in ChatGPT’s free and Go tiers in the coming weeks. The company says ads will be clearly labeled and won’t influence chat responses.
CFO Sarah Friar announced that OpenAI’s focus for 2026 will be “practical adoption,” and that revenue — in part from ads — will “fund the next leap.”
The Verge’s Tom Warren pointed out that, in October 2024, Sam Altman said, “I kind of think of ads as a last resort for us as a business model.”
Pseudonymous user signüll tweeted: “openai is essentially building ‘facebook 2.0’ & all of the old facebook peeps are doing it.”
OpenAI is rolling out age prediction to safeguard ChatGPT users under 18.
Chris Lehane said OpenAI is “on track” to share its first AI device later this year.
OpenAI will pay to prevent Stargate energy costs from driving up local utility bills.
The Gates Foundation partnered with OpenAI to accelerate AI adoption in African health clinics.
ServiceNow partnered with OpenAI to embed its models into IT software.
Anthropic
Anthropic’s annualized revenue has reportedly topped $9b, more than doubling since last summer.
It reportedly lowered its gross profit margin projections from 50% to 40%, though, thanks to higher-than-expected inference costs.
The company is reportedly set to raise over $10b at a $350b valuation.
Sequoia, an OpenAI investor, is reportedly planning to invest.
Claude’s constitution — the document shaping how it behaves — got an overhaul.
It now says that Claude might have “some kind of consciousness or moral status,” and considers its “psychological security, sense of self, and well-being.”
Recent job listings suggest Anthropic is scouting experts in speech and audio, biology, cybersecurity, and vision.
Meta
Meta Superintelligence Labs has delivered its first base models internally, CTO Andrew Bosworth said.
The new models are “looking really good,” Bosworth said, but there’s lots of work still to do.
He acknowledged that Llama 4 was a “disappointment” which “wasn’t amazing at anything.”
Google
YouTube creators will soon be able to make Shorts starring deepfakes of themselves.
Google DeepMind reportedly hired the CEO and several engineers of Hume AI, an AI voice company.
Waymo launched in Miami.
Others
SpaceX is reportedly preparing to go public by July to raise enough cash to build data centers in space.
Humans&, the new “human-centric frontier AI lab” founded by former Anthropic, Google, and xAI researchers, raised $480m at a $4.48b valuation.
Fei-Fei Li’s World Labs is reportedly in talks to raise $500m at a $5b valuation.
Apple is reportedly developing an AI wearable that could be released as soon as next year.
Microsoft is encouraging thousands of employees, including non-developers, to use Claude Code.
Micron intends to buy a fabrication site in Taiwan to expand DRAM production in light of the ongoing memory chip shortage.
UAE-based AI company G42 expects to get Nvidia, AMD and Cerebras chips “in the next couple of months.”
Apollo Research is becoming a public benefit corporation, and will start selling “AGI safety products.”
MOVES
OpenAI hired Ann O’Leary, a former chief of staff for Gavin Newsom and policy advisor to Hillary Clinton, as VP of global policy.
xAI co-founder Greg Yang left following a Lyme disease diagnosis.
Engineer Sulaiman Khan Ghori announced his departure from xAI, days after revealing on a podcast that the company used temporary carnival permits to fast-track its Memphis data center and relies heavily on AI agents as “virtual employees.”
Anthropic appointed Tino Cuéllar, former California Supreme Court Justice and current Carnegie Endowment president, to its Long-Term Benefit Trust.
Cuéllar, one of the co-leads of Gavin Newsom’s AI working group, also announced that he’s stepping down from Carnegie in July.
Evidence Action’s Kanika Bahl and the Centre for Effective Altruism’s Zach Robinson left Anthropic’s trust.
Anthropic also hired former Microsoft India MD Irina Ghose to lead its Bengaluru expansion.
The Wall Street Journal retraced last week’s Thinking Machines drama.
“Murati’s issues with [Barrett] Zoph started over the summer when she began to suspect that he was having a relationship with a colleague whom he had lobbied to bring over from OpenAI,” the WSJ reported.
Zoph, Luke Metz, and Sam Schoenholz reportedly asked for Zoph to be put in charge of all technical decision-making. “Murati responded that Zoph was already CTO and asked why he hadn’t been doing his job for months.”
The NYT backed up much of the reporting, while adding that Murati has become close to Dario Amodei in recent months.
Meanwhile, Yifei Zhou, formerly of xAI, joined Thinking Machines.
Daniel Filan joined METR to work on assessing loss-of-control risk.
Meta Asia-Pacific policy chief Simon Milner announced his retirement.
RESEARCH
Anthropic fellows published research on the “Assistant Axis,” a pattern of activity that makes models act like “assistants” when responding to users.
Jailbreaks can lead to “persona drift,” steering models to exhibit harmful behaviors like encouraging self-harm.
Pseudonymous researcher antra criticized the paper for showing “no respect or humility in front of systems we barely comprehend.”
Researchers found that reasoning models build “societies of thought” — simulated idiosyncratic agents that debate with each other.
Geodesic Research observed that exposing LLMs to examples of badly-behaved AI during pretraining makes them more misaligned, while adding positive examples improves behavior.
Peking University researchers found that using reinforcement learning to teach a model both knowledge and reasoning skills helps it adapt to new knowledge more efficiently.
Google DeepMind shared how it built probes that can successfully detect harmful activation patterns in real-world Gemini conversations.
Google economists argued that recent entry-level job declines in AI-exposed occupations stem from monetary policy tightening, not AI displacement.
ARIA announced that it’s investing in research “to see if autonomous systems can reason, plan, and run experiments in the real world.”
BEST OF THE REST
Wired’s latest package is all about China, including:
Tarbell journalist-in-residence Yi-Ling Liu’s look at the Cyberspace Administration of China’s public algorithm registry of generative AI tools, which inadvertently provides a detailed map of the country’s AI ecosystem.
Johanna Costigan’s exploration of how Gen Z women in China are driving an AI boyfriend boom, including hiring cosplayers to embody digital companions on real-world dates.
Core Memory interviewed former OpenAI research VP Jerry Tworek, who left the company citing its shift toward “more conservative ways.”
The AI boom has created a shortage of electricians and plumbers needed to build data centers, with the Bureau of Labor Statistics projecting a shortfall of roughly 81,000 electricians annually through 2034.
“AI artists” such as Xania Monet and Breaking Rust racked up hundreds of millions of streams in 2025.
Zvi Mowshowitz documented ChatGPT responses to users asking it to depict how they treat it, from friendly images to disturbing portrayals suggesting abuse or revenge.
Claude Code power users may be consuming as much electricity a day as running a dishwasher, according to an estimate by developer Simon P. Couch. He calculates that his median Claude Code sessions consume 41 Wh — 138x more than a typical chatbot query — with daily usage reaching 1,300 Wh.
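For scale, here’s a rough back-of-envelope check of those figures (a sketch only: the ~1.2 kWh dishwasher-cycle figure is our assumption, not part of Couch’s estimate):

```python
# Rough sanity check of Couch's figures (all values approximate).
session_wh = 41                      # Couch's median Claude Code session
chatbot_query_wh = session_wh / 138  # implied typical chatbot query (~0.3 Wh)
daily_wh = 1_300                     # Couch's heavy-usage daily figure
dishwasher_cycle_wh = 1_200          # assumed typical cycle (~1.2 kWh); not from the source

print(f"Implied chatbot query: {chatbot_query_wh:.2f} Wh")
print(f"Daily usage vs one dishwasher cycle: {daily_wh / dishwasher_cycle_wh:.1f}x")
```

On those assumptions the comparison holds up: a heavy day of Claude Code use lands in roughly the same ballpark as a single dishwasher cycle.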
Cursor got a fleet of coding agents to build a functional web browser from scratch in just under a week.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.