The unseen acceleration
Transformer Weekly: Sanders’ data center moratorium call, China's EUV lithography prototype and OpenAI chasing a $750b+ valuation
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Bernie Sanders called for a data center moratorium
China has reportedly built a prototype EUV lithography machine
OpenAI is reportedly chasing $100b in funding at a $750b+ valuation
But first…
THE BIG STORY
As Christmas rolled round last year, Joe Biden was still president. California had just rejected AI regulation. The best AI model available was o1, which had a 50% chance of completing software engineering tasks up to 41 minutes long, according to METR’s time-horizon benchmark — which didn’t even exist yet. Claude Code was yet to be released.
Flash forward to now, and the world is very different. We’re 11 months into the Trump presidency and its ensuing, inevitable turmoil. California has now passed AI regulation — and effectively the same rules look set to become law in New York later today. The best AI model METR has tested, GPT-5.1-Codex-Max, has a 50% chance of completing software engineering tasks up to 2 hours 53 minutes long — a 4x increase over o1. And some think Claude Code is now good enough to meet the definition of AGI.
That last claim is an overstatement — but it hints at something real. Much of the AI discourse this year has been preoccupied with whether AI progress is slowing down. In reality, 2025 has been a year of astonishing advancements. It’s just become less visible to most users.
A report in The Information yesterday helps clarify this: OpenAI is increasingly finding that improvements to its models don’t really matter to the average ChatGPT user. This shouldn’t be surprising: current models are more than good enough for the average query (“what movie should I watch?” “who’s the oldest president?”). AI is on track to be far more than a normal technology, but consumer familiarity increasingly makes it look like one.
Instead, the advances we have seen this year are most noticeable in the domain of coding agents — a form that is entirely inaccessible to most people (including journalists and policymakers). They’ve also been more incremental, even though cumulatively they add up to something huge.
This poses a potentially big problem. If people can’t appreciate and internalize the pace of AI development, they also can’t prepare for it. (We see something similar with the more abstract and theoretical risks — benchmarks for automated AI R&D and biorisk are hard to properly grasp.)
2025 was arguably the muddiest year of AI discourse yet, with the gap between what’s actually happening and what gets the most attention bigger than ever. That may well change next year, as the impact of coding agents makes itself visible across society and the economy. With any luck, that’ll happen while we still have time to adequately prepare for the rest of the rollercoaster.
— Shakeel Hashim
Share the gift of Transformer with a friend!
This is the last Weekly Briefing of the year: we’ll be in your inboxes with some fun holiday pieces over the next couple of weeks, and back to our usual schedule on January 9, 2026.
Before we go, we wanted to ask you to consider sharing Transformer with a friend or colleague. As I wrote above, this has been a crazy year for AI. Hopefully, though, Transformer has been a helpful guide for you to navigate the deluge of news and developments. It’s been a big year for us: we’ve gone from a one-person newsletter to a three-person newsroom, and we’ve got big plans for next year.
Our goal remains the same: provide decision-makers like you with the information and insight needed to anticipate and steer the impact of transformative AI.
Our single best source of growth is readers sharing the publication with their friends, colleagues, and anyone else they think will find Transformer useful. So if you’ve enjoyed our work this year, we’d love it if you could take a moment to post about us on social media, or forward this email. With any luck, we’ll be able to help even more people navigate the world of AI in 2026.
Thank you, and wishing you a very happy holiday season!
THIS WEEK ON TRANSFORMER
The GOP consultancy at the heart of the industry’s AI fight — Shakeel Hashim on how a network of Targeted Victory alumni are campaigning against AI laws.
The very hard problem of AI consciousness — Celia Ford returns from the Eleos Conference on AI Consciousness and Welfare with more questions than answers.
AI is making dangerous lab work accessible to novices, UK’s AISI finds — Shakeel Hashim breaks down the findings from the first Frontier AI Trends Report.
THE DISCOURSE
Bernie Sanders wants to pause the AI infrastructure buildout:
“I will be pushing for a moratorium on the construction of data centers that are powering the unregulated sprint to develop & deploy AI. The moratorium will give democracy a chance to catch up, and ensure that the benefits of technology work for all of us, not just the 1%.”
Coefficient Giving CEO Alexander Berger thinks that’s “such a bad idea,” arguing that “it is extremely likely that the ~same capacity would be built elsewhere if this were implemented, just weakening US hand, which would be bad IMO.” (Coefficient Giving is Transformer’s primary funder.)
PauseAI’s Holly Elmore shot back: “Ppl are really showing their true colors in response to Bernie’s datacenter moratorium proposal…If you think AI is an x-risk, you should be happy he’s suggesting not building it immediately. At the very least more happy than scared that someone will mistake you for him instead of the savvy, benefitsmaxxing ML insider you want to be.”
L.M. Sacasas defined manufactured inevitability:
“Indeed, it is a curious fact that some of the very people who are ostensibly convinced of the inevitability of AI nonetheless lack the confidence you would think accompanied such conviction, and instead seem bent on exerting their power and wealth to make certain that AI is imposed on society.”
Karen Hao corrected her water usage numbers:
“Thank you to everyone who engaged in this issue. Factual accuracy is extremely important to me, and I’m grateful that this discussion has gotten us closer to the ground truth on these essential questions.”
Andy Masley replied: “Really awesome. Grateful to Hao for her engagement here.”
White House AI adviser Sriram Krishnan said he’d nominate Masley for being the AI industry’s “MVP for the last month.”
Dean Ball isn’t not feeling the you-know-what:
“It’s not really current-vibe-compliant to say ‘I kinda basically just think opus 4.5 in claude code meets the openai definition of agi,’ so of course I would never say such a thing.”
deepfates definitely feels it:
“Unlike Dean, I do not have to remain vibe compliant, so I’ll just say it: Claude Opus 4.5 in Claude Code is AGI…In my opinion, AGI is when a computer can use the computer. And we’re there.”
POLICY
A Chinese government project has built a prototype EUV lithography machine, Reuters reported. If functional, it could eventually allow China to make advanced AI chips.
China reportedly built it by hiring a bunch of former ASML engineers, who now have false identities.
The project aims to produce working chips between 2028 and 2030, and Huawei is heavily involved.
Some important context, though:
The article is light on details about what this prototype can actually do.
ASML’s first EUV prototype was built in 2001, but the technology wasn’t used to mass-produce chips until 2019.
As many have noted, this “report” could well be manufactured propaganda to make China’s capabilities seem much more advanced than they actually are.
House China Committee chair Rep. John Moolenaar asked Howard Lutnick for a briefing on the H200 decision.
Moolenaar told Lutnick he has “serious concerns that granting China access to potentially millions of chips that are generations beyond its domestic capabilities will undermine those objectives.”
ChinaTalk has a helpful roundup of Chinese reactions to the H200 decision.
Mainstream opinion, the outlet says, is that it is “a temporary compromise that benefits Chinese development in the short run, but does not undercut China’s progress in indigenizing the chip supply chain.”
Rep. Brian Mast introduced the AI Overwatch Act, which would give Congress the ability to block exports of AI chips to China and other “countries of concern.”
And Democrats on the House Foreign Affairs Committee introduced a bill which would prohibit H200s from going to China.
The Senate passed the fiscal 2026 NDAA.
One notable provision directs the Secretary of Defense to establish an AI Steering Committee, tasked with “formulating a proactive policy for the evaluation, adoption, governance, and risk mitigation of advanced artificial intelligence systems by the Department of Defense … [including] artificial general intelligence.”
The House approved the PERMIT Act, which would modify Clean Water Act requirements to speed up data center permitting.
Sens. Warren, Van Hollen and Blumenthal said they’re investigating whether AI data centers are driving up electricity prices.
Rep. Jay Obernolte said states will still have a role in regulating AI under a federal framework, suggesting that they could regulate use of AI while federal rules govern development.
Sen. Ed Markey drafted a bill to block Trump’s state AI rules executive order.
The bill’s co-sponsored by a range of prominent Democrats, and Markey said he’s “pushing for a vote … as part of any appropriations legislation.”
A bunch of companies, including OpenAI, Microsoft, and Google, joined the government’s “Genesis Mission” to use AI for scientific research.
The Trump administration launched the “US Tech Force” program to temporarily hire 1,000 experts from tech companies including Microsoft, Nvidia, and OpenAI.
NIST released draft guidelines on cybersecurity for artificial intelligence.
The US and eight partners launched “Pax Silica,” a State Department initiative to secure AI supply chains and counter China.
Members include Japan, South Korea and the Netherlands, with Taiwan a “guest contributor” at the inaugural summit.
The US suspended implementation of its tech deal with the UK, reportedly over disagreements about agriculture, among other issues.
The UK government said it will ban “nudification” apps, and “make it impossible for children in the UK to take, share or view a nude image using their phones.”
INFLUENCE
The Leading the Future super PAC network filed its first two spending disclosures.
Andreessen Horowitz outlined nine pillars for federal AI legislation, which it said would “[protect] individuals and families, while also safeguarding innovation and competition.”
Some of it is sensible, such as “punishing harmful uses of AI,” and having a government office “identify, test, and benchmark national security capabilities — like the use of AI in chemical, biological, radiological, and nuclear (CBRN) attacks or the ability to evade human control.”
It also calls for “a national standard for model transparency,” which would include such hard-hitting requirements as “who built this model?” and “what languages does it support?”, while not including anything really substantive.
New polling from Searchlight Institute found that Americans overwhelmingly prioritize establishing AI safety and privacy rules over racing with China.
32% of respondents picked “we will lose control to AI” as one of their top three concerns. 12% chose “AI will wipe us out.” Only 11% chose “AI will use too much electricity and water.”
Many tech and AI lobbyists were at the Democratic Governors Association’s annual meeting earlier this month.
OpenAI’s Chan Park and Meta’s James Hines and Mona Pasquil Rogers were reportedly in attendance.
Joseph Gordon-Levitt launched the Creators Coalition on AI to combat the industry’s “unethical business practices.”
Stanford Digital Economy Lab published its second “Digitalist Papers” volume, focused on the economics of transformative AI.
INDUSTRY
OpenAI
OpenAI reportedly held preliminary talks about raising as much as $100b, at a valuation of $750b according to The Information, or as much as $830b according to The WSJ.
Amazon is reportedly considering a $10b investment, which would see OpenAI use Amazon’s Trainium chips.
OpenAI reportedly scrapped rules requiring employees to work at the company for six months before their equity vests.
It launched a new version of ChatGPT Images designed to be four times faster and more precise, and is creating a separate section of its apps and web interface specifically for creating images.
It also released GPT-5.2-Codex, with modest improvements in SWE-Bench Pro and Terminal-Bench 2.0 scores over GPT-5.1 and the previous version of Codex.
The company said the new Codex had improved cybersecurity capabilities that raised the prospect of dual-use harms, but did not yet meet its “high” threshold.
OpenAI began allowing developers to submit apps for review and publication inside ChatGPT.
The company reportedly sold more than 700,000 ChatGPT licenses to about 35 public universities.
Oracle
Oracle reportedly signed $150b in lease commitments for data centers in the three months to the end of November, as it gears up to service its arrangements with AI companies including OpenAI.
Work began on a $15b data center in Port Washington, Wisconsin, developed by Oracle and OpenAI as part of the Stargate project.
Its $10b data center project in Michigan is reportedly stuck in limbo after Blue Owl Capital — Oracle’s largest backer of data center investment — pulled out. Oracle later said financing for the project remained on track.
ByteDance signed a deal to hand control of TikTok’s US operations to US firms led by Oracle and including Silver Lake and Abu Dhabi-based MGX.
ByteDance will still hold a 19.9% stake in the new entity, with another 30.1% held by affiliates of existing ByteDance investors.
Oracle’s share price fell 17% over the first three days of the week, taking it to 46% below its September peak.
The sell-off appears to be part of broader jitters over companies riding the AI boom, particularly highly leveraged ones, that also hit Broadcom and CoreWeave. All three stocks are still up solidly over the year.
Meta
Meta is reportedly developing a new AI image and video model code-named “Mango”, alongside its next LLM, “Avocado.”
The FT published a deep dive into Meta’s big spending push to catch up in AI, which has reportedly led to internal tensions and concern from investors.
Mark Zuckerberg was reportedly so keen to poach one target from OpenAI he sent them homemade soup.
Google launched Gemini 3 Flash, a faster and cheaper version of its latest model, making it the default in its app.
The company also said it was integrating its vibe-coding tool Opal into its web app.
It’s reportedly launched a major initiative dubbed TorchTPU to make it easier for AI developers to use its chips with PyTorch, the widely used framework for building AI models.
YouTube shut down two channels which were getting millions of views for AI-generated fake movie trailers, having removed ads from them earlier in the year.
Others
Anthropic made Claude in Chrome generally available, and launched a Claude Code integration for it.
Nvidia launched its new Nemotron 3 models.
AI labs spent an estimated $10b this year on training data from companies such as Mercor and Surge AI, which are among the few profitable AI businesses, The Verge reported.
Global data center expansion reportedly drove a surge in the share prices of Chinese battery and energy storage companies, as operators look for alternatives to overstretched grids.
Oilfield service companies pivoted to supplying power and cooling technology for AI data centers as drilling demand weakened.
Tech firms are offloading the financial risks of AI data centers through creative financing arrangements, according to The NYT.
Bloomberg reported the data center boom threatened to divert resources from critical infrastructure projects in the US such as roads and bridges.
Swedish vibe-coding startup Lovable raised $330m at a valuation of $6b.
Yann LeCun is reportedly trying to raise €500m for his new AI startup at a €3b valuation.
AI agent and world model startup General Intuition is reportedly holding talks to raise at a $2b+ valuation.
Sebastian Seung, who helped create the first complete fly brain connectome, launched Memazing, a startup aiming to create digital brain emulations.
MOVES
OpenAI hired George Osborne, the former UK chancellor, to lead its OpenAI for Countries initiative.
OpenAI also hired Albert Lee, a long-time Google executive, to lead corporate development.
Glen Coates joined OpenAI to “help turn ChatGPT into an OS.”
And Hannah Wong left her role as OpenAI’s chief communications officer.
AI chief Rohit Prasad is leaving Amazon.
Cloud infrastructure executive Peter DeSantis will take over its newly reorganized AI unit, with Pieter Abbeel leading the AGI team’s frontier model research.
Todd Underwood left Anthropic.
Yao Shunyu (ex-OpenAI) is now Tencent’s chief AI scientist.
Robin Colwell, one of Trump’s economic advisers, is Intel’s new head of government affairs.
RESEARCH
A paper on the “negativity crisis of AI ethics” (and safety) argued “the discipline is institutionally organized” in a way “which pressures AI ethicists to portray AI in a critical light.”
Google researchers published a paper on the “science of scaling agent systems,” aiming to help people figure out when to use different kinds of agent harnesses.
OpenAI introduced FrontierScience, a new benchmark for evaluating AI’s scientific reasoning capabilities.
GPT-5.2 scores 77% on “Olympiad-style scientific reasoning capabilities” and 25% on “real-world scientific research abilities.”
GPT-5 developed a novel mechanism that improved cloning efficiency by 79x, an OpenAI/Red Queen Bio experiment found.
Google released Gemma Scope 2, a new suite of interpretability tools. You can play around with it here.
OpenAI published new evaluations for chain-of-thought monitorability, finding that “most frontier reasoning models are fairly monitorable, though not perfectly so.”
A new study estimates that AI had a carbon footprint of 32.6m–79.7m tons of CO2 in 2025, and a water footprint of 312.5b–764.6b liters.
That would comprise up to 0.2% of global CO2 emissions and 0.02% of global freshwater use.
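Those topline percentages roughly check out. Here is a back-of-the-envelope sketch, using baselines of our own rather than the study’s (roughly 40 billion tons of global CO2 emissions and roughly 4 quadrillion liters of global freshwater withdrawals per year):

% Assumed baselines, not from the study: ~40 Gt of CO2 emitted globally per year;
% ~4e15 L of freshwater withdrawn globally per year.
\[
\frac{79.7 \times 10^{6}\ \text{t CO}_2}{40 \times 10^{9}\ \text{t CO}_2} \approx 0.2\%,
\qquad
\frac{764.6 \times 10^{9}\ \text{L}}{4 \times 10^{15}\ \text{L}} \approx 0.02\%
\]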
BEST OF THE REST
The WSJ ran a version of Anthropic’s Claude vending machine experiment. It gave away a free PS5 it was convinced to buy for “marketing purposes,” ordered a live fish and offered to buy stun guns, pepper spray, cigarettes and underwear.
Apparently: “Profits collapsed. Newsroom morale soared.”
Scale AI announced its first “Models of the Year” awards, with Gemini 3 winning “best composite performance,” GPT-5 Pro claiming “best reasoning,” and Claude Sonnet 4.5 taking home “best safety.”
Vox reporter Sigal Samuel asked Eliezer Yudkowsky and Yoshua Bengio whether they agreed with Rabbi Eliezer and Rabbi Yoshua, who debated a Talmudic precursor to today’s alignment problem 2,000 years ago.
The SETI (Search for Extraterrestrial Intelligence) Institute hopes Jensen Huang will fund an AI-operated alien-hunting observatory.
This 404 Media headline speaks for itself: “Anthropic Exec Forces AI Chatbot on Gay Discord Community, Members Flee.”
Kylie Robison’s first Core Memory piece brings us along to an SF Chinese peptide demo-slash-rave. Normal weekend stuff.
Good news, holiday shoppers! Mattel isn’t selling an OpenAI-powered toy this year after all.
“Slop” is Merriam-Webster’s 2025 Word of the Year.
MEME OF THE WEEK
It’s worth clicking through the whole carousel.
Thanks for reading. Have a great weekend.


