We’re in the dumbest timeline
Transformer Weekly: Trump signs EO, Hochul guts the RAISE Act, and GPT-5.2 launches
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
President Trump signed an executive order on state AI regulation.
NY Gov. Kathy Hochul is proposing to replace the RAISE Act with verbatim language from SB 53.
OpenAI released GPT-5.2, and signed a deal with Disney.
But first…
THE BIG STORY
Processing the deluge of news from the past week, it’s hard not to be left thinking that when it comes to AI, we’re living in the dumbest possible timeline.
Almost whichever way you look at it — whether gung-ho e/acc or ultra-cautious AI safety advocate — President Trump seems determined to drag us into the worst of all worlds.
Export controls — or the lack thereof — are one clear example. By allowing China to buy Nvidia’s H200s, the world’s best dealmaker has given away one of America’s biggest advantages in the AI race.
The decision, which appears to have been motivated by clearly incorrect data on China’s domestic chipmaking capacity, has been slammed by almost everyone — including one of the founders of e/acc. But because it makes Trump feel good to get a 25% cut of Nvidia’s revenue, into the dumbest timeline we walk.
Last night’s executive order on preemption is another case. While less inflammatory than the leaked draft, the new EO is still intended to scare states into not regulating AI.
One might think that’s good for e/accs and the AI industry — but that’s unlikely to be the case. It’s legally messy and possibly unconstitutional (an executive order cannot preempt state law on its own, and conditioning BEAD funding on states’ policy choices invites legal challenges), and it will probably just be ignored by blue states like California and New York while court battles rage on. The net effect is less regulatory certainty, not more. It’s no surprise, then, that the AI industry hasn’t actually advocated for this EO: they don’t want it. But since when has David Sacks listened to the experts?
This level of incompetence and mismanagement would be bleak in any scenario. Taking place against the backdrop of rapid AI progress, it becomes hard not to laugh. The past month has seen three new frontier model releases, each with a valid claim to be better than the last. OpenAI is warning of imminent cybersecurity risks. And China continues to advance, with its only bottleneck — compute — now removed.
Anyone taking the pace and challenge of AI seriously has to live with an administration that seems determined to pick the dumbest possible path at every turn. Rather than stepping up, ensuring the US actually leads, and tackling the risks, the Trump administration is instead abdicating all responsibility and handing wins to China.
As crunch time for AI approaches, those at the wheel seem perfectly happy driving a clown car. It’s hard to see how things could get more ludicrous. But I’m sure Trump will show us next week.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
New York’s governor is trying to turn the RAISE Act into an SB 53 copycat — Issie Lapowsky with the exclusive on how Kathy Hochul’s rewrite of RAISE strips out key measures on potentially harmful models and higher penalties.
Why AI reading science fiction could be a problem — Lynette Bye explores why some researchers are worried about the examples we’re feeding AI.
When high scores don’t mean high intelligence: how to build better benchmarks — Oxford Internet Institute researchers on how social sciences can help us assess AI better.
THE DISCOURSE
Pope Leo XIV continues to have wise AI takes:
“Recognizing and safeguarding what characterizes the human person and guarantees his or her balanced growth is essential for establishing an adequate framework for managing the consequences of artificial intelligence.”
Tim Dettmers said “AGI will not happen”:
“AGI, as commonly conceived, will not happen because it ignores the physical constraints of computation, the exponential costs of linear progress, and the fundamental limits we are already encountering. Superintelligence is a fantasy because it assumes that intelligence can recursively self-improve without bound, ignoring the physical and economic realities that constrain all systems.”
His piece received strong pushback from Dean Ball, Yo Shavit, and Boaz Barak.
Meta didn’t want Yann LeCun speaking publicly, according to Bloomberg:
“Prior to his departure, some employees had been encouraged to keep LeCun, who was a big proponent of open-source technology, out of the spotlight, including at public speaking events, the people said. Meta no longer saw him as emblematic of the company’s AI strategy, and couldn’t trust that he’d stay on message, they added.”
Jasmine Sun wrote an excellent NeurIPS scene report:
“On the ground, NeurIPS feels like one long holiday party, where grad students from around the world fly in to break from tuning their hyperparameters to drink champagne on some tech company’s dime.”
POLICY
President Trump signed an executive order on “ensuring a national policy framework for artificial intelligence.”
Like the leaked draft, it establishes an AI Litigation Task Force to sue states, and makes states with “onerous AI laws” ineligible for BEAD funding.
Unlike the draft, it does not specifically criticize SB 53, and it says the legislative recommendation for a federal AI framework will “not propose preempting otherwise lawful State AI laws relating to … child safety protections,” among other things.
Some states are already planning to sue the administration over the EO.
President Trump also allowed Nvidia to sell H200 chips to “approved customers” in China.
The government will get a 25% cut of Nvidia’s revenue on the chips. The new rules also apply to AMD and Intel.
The decision was reportedly informed by White House analysis that Huawei will produce several million 910C chips next year.
That number is significantly higher than forecast by industry experts.
The move was slammed by Democrats. Congressional Republicans were more timid.
China immediately called a meeting with top tech companies to assess their demand for the chips.
ByteDance and Alibaba are reportedly asking for permission to buy H200s.
Nvidia is reportedly considering increasing production capacity to satisfy the demand.
DeepSeek, meanwhile, is reportedly using smuggled Blackwell chips to train its new model.
Shortly before Trump announced the policy change, the Justice Department unsealed a guilty plea related to trafficking Nvidia H100 and H200 chips to China.
China is reportedly considering a $70b incentive package to support its domestic chipmaking industry.
The House passed the NDAA. It includes several minor AI provisions, including the creation of an AI security framework for defense systems.
House Democrats launched a new “Commission on AI and the Innovation Economy,” after Republicans abandoned the bipartisan task force.
It will be led by Reps. Lieu, Foushee, and Gottheimer, with Reps. Pallone and Lofgren serving as ex officio co-chairs.
Reps. Foster and Sessions introduced the REAL Act, which would require federal agencies and officials to label AI-generated content.
Sens. Hassan and Banks introduced a bill creating two national competitions, run by the DHS, for research in AI interpretability and adversarial robustness.
42 state attorneys general warned AI companies that their chatbots’ “delusional outputs” could violate state laws.
The Pentagon launched a Gemini-powered chatbot for military personnel.
The International Network of AI Safety Institutes was rebranded as the International Network for Advanced AI Measurement, Evaluation and Science.
DeepMind announced an expanded partnership with the UK’s AI Security Institute focused on “foundational security and safety research.”
It also announced plans to build an “automated science laboratory” in the UK next year.
The EU launched an antitrust investigation into Google’s use of online content for training AI models.
Intel won a €140m reduction to its EU antitrust fine for abusing its position in the chip market; the fine now stands at €237.1m.
INFLUENCE
Leading the Future, the a16z- and Greg Brockman-backed super PAC network, announced its first candidate ads.
It’s supporting Chris Gober, a Republican in Texas, and opposing Alex Bores.
OpenAI filed a ballot measure in California proposing safety controls for AI chatbots.
It appears designed to compete with a stricter proposal from Common Sense Media, which was recently opposed by the California Chamber of Commerce.
Fathom held a Congressional briefing on independent verification and auditing of AI systems.
A coalition of environmental groups urged Congress to enact a moratorium on AI data centers, citing their energy and water usage.
OpenAI has reportedly raised concerns over the Tarbell Center for AI Journalism, Transformer’s publisher.
Semafor reported on potential conflicts of interest posed by Tarbell’s fellowship program, which places AI journalists in newsrooms.
Cillian Crosson, Tarbell’s director, noted that “[Tarbell] fellows and their host newsrooms possess total autonomy; Tarbell never directs coverage, assignments, or angles.”
The piece was quickly amplified by David Sacks and people associated with the new AI industry-funded super PACs.
A new survey found that the UK public wants independent AI regulation.
More than 100 UK parliamentarians, including former defence secretary Des Browne, joined a campaign demanding binding regulations on frontier AI systems.
INDUSTRY
OpenAI
GPT-5.2 launched with a set of impressive benchmark scores.
The thinking variant scored 70.9% on GDPval (OpenAI’s attempt at judging how well LLMs perform across a range of common work tasks), double GPT-5.1’s score.
Disney invested $1b in OpenAI.
The deal will see 200 characters, from Mickey Mouse to Darth Vader, appear in user-generated videos on Sora as part of a three-year licensing agreement.
Disney CEO Bob Iger said the deal was “a way in” to AI, adding: “No human generation has ever stood in the way of technological advance, and we don’t intend to try.”
Disney sent a cease-and-desist letter to Google shortly before the deal was announced.
ChatGPT reportedly maintained its lead in usage with 900m monthly users, up 5% from August to November.
Google is catching up, though, with Gemini growing its monthly users 30% to 346m.
ChatGPT topped Apple’s list of most downloaded free iPhone apps (excluding games) in the US for 2025.
OpenAI’s CEO of Applications, Fidji Simo, told reporters that ChatGPT’s “adult mode” was expected to launch in Q1 2026, with the company aiming to improve age prediction before a rollout.
The estate of a grandmother murdered by her son filed a wrongful death suit against OpenAI, alleging that delusional conversations with ChatGPT encouraged him to believe she was part of a conspiracy against him.
Stein-Erik Soelberg spent months talking to ChatGPT about being surveilled by a shadowy conspiracy, excerpts from which he posted to social media, before killing his mother and then himself.
Soelberg’s son Erik told The WSJ: “I think what OpenAI is doing and what they have done to make the AI remember a conversation can really turn ugly fast… You don’t know how fast that slope is going downhill until a tragedy like the one with my father and grandmother happened.”
OpenAI said it was increasing investment in cybersecurity safeguards in response to its latest models approaching the “high” capability threshold on its Preparedness Framework.
The company has reportedly become more hesitant in publishing research on AI’s negative economic impact, leading to the recent departure of two staffers.
An outside economist who worked with the company reportedly told Wired that OpenAI was increasingly publishing research which cast AI in a positive light.
OpenAI and Instacart launched a grocery shopping experience inside ChatGPT, allowing users to plan meals and check out without leaving the interface.
The company denied that a Target shopping prompt in ChatGPT was advertising, despite users’ perceptions.
Google reportedly told advertising clients ads are coming to Gemini in 2026.
It launched a pilot program testing AI article overviews on the Google News pages of publications including The Guardian, The Times of India and The Washington Post.
Google announced that AI glasses with Gemini will launch in 2026 in partnership with Samsung, Warby Parker and Gentle Monster.
Google reportedly plans to make two versions of its TPUv8, one with Broadcom and another with MediaTek, to reduce what it pays to silicon design partners.
It launched a $4.44 monthly AI Plus subscription in India to compete with OpenAI’s ChatGPT Go.
NextEra Energy and Google Cloud expanded their energy partnership to develop new data center campuses with dedicated power plants across the US.
They currently have 3.5GW of power — enough for about 2.5m homes — operating or contracted.
Meta
Meta’s new model is reportedly codenamed Avocado.
It’s considering not releasing the model weights, which would mark a significant change in strategy for the company.
The model is in part distilled from other models, including Gemma, gpt-oss, and Qwen, according to Bloomberg.
Alexandr Wang has reportedly clashed with chief product officer Chris Cox and chief technology officer Andrew Bosworth over the direction of TBD Lab.
Cox and Bosworth reportedly want it to prioritize improving Meta’s social media and ads business, while Wang wants to catch up with OpenAI and Google.
Meta announced partnerships under which it will pay news outlets for content delivered through Meta AI. Initial partners include CNN, Fox News, and USA TODAY.
It acquired Limitless, which makes a pendant that records and transcribes conversations.
Anthropic
Anthropic comms chief Sasha de Marigny said the company has “no immediate plans to go public” despite reports it is preparing for an IPO as early as next year.
On regulation, she told an Axios event: “The frontier AI developers need to basically put nutrition labels on the models that they’re deploying to the public, to billions of people every day so that they know what it’s capable of, because we’re only in the very early innings of seeing what that is.”
Broadcom revealed that the mystery customer for its $10b custom chip order was Anthropic.
CEO Hock Tan said Anthropic had placed another $11b order in the most recent quarter.
Nvidia
Nvidia revealed it has built location verification technology that could help prevent AI chip smuggling to restricted countries.
SoftBank and Nvidia discussed investing $1b in robotics AI company Skild AI at a $14b valuation.
Others
Microsoft unveiled $23b in new AI investments, with $17.5b earmarked for India.
It discussed custom chip design with Broadcom, potentially switching from Marvell.
Amazon said it would invest $35b in AI and cloud infrastructure in India through 2030, aiming to create 1m jobs.
Mistral AI launched Devstral 2, a new coding model, and a new vibe coding interface named, inventively, Mistral Vibe.
WSJ reported on the race between SpaceX and Blue Origin to develop orbital AI data centers, including plans by Google and Planet Labs to launch test satellites in 2027.
Nvidia-backed Starcloud ran Google’s Gemma LLM on an orbital satellite with an H100 GPU.
Unconventional AI launched with $475m in funding at a $4.5b valuation.
It’s developing biology-inspired, energy-efficient AI hardware.
VC firms invested $600m into US critical-mineral startups this year as tech tries to reduce reliance on China.
Arm signed an agreement with South Korea to “strengthen the country’s semiconductor and artificial intelligence sectors.”
That includes setting up a chip design school that will train 1,400 specialists.
The NYT sued Perplexity AI for allegedly copying millions of articles without permission to power its AI products.
A new platform, AXM, launched to help creators control how their work is used in AI training.
MOVES
OpenAI hired Slack CEO Denise Dresser as chief revenue officer. She’ll manage OpenAI’s enterprise business.
Google named veteran executive Amin Vahdat chief technologist for AI infrastructure, overseeing the company’s $90b capex strategy.
Brandon Amos left Meta Superintelligence Labs to join Reflection AI to work on “safe, open, and accessible” frontier models.
Apple chip chief Johny Srouji told staff he’s “not leaving anytime soon” despite earlier reports he had discussed an exit.
Hinge CEO Justin McLeod stepped down to launch an AI-driven venture called Overtone.
Sang Michael Xie joined OpenAI to work at the “intersection of synthetic data and RL.”
Claire Larkin and Ben Snyder joined Encode AI as policy advisers.
Nick Clegg joined Hiro Capital as a general partner. Yann LeCun will advise the VC firm.
Kylie Robison joined Core Memory to write about tech alongside Ashlee Vance.
RESEARCH
NeurIPS attracted a record 26,000 attendees this year.
An analysis of accepted papers at the event by AI World found that cutting-edge AI research is increasingly concentrated in Beijing, Shanghai and San Francisco.
According to The Information, plenty of attendees were wondering whether existing methods for developing AI would keep leading to major breakthroughs.
A random survey of 115 attendees found that only 69.5% knew what AGI stands for when asked, slightly up from 63% last year.
Anthropic donated the Model Context Protocol, which allows models to access other services, to the Linux Foundation.
It joined OpenAI, Google, Microsoft, AWS, Block, Bloomberg, and Cloudflare in setting up the Agentic AI Foundation to “advance open-source agentic AI.”
A new paper from the UK AISI Model Transparency team found that it can be hard to detect “sandbagging” by AI models.
Pew found that 64% of US teens use AI chatbots, with ChatGPT the most popular, accounting for 59% of usage.
OpenAI released a survey of 9,000 respondents claiming that AI tools are saving workers 40 to 60 minutes of working time a day.
The International Committee of the Red Cross warned that AI models are fabricating research papers and journals, causing problems for librarians and researchers.
Google published two new papers on giving AI models long-term memory.
BEST OF THE REST
A mysterious company — BorderPlex Digital Assets — proposed a $165b AI data center project in rural New Mexico.
The FT made a fun interactive article to illustrate the US’ 19GW power gap for AI data centers.
The WSJ, meanwhile, reported on how China is building the world’s biggest power grid.
The Information published a long read on how Broadcom CEO Hock Tan transformed the company into a trillion-dollar competitor to Nvidia.
“The Architects of AI” are TIME’s 2025 Person of the Year.
Edmonton police launched a pilot program using AI-powered body cameras to identify people on a 7,000-person “high risk” watch list.
The NYT got hold of CCTV footage showing the moment a Waymo car killed beloved San Francisco bodega cat Kit Kat.
Reddit moderators are being overwhelmed by AI-generated content.
Matt Clifford and Rory Stewart are doing a five-part podcast on AI.
Sam Lessin and others ran an “Etiquette Finishing School” to teach tech founders social skills. Attendees were handed a gift bag with a comb, shampoo, and mouthwash.
Arm booked out an entire Cambridge Christmas market for its staff. “Thousands” of families were turned away, with some kids reportedly crying.
“It’s not in the spirit of Christmas, it’s the anti-spirit of Christmas,” one disappointed visitor said.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.