The UN can take on AI without Trump
Transformer Weekly: Nvidia invests in OpenAI, Newsom might sign SB 53, and Alibaba’s shooting for ASI
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Nvidia said it will invest up to $100b in OpenAI
Gavin Newsom signaled he’ll sign SB 53
Alibaba declared its aim to build AGI and ASI
But first…
THE BIG STORY
As the main UN General Assembly meeting kicked off this week, you’d be forgiven for optimism about global AI governance.
The week began with Nobel laureates, former heads of state, and frontier AI developers calling for global red lines on AI development.
“An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks,” they urged.
AI also provided one bright spot in President Trump’s otherwise unhinged speech to the assembly.
“My administration will lead an international effort to enforce the Biological Weapons Convention, which is going to be meeting with the top leaders of the world by pioneering an AI verification system that everyone can trust,” Trump said. “Hopefully the UN can play a constructive role and it will also be one of the early projects under AI.”
But when the Security Council discussed AI on Wednesday, Michael Kratsios made international cooperation seem further away than ever.
“We totally reject all efforts by international bodies to assert centralized control and global governance of AI,” Kratsios said, arguing that “broad overregulation incentivizes centralization, stifles innovation, and increases the danger that these tools will be used for tyranny and conquest.”
“Ideological fixations on social equity, climate catastrophism, and so-called existential risk are dangers to progress and obstacles to responsibly harnessing this technology,” he added.
The rest of the administration quickly followed suit.
“AI Catastrophism is the new Climate Catastrophism,” Jacob Helberg posted. “Catastrophists always claim a massive expansion of regulatory control is necessary to ‘protect’ you from an imminent calamity. WE TOTALLY REJECT IT.”
“One world government + centralized control of AI = tyranny,” said Sriram Krishnan.
This is, obviously, not a good development. Advanced AI systems pose significant risks, many of which can and will cross borders. The only way to tackle these is some sort of international agreement on how to govern them — agreeing safeguards that keep systems under human control, for instance.
As Anton Leicht notes, the administration’s approach “has a clear expiration date.” It works for the world we’re in now, but if systems continue to get more powerful “America would be very well served to have strong international mechanisms in place.”
If the US wants a say in what those mechanisms look like, it should engage now — lest it cede control to other nations.
And other nations seem to understand the stakes. While the Trump administration stuck its head in the sand, other countries showed a surprising awareness of the AI reality.
China’s Ma Zhaoxu said it’s “essential to ensure that AI remains under human control.” The UK’s David Lammy said “superintelligence is on the horizon.”
Russia’s Dmitry Polyanskiy was, surprisingly, the most sage, noting that building “a technology that is not fully understood or controlled, without sufficient AI safety measures … could, much like an ‘arms race,’ endanger humanity’s very existence.”
While US buy-in is ultimately necessary for concrete action, other nations can still help shift the world towards consensus, laying the groundwork for a future administration to act more wisely.
Initiatives such as the new “global dialogue” on AI governance and the International Scientific Panel on AI don’t have any teeth, but can, much like similar initiatives on climate change, help nudge things in the right direction.
As Trump noted in his speech, “The UN has such tremendous potential … but it’s not even coming close to living up to that potential.” That’s true — but it can make some headway without him.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
No, ChatGPT isn’t ‘making us stupid’ — Celia Ford has the rundown on what we know and don’t know about LLMs and the brain.
Nobel laureates and AI developers call for ‘red lines’ on AI — Shakeel Hashim on the heavy-hitters calling for international agreements on AI by the end of next year.
Insurance might be the key to making AI secure — Cristian Trout, Rajiv Dattani, and Rune Kvist argue that insurance can help reward responsible development of AI.
How the UK can seize on Trump’s immigration mistakes — Julia Willemyns on how H-1B visa changes present a generational opportunity for the UK to scoop up AI talent.
THE DISCOURSE
OpenAI’s Leo Gao said the AI industry should probably stop:
“Racing to build superintelligence before we know how to make it not kill everyone (or cause other catastrophic outcomes) seems really bad and I wish we could coordinate to not do that.”
Anthropic’s Evan Hubinger concurred, sort of:
“I think it would be better if nobody was building AGI. I don’t expect that to happen, though.”
OpenAI’s Jerry Tworek offered a different view:
“At OpenAI regularly you hear a lot of complaints about how bad things are … A bit of Eastern European culture that became part of the company DNA … We all collectively believe AGI should have been built yesterday and the fact that it hasn’t yet [is] mostly because of a simple mistake that needs to be fixed.”
Sam Altman was one of many tech CEOs to kiss the ring and endorse Trump’s new $100,000 fee on H-1B visas.
“We need to get the smartest people in the country, and streamlining that process and also sort of outlining financial incentives seems good to me.”
In general, the tech industry has been notably quiet about pushing back on the changes, despite their likely adverse impact (especially on startups).
Matt Yglesias thinks tech CEOs’ faith in AGI is causing dangerous indifference to Trump’s many mistakes:
“Corporate America, and the US stock market, have a bad case of AGI fever, a condition in which belief in a utopian future causes indifference to the dystopian present.”
POLICY
Gavin Newsom suggested that he’ll sign SB 53.
“We have a bill — forgive me, it’s on my desk — that we think strikes the right balance,” he said in an interview with Bill Clinton this week.
California lawmakers passed two more competing AI safety bills, AB 1064 and SB 243, aimed primarily at AI companions and their interactions with children. Tech industry lobbying around the bills has reportedly been intense.
The Trump administration is reportedly considering requiring a 1:1 ratio of domestic to imported semiconductor production.
“The policy’s goal is to have chip companies manufacture the same number of semiconductors in the US as their customers import from overseas producers. Companies that don’t maintain a 1:1 ratio over time would have to pay a tariff, according to people familiar with the concept.”
Michael Kratsios and OMB director Russell Vought issued a memo on R&D priorities for fiscal year 2027, with AI at the head of emerging technologies all agencies should “prioritize and invest in.”
They wrote: “Federal investment in fundamental research into novel AI paradigms and computing architectures will support continued American leadership in this field. Areas of emphasis include AI architectural advancements; data-efficient and high-performance AI techniques and systems; the interpretability, controllability, and steerability of AI systems; and AI adversarial robustness, resilience, and security.”
The White House OSTP also issued a request for information on regulations “that unnecessarily hinder the development, deployment, and adoption” of AI.
The General Services Administration approved Meta’s Llama and xAI’s Grok for use by government agencies.
Senate Minority Leader Schumer and senior Democrats introduced a bill addressing AI and neurotechnology risks.
The Chinese government is reportedly taking greater control of its data center capacity as part of plans to create a “Stargate of China” to challenge the US.
Taiwan suspended its chip export controls on South Africa just two days after placing them on the China ally — the first time it had imposed such controls unilaterally.
The UK government recovered £480m in fraud using a new AI tool that will now be licensed internationally.
INFLUENCE
Mark Zuckerberg and Sam Altman have reportedly been cosying up to Trump since the President’s rift with Elon Musk.
Both Meta and OpenAI have deliberately pursued closer ties with the admin, seeking protection from regulation and support for commercial opportunities, though many in Trump’s circle remain skeptical of the former Democrat donors.
Meta launched another super PAC to fight state-level AI regulations.
The American Technology Excellence Project will invest “tens of millions” to support tech-friendly candidates and oppose those it sees as hostile.
The Business Software Alliance launched an ad campaign featuring CEOs urging faster AI adoption.
The Tony Blair Institute has reportedly built incredibly close ties to Larry Ellison, and subsequently advocated for NHS data projects that would benefit Oracle.
Ellison has reportedly donated or pledged £257m to TBI.
Peter Thiel reportedly said regulating AI would hasten the coming of the Antichrist.
INDUSTRY
OpenAI
Nvidia said it will invest up to $100b in OpenAI to build 10GW of data center capacity. The structure of the deal has raised lots of questions.
Altman and Jensen Huang reportedly agreed the investment directly, “largely without formal advice from the bankers,” sealing the deal when they accompanied Trump on his UK state visit earlier this month.
The deal reportedly may involve OpenAI leasing at least some of the chips from Nvidia, at a 10-15% discount to buying them outright.
The deal has raised antitrust concerns, questions about how so much capacity will be built and powered, and worries about its “circular nature.”
It led SemiAnalysis’s Dylan Patel to joke about an “infinite money glitch” in San Francisco.
OpenAI has plenty of other capacity deals underway too, announcing five new AI data centers built with Oracle and SoftBank, creating nearly 7GW of compute as part of its Stargate project.
In a blog post, Altman said he wants to “create a factory that can produce a gigawatt of new AI infrastructure every week.”
Altman reportedly told staff he wants OpenAI to build 250GW of data center capacity by 2033. He said he plans to spend more than half his time on infrastructure deals and financing for the rest of the year.
The company launched ChatGPT Pulse, “a new experience where ChatGPT proactively does research to deliver personalized updates based on your chats, feedback, and connected apps like your calendar.”
It will deliver a briefing for you every morning: as the company notes, it’s the first step towards building a proper personal assistant.
The company has reportedly poached over two dozen Apple employees and signed a deal with Apple assembly firm Luxshare, seemingly for the consumer hardware project Jony Ive is leading.
Sora can closely mimic films, TV shows, and other copyrighted content, the Washington Post reported, suggesting it was trained on them.
OpenAI and SAP announced a “sovereign AI” project, which is just OpenAI models running in German data centers.
Microsoft
Microsoft announced a new $4b data center in Wisconsin, to sit alongside its soon-to-be-finished $3.3b facility there.
Its $1b facility in Kenya, being built in partnership with the UAE’s G42, is reportedly delayed as focus shifts to building data center capacity in the US and Gulf instead.
Microsoft added Anthropic’s Claude models to 365 Copilot, providing an alternative to OpenAI’s models.
And it’s testing microfluidic cooling for AI chips, which sends fluid through tiny channels etched directly into processors.
Alibaba
Alibaba declared its aim to develop AGI and ASI. It said it would increase AI investment beyond its initial $53b target.
It launched Qwen3-Max and Qwen3-Omni, and previewed Qwen3-Max-Thinking. All seem good, but not at the frontier.
Meta
Oracle is reportedly in talks with Meta for a $20b cloud capacity deal.
Louisiana is reportedly giving Meta tax breaks for its new data center in the state.
Meta recently filed with US regulators to enter the wholesale power-trading business.
Meta’s reportedly considering using Gemini to improve ad targeting, as its internal models aren’t good enough.
Its Superintelligence team launched “Vibes” this week, a feed of shortform AI slop videos.
As Dean Ball said: “Shame is underrated, and this is shameful.”
Google DeepMind
DeepMind updated its Frontier Safety Framework.
It added a new Critical Capability Level (CCL) on “harmful manipulation,” and fleshed out its CCLs for automated AI R&D.
It launched an MCP server for its Data Commons, making public datasets much more accessible to AI systems.
Denys Sheremet ended his 16-day hunger strike outside the company’s London office.
xAI
xAI is reportedly raising $10b at a $200b valuation. That would value it above Anthropic (which raised at a $183b valuation earlier this month).
It sued OpenAI, alleging the company stole trade secrets by hiring xAI employees.
It launched a new “cost efficient” model, Grok 4 Fast.
The Information profiled Brent Mayo, an ex-oilman managing xAI’s data center buildout and dealing with public opposition.
X’s algorithm, meanwhile, will soon be “purely AI” and customizable.
Others
A judge gave preliminary approval to the $1.5b Anthropic copyright settlement.
Oracle named Clay Magouyrk and Mike Sicilia as co-CEOs, with current CEO Safra Catz becoming executive vice chair.
It sold $18b in bonds this week, with demand reportedly hitting nearly $88b.
CoreWeave, meanwhile, raised $29b in debt.
The Information has a piece on the AI infrastructure lenders to know.
UK data center company Nscale raised $1.1b, including $500m from Nvidia.
Scale AI is reportedly considering getting into the humanoid robot training game.
The Information has a piece on how Anthropic and OpenAI are training models on simulations of enterprise AI software, so they get good at doing real-world office tasks.
Huawei released a fine-tuned version of DeepSeek’s R1 which it said was “nearly 100% successful” in preventing discussion of politically sensitive topics.
Humanoid robot developer 1X is reportedly raising $1b at a $10b valuation.
MOVES
Meta poached OpenAI strategic explorations lead Yang Song to be research principal of Meta Superintelligence Labs.
“Yang is responsible for one of the rare pretraining research breakthroughs at OpenAI that worked at scale in the last ~year,” former colleague Rohan Pandey said.
Stephen McAleer joined Anthropic to work on safety.
KJ Bagchi is rejoining the Chamber of Progress as vice president of US policy and government relations. He’ll oversee the group’s tech policy work, among other things.
UK Science and Technology Secretary Liz Kendall hired Kirsty Innes, formerly of TBI and Labour Together, as her special adviser.
Innes has already come under fire for comments she made earlier this year about AI and copyright.
Anthropic leased two floors at 505 Howard St. in San Francisco, opposite its existing HQ, in an area being marketed as “AI Alley.”
Hunger striker Guido Reichstadter is still sitting outside that HQ.
RESEARCH
OpenAI launched GDPval, a new eval that measures performance on real-world tasks.
Claude Opus 4.1 is the best-performing model on the benchmark, producing “outputs rated as good as or better than humans in just under half the tasks.”
Scale AI launched SWE-Bench Pro, a new (harder) coding benchmark, and SEAL Showdown, a new leaderboard based on real-world user preferences.
There are lots of interesting papers from the Economics of Transformative AI Workshop — Zvi Mowshowitz has some summaries and discussion here.
A new paper found that “workslop” — AI-generated content that appears polished but lacks substance — may be causing significant productivity losses.
A new paper found that LLMs can exhibit “strategic dishonesty”, which can in turn undermine AI safety evals.
The Partnership on AI called for real-time failure detection in AI agents.
Duality Technologies is trying to use “fully homomorphic encryption” to protect LLM interactions.
BEST OF THE REST
Accenture cut 11,000 jobs and warned it would “exit” staff who cannot be retrained for AI.
Wired took a look at putting data centers in space, something Sam Altman, Jeff Bezos and Eric Schmidt are apparently all keen on.
The Pragmatic Engineer published a deep dive into how Anthropic built Claude Code. (Spoiler: Claude Code built a lot of itself.)
Works in Progress has a piece explaining why AI isn’t replacing radiologists. (Disclosure: the author works at Open Philanthropy, Transformer’s main funder.)
Chinese AI researchers in the US are increasingly looking at going home, while prospective students are opting to stay in China, due to visa restrictions and discrimination concerns, reports The Information.
Students sued a Kansas school district over an overzealous AI monitoring tool that flagged art as pornography and deleted emails.
Blind people are experimenting with Meta’s AI glasses to help with tasks such as reading menus or buying groceries, with mixed results.
Lower-capacity energy grids in countries such as Mexico and Nigeria are forcing AI companies to use fossil-fuel generators, reports Rest of World.
Mobile apps for vibe coding have failed to gain significant numbers of users, let alone generate revenue, according to TechCrunch.
Meta launched an AI assistant to help users find matches on Facebook Dating.
The Oakland Ballers baseball team let an AI manage a game. It made nearly identical decisions to the human manager. Fans still got angry.
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.