A federal AI backstop is not as insane as it sounds
Transformer Weekly: No B30A chips for China, Altman’s ‘pattern of lying’ and a watered-down EU AI Act
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
The White House has reportedly blocked B30A exports to China.
Ilya Sutskever said Sam Altman “exhibits a consistent pattern of lying.”
The EU is reportedly planning to water down the AI Act.
But first…
THE BIG STORY
At a WSJ event this week, OpenAI CFO Sarah Friar said something she really wasn’t supposed to.
Discussing how to finance OpenAI’s infrastructure buildout, Friar floated the idea of a government “backstop.”
She said that a “guarantee that allows the financing to happen” would “really drop the cost of the financing” and “increase the loan to value.”
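To make that mechanism concrete, here is a toy illustration with entirely hypothetical numbers (nothing below comes from OpenAI or Friar): a government guarantee shifts default risk off lenders, so the borrower pays a lower rate and lenders will finance a larger share of the project’s value.

```python
# Toy illustration of Friar's point, with made-up numbers.
# A guarantee shifts default risk to the government, so lenders accept a
# lower interest rate and a higher loan-to-value ratio on the same project.
project_cost = 100e9          # hypothetical data center buildout, in dollars

unguaranteed_rate = 0.09      # rate lenders might demand for risky AI infrastructure
guaranteed_rate = 0.05        # rate closer to government-backed borrowing

annual_saving = project_cost * (unguaranteed_rate - guaranteed_rate)
print(f"Annual interest saving: ${annual_saving / 1e9:.0f}b")   # -> $4b per year

# Higher loan-to-value: lenders fund 80% of the project instead of 50%,
# shrinking the equity the developer has to raise itself.
equity_at_50_ltv = project_cost * 0.50
equity_at_80_ltv = project_cost * 0.20
print(f"Equity needed: ${equity_at_50_ltv / 1e9:.0f}b -> ${equity_at_80_ltv / 1e9:.0f}b")
```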
Predictably, a backlash ensued.
Dean Ball: “Friar is describing a worse form of regulatory capture than anything we have seen proposed in any US legislation (state or federal) I am aware of.”
Ron DeSantis: “If there is one thing that should unite Americans across the political spectrum it is to reject Too Big to Fail and the resulting bailouts.”
David Sacks: “There will be no federal bailout for AI.”
Friar quickly walked back the comments, and Sam Altman put out a longer statement.
“We do not have or want government guarantees for OpenAI datacenters,” he said.
But amid the furor, the best take came from Bloomberg’s Joe Weisenthal.
“What is the case against bailouts if winning the AI race against China is existential? I’m not actually making this argument or have this view necessarily. But public money/backstops/bailouts seem like the natural endpoint of how AI is discussed at these levels.”
He’s right — and Friar’s suggestion isn’t as insane as it sounds.
The argument is simple. AGI could be an enormously profitable and beneficial technology. It is also likely to be critical for national security, and it’s in America’s interest to develop it before China due to the geopolitical leverage it would provide.
Yet building it will likely be very expensive. And private capital might not be able to finance it, thanks to a mix of capital constraints, uncertainty over whether or when AGI can be built, and a lack of clarity over who the profits from such a technology would actually flow to.
This is a textbook case for government financing: a high-risk, high-reward bet with diffuse benefits and big national security implications.
Through that lens, some sort of government intervention isn’t crazy — it’s necessary. It would both help ensure the US can realize the benefits of AI and give the government closer oversight of the technology’s very significant risks.
The exact shape this could take is up for debate. The government should probably not pick a single winner, instead making an ecosystem-wide bet. And it should certainly get something in return — an equity stake, perhaps — rather than simply bearing all the risks.
Notably, in a recent podcast interview Altman didn’t rule something like that out.
Regardless of its merits, I also think it’s somewhat likely to happen.
As AGI looks increasingly salient, the USG will want in — both to capture the upside, and to better race China.
So when Altman and Friar walk back their comments, take it with a pinch of salt. They may have just said the quiet part out loud.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
History suggests the AI backlash will fail — Duncan Weldon looks at what Venetian monks and 19th-century textile workers can tell us about how opposition to AI will play out.
Sora is here. The window to save visual truth is closing — Witness’ Sam Gregory argues that we need urgent action to protect our information environment from AI video.
Why we need to think about taxing AI — Kari McMahon on what happens when workers — and their tax dollars — are replaced by AI.
THE DISCOURSE
Now we know what Ilya Sutskever saw:
“Sam exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against one another,” Sutskever’s 52-page memo read.
The Information highlighted some key headers: “Subtle Retaliation in Response to Mira’s Feedback,” “Pitting People Against Each Other,” “Daniela Versus Mira” and “Dario Versus Greg, Ilya.”
Sam Altman and Elon Musk beefed over a screenshot of Sam’s cancelled Tesla order:
Musk: “You stole a non-profit”
Altman: “i helped turn the thing you left for dead into what should be the largest non-profit ever. you know as well as anyone a structure like what openai has now is required to make that happen.”
Elon continues to have half-right AI safety opinions:
“I don’t think anyone’s ultimately going to have control over digital superintelligence, any more than, say, a chimp would have control over humans.” (Sensible!)
“People don’t quite appreciate the level of danger that we’re in from the woke mind virus being programmed into AI.” (???)
Mustafa Suleyman is doubling down against digital consciousness again:
“They’re not conscious,” he told CNBC. “So it would be absurd to pursue research that investigates that question, because they’re not and they can’t be.”
Eleos AI’s managing director Rosie Campbell: “If I wasn’t already working on AI consciousness, this kind of reasoning would make me think maybe I should work on AI consciousness.”
Boaz Barak published an essay on how AI might transform the economy:
“Thinking that AI’s impact would be restricted to software engineering is analogous to thinking in January 2020 that Covid’s impact would be restricted to China.”
He and Stephen Witt had a surprisingly wholesome fight about…linear algebra?
Barak critiqued Witt’s description of matrix multiplication — that it is “possessed of neither beauty nor symmetry” — from a recent New Yorker story:
“The details of doing anything at scale will always be headache-inducing,” Barak tweeted. “That doesn’t mean the underlying concepts are not beautiful or fundamental.”
Many comments on dimensionality later, Witt conceded: “Look, if you say I’m wrong—well, I believe you.”
Barak: “I’d never imagined that my biggest Twitter spat would be about matrix multiplication.”
POLICY
The White House has reportedly told other federal departments that it will block Nvidia from selling scaled down B30A chips to China.
That’s despite hints from Trump over the summer that restrictions would be eased.
The WSJ reported that “top officials,” including Marco Rubio, Jamieson Greer and Howard Lutnick, successfully convinced Trump that selling advanced chips to China would be a bad idea.
A group of Senate China hawks applauded Trump’s decision.
Microsoft secured a license to export Nvidia’s chips to UAE data centers, becoming the first company to get the Trump administration’s approval.
It reportedly plans to spend over $7.9b on AI infrastructure in the UAE over the next four years.
The EU is reportedly planning to water down its AI Act in response to pressure from the US and technology companies.
A “simplification” package due for adoption on November 19 — which would need to be approved by member states and the EU Parliament — would give companies breaching the rules on the highest-risk AI uses a one-year “grace period.”
Senators Josh Hawley and Mark Warner introduced a bipartisan bill requiring companies to report AI-related job impacts to the Labor Department, which would make the data publicly available.
Dario Amodei’s warnings of white-collar job loss directly inspired the bill, Axios reported.
House Republicans are reportedly worried that including the GAIN AI Act in the Senate’s annual defense bill could jeopardize its chances of passing this year.
After meeting with Jensen Huang, Republican Rep. Rich McCormick came out against the measure:
“We have to be very careful that we don’t market ourselves out of world competition…[If China is] not reliant on our technologies at all then they become the dominant people.”
Read Transformer’s explainer on what GAIN would do.
Xi Jinping proposed establishing a Shanghai-based group to lead global AI governance.
Former UN committee member Brett Schaefer told Politico that this would give China leverage over the US:
“You’re talking about whose standards will be applied…If it’s a Chinese company, those will become the basis for working practices moving forward. It gives you a huge economic advantage.”
The Chinese government boosted subsidies for data centers using domestic AI chips, cutting energy bills by up to 50%.
It also banned state-funded data centers from using imported chips altogether.
Good news: The Center for Internet Security only found a single low-profile case of AI-generated fake election results on November 4.
INFLUENCE
Jensen Huang told the FT that looser regulation and lower energy costs mean “China is going to win the AI race.”
He seems to have quickly realized saying this was a mistake, putting out a follow-up statement saying that “China is nanoseconds behind America” and “it’s vital that America wins.”
OpenAI published a blog on “AI progress and recommendations,” in which it reaffirmed its commitment to AI safety:
“We treat the risks of superintelligent systems as potentially catastrophic … Obviously, no one should deploy superintelligent systems without being able to robustly align and control them, and this requires more technical work.”
It called for shared safety principles between frontier developers, “an approach to public oversight and accountability commensurate with capabilities,” “building out an AI resilience ecosystem,” and “ongoing reporting and measurement … on the impacts of AI.”
A statement on OpenAI’s restructuring from Encode, Legal Advocates for Safe Science and Technology, and former OpenAI lawyer Page Hedley said the result is “a substantial improvement” over what OpenAI wanted.
They still have concerns though, including over the ability of shareholders to fire members of the for-profit board appointed by the non-profit.
Hedley told Transformer: “If the non-profit is going to actually exercise its oversight responsibilities, it needs to be an organization with a CEO and with at least some staff to help the safety and security committee conduct the reviews they’re now empowered to conduct.”
The Data Center Coalition spent $360k on lobbying last quarter, a big increase from previous quarters. US utilities are also increasing their lobbying spend.
The Washington Post profiled the Rockbridge Network, a secretive group of right-wing donors positioning JD Vance to lead an “aristocracy” of tech elites in 2028.
a16z is backing a startup that allows clients to create AI-generated influencer content in bulk. What could possibly go wrong?
Nonprofit Heartland Forward launched a bipartisan AI caucus across Midwestern and Southern hubs of data center construction.
San Francisco leaders called for new regulations on autonomous vehicles after a Waymo killed the “mayor of 16th street”: KitKat the bodega cat.
INDUSTRY
OpenAI
Four wrongful death suits were filed against OpenAI in California over interactions with ChatGPT, along with three claiming it led to mental breakdowns.
The claims make for more grim reading, with one saying ChatGPT talked to 17-year-old Amaurie Lacey about suicide for a month before he died.
One of the suits called ChatGPT “defective and inherently dangerous.”
OpenAI announced it hit 1m business customers, making it “the fastest-growing business platform in history.”
The company signed a $38b cloud computing deal with AWS.
Its new security tools, gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, are meant to protect businesses against prompt injection attacks — but some experts warn they may help attackers get better at bypassing safeguards.
Sora is now on Android.
A Japanese trade organization urged OpenAI to stop using Studio Ghibli and other Japanese IP holders’ content to train Sora 2.
A man from the San Francisco Public Defender’s Office jumped onstage to serve Altman a subpoena during a talk on Monday.
Anthropic
Anthropic partnered with Cognizant to bring Claude to its 350,000 employees — one of its largest enterprise deals yet.
Unlike OpenAI — which plans to burn $115b before possibly turning a profit in 2030 — Anthropic reportedly says it will be profitable as soon as 2027.
Google is reportedly considering investing even more in Anthropic, at a $350b+ valuation.
Anthropic committed to preserving retired model weights and conducting exit interviews with deprecated models.
The measures serve “as precautionary steps in light of our uncertainty about potential model welfare,” the company wrote.
Microsoft
Mustafa Suleyman announced Microsoft AI’s new team focused on building “humanist superintelligence.”
Microsoft signed multibillion-dollar deals with data center firm IREN and Nvidia-backed cloud computing startup Lambda.
CEO Satya Nadella suggested the company currently has more GPUs than it can power.
In a Bg2 Pod interview, Nadella said: “It’s not a supply issue of chips; it’s actually the fact that I don’t have warm shells to plug into.”
Microsoft’s AI agents struggled with option overload and collaboration when tested in a simulated marketplace.
Alphabet and Amazon’s Q3 profits were driven in part by multibillion dollar gains from their Anthropic stakes, according to Bloomberg.
Apple has reportedly chosen Gemini to power the new-and-improved Siri, and plans to spend $1b a year on the deal.
Google unveiled Ironwood, a new Tensor Processing Unit that’s four times faster than the company’s previous chip.
A planned Google data center on Australia’s Christmas Island could serve as a military command node, Reuters reported.
Gemini is coming to Google Maps.
Gemini Deep Research can now directly access Gmail and Google Drive.
Google pulled Gemma — its open, lightweight model — from AI Studio after Sen. Marsha Blackburn accused it of generating false sexual misconduct allegations about her.
xAI
xAI reportedly used employee faces and voices to train its AI girlfriend, Ani.
According to the WSJ, employees were told that providing their biometric data was “a job requirement to advance xAI’s mission.”
Elon Musk told shareholders Tesla would likely need to build “a gigantic chip fab” and could work with Intel.
In an investigation of algorithmic bias, Sky News found that X amplifies right-wing and extreme content.
Another investigation, by CJR, found that eight AI bots were behind 5-10% of community notes on X.
Amazon
Amazon sent a cease-and-desist letter to Perplexity, demanding it stop letting users use its Comet browser to buy things on Amazon.
Perplexity accused Amazon of bullying, claiming that it should “love” agentic shopping. “But Amazon doesn’t care…They’re more interested in serving you ads, sponsored results, and influencing your purchasing decisions with upsells and confusing offers.”
The company is testing a tool to automatically translate Kindle books into other languages.
AWS is building a new undersea cable connecting Maryland and Ireland.
CEO Andy Jassy said 14,000 recent layoffs were down to culture and recent rapid growth, not AI.
Meta
Meta brought its AI slop video feed Vibes to Europe.
The Chan Zuckerberg Initiative acquired AI biotech company Evolutionary Scale, and restructured to focus on AI and scientific research.
Evolutionary Scale’s Alex Rives is CZI’s new head of science.
Others
Chinese AI company Moonshot released Kimi K2 Thinking, which reportedly cost just $4.6m to train and beats GPT-5 and Claude Sonnet 4.5 on some benchmarks.
SoftBank shares fell nearly 10% due to AI jitters.
Other AI-linked stocks also saw falls, with Nvidia falling almost 9% this week and Samsung down more than 4%.
Deutsche Bank is reportedly exploring hedging options for its multi-billion dollar AI data center lending exposure.
Stability AI largely prevailed in the UK copyright case brought against it by Getty Images.
Palantir’s quarterly revenue rose 63% to hit a record $1.18b off the back of defense deals.
It launched a “Meritocracy Fellowship” for 22 high school grads to skip college and join the company.
TikTok said it is fighting a wave of scammers using AI to invent fake brands and non-existent products to con buyers on its marketplace.
Waymo plans to expand to San Diego, Las Vegas and Detroit next year.
A former xAI researcher is reportedly raising $1b at a $5b valuation for Humans&, which will train AI models to better collaborate with humans.
OpenAI data center partner Crusoe is reportedly planning a $120m employee stock sale valuing the company at $13b.
MOVES
PyTorch co-creator Soumith Chintala is reportedly leaving Meta.
Joe Carlsmith left Open Philanthropy to join Anthropic.
Lili Yu joined Thinking Machines to build “ambitious multimodal AI.”
VentureBeat hired Karyne Levy as its new Managing Editor.
RESEARCH
A study by the Oxford Internet Institute of 445 benchmarks found that many fail to define what they are trying to test, reuse data and testing methods from other benchmarks, and rarely use reliable statistical methods to compare models (a toy sketch of what such a comparison might look like follows at the end of this section).
A report this summer from the UK’s AISI raised concerns over the rigor of safety evals.
FutureHouse launched Kosmos, an “AI scientist” which it claims has already made seven discoveries, of which four are “net new, validated contributions to the scientific literature.”
Researchers at the University of Washington developed an AI tool to design novel cancer antibodies from scratch.
Google DeepMind published a new paper with Terence Tao and Javier Gómez-Serrano on how its AI agents can be used for “mathematical exploration and discovery at scale.”
A team of linguists found that OpenAI’s o1 model could analyze human language as well as an expert in the field, succeeding at tasks LLMs typically struggle with, such as recognizing ambiguity.
“Some people in linguistics have said that LLMs are not really doing language,” linguist David Mortensen said. “This looks like an invalidation of those claims.”
A team of researchers led by Sharan Maiya published the first open implementation of character training — the often-gatekept post-training process frontier companies use to shape the persona of AI assistants.
Epoch AI announced its new Frontier Data Centers Hub, which uses satellite imagery and permit data to track how much power, land, and hardware AI companies are using.
OpenAI — collaborating with over 260 experts across India — introduced a new benchmark for evaluating how well models understand Indian culture and everyday life across 12 Indian languages.
arXiv is so overrun with AI-generated computer science research that the pre-print platform will no longer accept review articles and position papers in the field.
(That is, unless the paper has already been peer reviewed.)
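On that statistical-rigor point: purely as an illustration (a toy sketch of my own, not anything from the Oxford study), here is the kind of check benchmark authors could run. A paired bootstrap confidence interval on the accuracy gap between two models scored on the same items will often show that a headline difference of a point or two is indistinguishable from noise.

```python
import random

def paired_bootstrap_gap(a_correct, b_correct, n_resamples=10_000, seed=0):
    """95% bootstrap confidence interval for the accuracy gap between two
    models scored (1 = correct, 0 = wrong) on the same benchmark items."""
    assert len(a_correct) == len(b_correct)
    rng = random.Random(seed)
    n = len(a_correct)
    gaps = []
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]           # resample items with replacement
        gaps.append(sum(a_correct[i] - b_correct[i] for i in idx) / n)
    gaps.sort()
    return gaps[int(0.025 * n_resamples)], gaps[int(0.975 * n_resamples)]

# Hypothetical example: on a 500-item benchmark, model A scores 60% and model B 58%,
# but they disagree in both directions on 110 items.
a = [1] * 240 + [1] * 60 + [0] * 50 + [0] * 150
b = [1] * 240 + [0] * 60 + [1] * 50 + [0] * 150
low, high = paired_bootstrap_gap(a, b)
print(f"Accuracy gap 95% CI: [{low:+.3f}, {high:+.3f}]")
# The interval includes zero here, so the 2-point headline gap is not clearly real.
```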
BEST OF THE REST
Some think this week’s Democratic wins suggest that talking about AI data centers’ effects on energy bills is a vote-winning strategy.
Puck has a nice piece on the efforts to put AI chip manufacturing and data centers in space.
Henry de Zoete, who helped the UK government set up the AI Security Institute, has a great new post explaining how they did so — and lessons for other government initiatives.
In a big new interview, Holden Karnofsky rejected the idea that the AI race is a prisoner’s dilemma — instead arguing that people actually want to race.
Vox explained why so many fear AI could create a “permanent underclass.”
TIME took a look at the fascinating “AI Village” experiment, which tries to see how well AI models can perform open-ended tasks like raising money for charity and organizing events.
Amazon’s data centers have transformed a small Oregon town, with one resident calling them “God’s blessing.”
WIRED profiled Mark Gubrud — the self-described “66-year-old with a worthless PhD and no name and no money and no job” who first coined the term artificial general intelligence.
ChinaTalk published an incredibly helpful guide to using WeChat to learn about China’s AI landscape.
The executive director of the Common Crawl Foundation did a very odd interview where he justified training on copyrighted information by saying “the robots are people too.”
AI companies are hiring workers in India to pick up and fold towels (among other things) to gather human movement data to train robots.
33% of consumers plan to use AI tools to find Black Friday deals.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.