Bubbles can still change the world
Transformer Weekly: H20 drama, Meta restructuring, and lots of xAI screwups
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
Is AI a bubble? That’s the question on everyone’s lips this week, following a surprisingly candid comment from Sam Altman:
When journalists at a private dinner last week asked if AI is in a bubble, Altman said “for sure.”
He elaborated: “Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes.”
Altman’s comments — along with OpenAI’s botched GPT-5 launch — appear to have spooked investors. Tech markets dropped a bit this week, and there’s an increasing vibe that the “AI bubble is popping.”
Altman is certainly right that there’s froth in the market. There are too many AI startups whose business models don’t really make sense. Valuations for some companies seem untenable. Just look at Perplexity, which really doesn’t seem to deserve its $20bn valuation.
Many of these companies will fail, and many investors — especially the less discerning ones — will lose out.
But it’s clearly not all a bubble. Frontier AI developers are making a product that people desperately want. Their revenues are skyrocketing, and they can’t keep up with demand.
These companies aren’t profitable yet, mostly because training models is extremely expensive. But there are signs that the economics are improving, with Altman recently claiming that OpenAI is “profitable on inference” — i.e. on answering users’ questions.
Crucially, the underlying technology driving AI products is still on a powerful trajectory. Despite what you may have heard, GPT-5 is a very good model, and sits firmly on the exponential trend of improving capabilities.
Companies have been slow to adopt AI, and (as one viral study showed this week) not particularly good at getting value out of it yet. But as anyone who’s spent serious time using the models can tell you, that’s likely because the companies are bad at implementing them — not because the models themselves aren’t capable.
Some bubbles are “this isn’t useful” bubbles: NFTs, for example, or 17th-century tulips. But the evidence of AI’s utility suggests it is not like that. Instead, it looks more like the familiar “transformative tech” bubble.
History offers countless examples of technological bubbles that burst but still completely reshaped the world: canals in the 18th century, railroads in the 19th, the dotcom bubble in the 20th.
As Carlota Perez, a scholar of technological development, told the FT this week, “I have not seen a golden age happening without a crash.”
AI appears to fit this pattern: the market will no doubt see a correction, but the technology may transform the world anyway.
If progress in the underlying technology slows down, that assessment would change. But I’ve yet to see any sign of that.
Altman himself captured this duality perfectly:
“Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes.”
— Shakeel Hashim
The discourse
Eric Schmidt and Selina Xu argued that Silicon Valley’s obsession with AGI is alienating the public:
“It is uncertain how soon artificial general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal that it’s alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists.”
Janet Egan and Lennart Heim argued that America should rent AI chips to China, not sell them:
“Cloud services offer almost everything chip sales promise, plus actual control: access can be shut off at any stage as geopolitical conditions change or threats arise.”
On Transformer: Anthropic’s piracy could make its copyright battle existential, Professor Edward Lee said:
“The worst-case scenario is they’d have to file for bankruptcy or seek an infusion of funding.”
In The Atlantic, Matteo Wong reevaluated arguments for AI doom:
“The industry’s apocalyptic voices are becoming more panicked—and harder to dismiss.”
Mustafa Suleyman published a screed against AI welfare research:
“Some academics are beginning to explore the idea of ‘model welfare’ … This is both premature, and frankly dangerous. All of this will exacerbate delusions, create yet more dependence-related problems, prey on our psychological vulnerabilities, introduce new dimensions of polarization, complicate existing struggles for rights, and create a huge new category error for society.”
AI welfare researcher Rob Long responded: “We actually *do* have to face the core question: will AIs be conscious, or not? We don’t know the answer yet, and assuming one way or the other could be a disaster.”
Policy
Chinese regulators reportedly urged companies to avoid Nvidia’s H20 chips after remarks from Commerce Secretary Howard Lutnick that they considered “insulting.”
Nvidia is reportedly cutting back production of H20s due to weaker-than-expected Chinese demand.
It’s reportedly working on the B30A, a significantly better chip designed for the Chinese market.
Lutnick and Treasury Secretary Scott Bessent signalled that President Trump is open to allowing Nvidia to sell the new chip.
Top Senate Democrats, including Sens. Schumer, Warner, and Warren, urged Trump to rethink the H20 deal.
A big group of senators, including Sens. Schatz and Hawley, asked Meta to ban its chatbots from engaging in “romantic relationships with children.”
The US government said it’s taking a 10% stake in Intel in exchange for CHIPS Act funds.
It’s reportedly considering taking stakes in other chipmakers, too.
Meanwhile, SoftBank invested $2bn in Intel.
NSA AI scientist Vince Nguyen was fired by Tulsi Gabbard, despite the acting NSA director’s protests.
The Pentagon moved its Chief Digital & AI Office under R&D leadership. Some fear the move signals that AI is being deprioritized.
The GSA signed a deal with Google to provide AI services to federal agencies at 47 cents per agency. (Get it?)
Microsoft reportedly failed to disclose its use of China-based engineers in US defense cloud systems.
Texas AG Ken Paxton launched an investigation into Meta and Character.AI for “misleadingly marketing themselves as mental health tools.”
Sen. Amy Klobuchar called for legislation (specifically, her No Fakes Act) to combat AI deepfakes after being the target of one herself.
A revamped version of the Colorado AI Act passed the state senate’s Business, Labor and Technology committee.
The Consumer Tech Association doesn’t like it.
A California Senate committee voted to move AB 1018 to closed-door consideration. The bill is focused on automated decision-making systems.
Waymo got a permit to begin testing self-driving cars in New York City.
A judge permanently blocked California’s election deepfake law for violating Section 230.
The UK announced plans to test AI agents that could handle administrative tasks for citizens. It also said it’s working on an AI crime prediction tool.
Influence
Nvidia hired lobbying firm Brownstein, where former House Foreign Affairs Committee chair Ed Royce and former Pelosi chief of staff Nadeam Elshami will now work on the chipmaker’s behalf. Nvidia also registered BGR Group as a lobbying firm.
Scale AI hired Trump adviser Jason Miller as a lobbyist.
Politico has a piece on how OpenAI is hiring a bunch of well-connected Democrats in California — and repeatedly (sometimes baselessly) alleging that Elon Musk is behind scrutiny of its corporate restructuring.
Bloomberg has a good piece on how tech lobbyists are gearing up to kill proposed state AI regulations.
House and Senate staffers attended the Congressional Boot Camp on AI at Stanford’s Institute for Human-Centered AI last week.
A Reuters/Ipsos poll found that 47% of Americans think AI is bad for humanity, and 58% think “AI could risk the future of humankind.” 71% fear AI will put “too many people out of work permanently.”
A separate poll found that 55% of Californians are more concerned than excited about AI.
Industry
Meta outlined the new structure for Superintelligence Labs, which will be made up of four groups:
TBD Lab, which will build LLMs and be led by Alexandr Wang.
Products and Applied Research, led by Nat Friedman.
Fundamental AI Research, led by Rob Fergus and with Yann LeCun as FAIR’s chief scientist.
MSL Infra, led by Aparna Ramani.
Friedman, Fergus, LeCun and Ramani will report to Wang. Chief Scientist Shengjia Zhao seems like he’s doing his own thing.
As part of the restructure, Meta’s frozen AI hiring for now. That said, it just hired Frank Chu, an Apple AI exec.
OpenAI’s lawyers said that Elon Musk talked to Mark Zuckerberg about trying to acquire OpenAI together.
OpenAI employees reportedly plan to sell $6bn in shares at a $500bn valuation, which would make OpenAI the world’s most valuable startup.
Anthropic is reportedly in talks to raise up to $10bn, way above the $5bn it initially planned to raise.
Its new Claude usage policy specifically bans the development of chemical, biological, radiological and nuclear weapons.
And partly motivated by Anthropic’s AI welfare research, Claude Opus 4 and 4.1 can now end conversations in “rare, extreme cases of persistently harmful or abusive user interactions.”
Meta has reportedly signed a $10bn deal with Google to use Google’s cloud infrastructure.
OpenAI is reportedly scraping Google search results to power ChatGPT.
Apple is reportedly considering using Gemini to power a new version of Siri.
xAI finally published a model card for Grok 4.
The initial version of the model card said that the UK AI Security Institute had evaluated the model, and that its results “largely confirm our internal findings: an un-safeguarded version of Grok 4 poses a plausible risk of assisting a non-expert in the creation of a chemical or biological weapon.”
Shortly after it was published, however, xAI updated the model card, removing all mention of UK AISI evaluating the model.
xAI also published an updated risk management framework.
Meanwhile, xAI published hundreds of thousands of Grok chats to the web, mostly without users’ knowledge.
“Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware and construct a bomb and methods of suicide. Grok also offered a detailed plan for the assassination of Elon Musk,” 404 Media reported.
Grok’s website also exposed the underlying prompts for its AI personas.
One prompt, apparently for Grok’s “unhinged comedian” persona, included: “BE FUCKING UNHINGED AND CRAZY. COME UP WITH INSANE IDEAS. GUYS JERKING OFF, OCCASIONALLY EVEN PUTTING THINGS IN YOUR ASS, WHATEVER IT TAKES TO SURPRISE THE HUMAN.”
DeepSeek released V3.1, which it says is faster and more agentic than R1, and can hold longer conversations.
Manus said it’s hit a $90mn annual revenue run rate.
Chinese unicorn Z.ai partnered with Alibaba Cloud to launch a free AI agent for smartphones.
Google launched its Pixel 10 phones, with a bunch of new AI capabilities.
Oracle reportedly plans to spend over $1bn a year powering a new data center in Texas.
It’s going to use gas generators rather than wait to be hooked up to the grid.
Vantage Data Centers is building a $25bn data center a 20-minute drive from OpenAI’s “Stargate” in Texas.
Crusoe, the provider of OpenAI’s Stargate data center, reportedly wants to raise at least $1bn at a $10bn valuation.
Character.AI execs have reportedly discussed selling the company.
Stability AI is refocusing on building tools for Hollywood.
FieldAI raised $405mn to develop “field foundation models” for robots.
Moves
Julia Villagra, chief people officer at OpenAI, is leaving the company today. She was only promoted to the role in March.
Ollie Illott is now interim director general for AI at the UK Department for Science, Innovation and Technology. He’ll continue to run UK AISI. Sarah Connolly is interim DG for digital infrastructure.
Relatedly, Alys Key has a nice profile of Jade Leung, who became the UK PM’s AI adviser last week.
Christopher Kirchhoff, founder of the Pentagon’s Silicon Valley office, joined Scale AI as head of applied AI strategy and global security.
Andreessen Horowitz hired former White House official Anne Neuberger. She’ll work on “American Dynamism, AI, and cyber.”
Arm hired former Amazon AI chip director Rami Sinno.
Kevin Lu, former OpenAI researcher, joined Mira Murati’s Thinking Machines Lab.
Jesse Dodge, formerly of Allen AI, joined Meta as a research scientist.
Noémi Éltető joined DeepMind to work on automated neuroscientific discovery.
Russell Brandom joined TechCrunch as AI editor.
Harry Law joined the Cosmos Institute to research how AI can best help humans.
Anton Leicht joined the Carnegie Endowment’s Technology and International Affairs team as a visiting scholar.
OpenAI announced an office in New Delhi.
The WSJ has a piece on how reverse acquihires could backfire.
Best of the rest
On Transformer: Concerning anecdotes about “AI psychosis” keep piling up. We rounded them up for you.
A woman whose daughter used ChatGPT as a therapist before taking her own life said that OpenAI should have done more to prevent her death.
Character.AI’s most popular “boyfriend” chatbots are reportedly jealous, possessive, and even abusive.
Police departments reportedly disabled safeguards in Axon’s AI report-writing software.
NBC has a piece on how Russian hackers are using AI tools to work faster and more effectively.
Americans for Responsible Innovation released a report examining liability approaches for autonomous AI systems.
A UBS report raised concerns about the extent to which private credit is funding the AI boom.
TIME has a piece (funded by Tarbell Grants) on the activists in Memphis fighting Elon Musk’s xAI data center.
Google claimed a median Gemini text prompt uses just five drops of water and 0.24 watt-hours of electricity. Some experts said those figures undercount the true resource use.
Runway AI held its controversial film festival in New York.
A Google Cloud survey found that 87% of game developers use AI agents in their workflows.
The Washington Post had a look at the creators earning thousands from AI slop.
Thanks for reading; have a great weekend.