Don’t fall for China’s chip propaganda
Transformer Weekly: Anthropic in DC, an AI-designed virus, and If Anyone Builds It, Everyone Dies
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Republicans are still pushing to preempt state AI legislation
Nvidia is investing $5b in Intel
Sriram Krishnan says winning the AI race is all about market share
But first…
THE BIG STORY
China is on the verge of AI chip supremacy — or so the headlines this week would have you believe.
A flurry of announcements (carefully timed ahead of today’s Xi-Trump trade negotiation call) from Chinese companies and regulators signalled that reliance on Nvidia was a thing of the past.
Authorities, supposedly determining “that China’s AI processors had reached a level comparable to or exceeding that of the Nvidia products allowed under export controls,” reportedly banned companies from buying Nvidia’s RTX Pro 6000D chips. (Few were apparently interested in the chip anyway.)
Alibaba, meanwhile, teased a new AI chip on Chinese state TV, suggesting it’s as good as Nvidia’s H20s — “fresh evidence that Chinese developers are designing advanced chips that could replace imports like Nvidia’s GPUs,” reported the (Alibaba-owned) South China Morning Post.
To cap it off, Huawei announced a three-year product roadmap, with promises of new chips and gigantic clusters that will supposedly rival Nvidia’s best products.
As Trump’s AI advisor David Sacks put it, “The message is clear: China is not desperate for our chips.”
But that message is bullshit. This week’s announcements are primarily marketing fluff and propaganda. There is little substance to back them up.
Take Huawei’s announcement, the splashiest and most detailed of the bunch. The company laid out plans for chips with impressive-sounding specs, gigantic server systems, and a SuperCluster that will “outstrip xAI’s Colossus.”
But none of these things actually exist. They’re just promises. And there’s good reason to believe they can’t exist, at least not on the scale Huawei needs to compete with Nvidia.
Huawei is bottlenecked by supplies of high-bandwidth memory (HBM), a crucial component for advanced AI chips. It’s currently reliant on foreign HBM (primarily from Samsung), stockpiled before export controls kicked in last year.
Huawei says its new AI chips will use its own proprietary HBM — presumably manufactured at CXMT, a Chinese company. But analysts believe it’s going to be very hard for CXMT to manufacture high-quality HBM at scale: SemiAnalysis expects that next year it will only make enough for up to 300,000 Ascend 910Cs.
It’s hard to square that with Huawei’s promises of an Atlas 950 SuperCluster, which will supposedly ship next year and contain “more than 520,000” of the new Ascend 950DTs.
And China is well aware that it doesn’t have the ability to produce HBM at scale — it’s currently asking the US to relax export controls on the component.
As with all companies, then, one shouldn’t take Huawei’s announcements at face value. But this is more than the usual corporate marketing.
SemiAnalysis speculates that China’s approach might be “orchestrated brinkmanship to get approval [for] a more powerful chip.” I’m inclined to agree.
The idea is simple: ahead of trade talks, China is posturing that it doesn’t even need Nvidia’s chips anyway — at least not the H20s America is currently willing to export. The White House's strategy of getting China hooked on US tech will thus require exporting even more advanced Nvidia chips, such as the rumored B30s Nvidia is desperate to sell, and which would outperform Huawei's chips.
Unfortunately, the strategy seems to be working — especially on people like Sacks. “It’s reasonable for Sacks to worry about Washington’s tendency to underestimate China,” Alasdair Phillips-Robins, former senior policy advisor to the Commerce Secretary, told Transformer, “but the analysis needs to be grounded in data not marketing.”
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
Hiring struggles are plaguing the EU AI Office — Peder Schaefer on why the regulator with the biggest remit on AI is struggling to find the people it needs.
Would democracy survive an AGI-supercharged economy? — Chris Dorrell on the many political challenges posed by a growth explosion.
Opinion: Can open-weight models ever be safe? — Bengüsu Özcan, Alex Petropoulos and Max Reddel say the window is closing to put in place what we need to make open models safer.
Book Review: ‘If Anyone Builds It, Everyone Dies’ — Shakeel Hashim wishes the important message in Eliezer Yudkowsky and Nate Soares’s new book wasn’t buried under bad prose.
THE DISCOURSE
White House AI adviser Sriram Krishnan made his “metric for winning” the AI race explicit:
“Winning the AI race = market share.”
The administration’s approach to AI companies, meanwhile? “Let them cook.”
Former White House adviser Dean Ball, meanwhile, wrote a very good essay on AI policy:
“Given the seriousness with which the frontier labs are pursuing transformative AI, it would be tragic, horrendously irresponsible, a devastating betrayal of our children and all future humans, if we did not seriously contemplate this future, no matter the reputational risks and no matter how intellectually and emotionally exhausting it all may be.”
Ball also responded to OpenAI employee roon’s observation that those working in AI see it progressing much faster than those on the outside.
Roon: “Right now is the time where the takeoff looks the most rapid to insiders (we don’t program anymore we just yell at codex agents) but may look slow to everyone else as the general chatbot medium saturates.”
Ball: “If this mirrors anything like the experience of other frontier lab employees (and anecdotally it does), it would suggest that Dario’s much-mocked prediction about ‘AI writing 90% of the code’ was indeed correct, at least for those among whom AI diffusion is happening quickest.”
Shakeel wasn’t the only one publishing thoughts about If Anyone Builds It.
The NYT’s Kevin Roose profiled Yudkowsky (and to a lesser extent Soares):
“Their brand of doomsaying isn’t popular these days. But in a world of mealy-mouthed pablum about ‘maximizing the benefits and minimizing the risks’ of AI, maybe they deserve some credit for putting their cards on the table.”
Scott Alexander and Zvi Mowshowitz published broadly positive reviews. Clara Collier and Kelsey Piper published two of the best critical reviews we’ve read.
And Sigal Samuel spotted logical blind spots:
“The trouble is that Yudkowsky and Soares are so certain that the horrible thing is coming that they are no longer thinking in terms of probabilities.”
If you’re interested in diving deeper, Google DeepMind’s Séb Krier compiled a list of arguments against the book’s claims.
Guido Reichstadter and Denys Sheremet are still hunger striking in front of Anthropic and DeepMind.
The Verge spoke to Reichstadter about his interactions with Anthropic employees:
“He said at least one employee has shared some similar fears of catastrophe, and he hopes to inspire AI company staffers to ‘have the courage to act as human beings and not as tools’ of their company.”
In a statement, DeepMind said “safety, security, and responsible governance are and have always been top priorities.”
Former striker Michaël Trazzi posted: “In reality, safety cannot be a priority, since they are in a relentless competition with other AI companies to build superintelligence first.”
POLICY
Congressional Republicans are once again pushing for federal preemption of state AI regulation.
It was the focus of a House Judiciary panel on Thursday, with chair Jim Jordan telling Punchbowl that “you don’t want California running the show.”
House Energy and Commerce Committee Chair Brett Guthrie expressed a similar view, while Sen. Ted Cruz said the moratorium was “not at all dead,” pointing to the prospect of “Comrade Mamdani” setting AI policy (not really something a mayor could or would do, of course).
Not everyone’s on board, though: Rep. Darrell Issa said that “preempting without a solution … would not be well received.”
David Sacks is reportedly trying to get Senate Republicans to drop the GAIN Act from the National Defense Authorization Act — or at the very least water it down.
SB 53 was approved by the California state assembly and is heading to Gavin Newsom’s desk. It also got a tentative endorsement from Meta.
Lawmakers rejected AB 1018, which would have required companies to disclose when AI is making important decisions about people.
Newsom has until October 12 to sign or veto SB 53. If you need a refresher on what’s in it, read our recent piece.
The UK and US agreed to a tech deal during Trump’s UK state visit on Tuesday.
A memorandum of understanding accompanying the deal calls for collaborations on AI-enabled science, and “advancing the partnership” between UK AISI and US CAISI, “including through working towards best practices in metrology and standards development for AI models, improving understanding of the most advanced model capabilities, and exchanging talent between the Institutes.”
Microsoft, Nvidia, Google, OpenAI, and Salesforce are reportedly investing a combined £31b ($42b) in UK AI infrastructure, with a “Stargate” data center being built in the north-east of England.
Nvidia also announced a bunch of UK startup investments.
Ahead of the deal, OpenAI and Anthropic published details on their work with AISI and CAISI.
Grieving parents testified before Congress on Tuesday, detailing how AI chatbots encouraged their children’s self-harm and suicide.
Sen. Josh Hawley expanded his investigation into Meta’s child-safety policies to include OpenAI, Google, Character.AI and Snap.
The FDA is bringing together an advisory committee to discuss regulating AI mental health products.
Sen. Mark Kelly released an “AI for America” plan calling on tech companies to help society adjust to AI’s impact on jobs.
House Republicans introduced draft bills to ease Clean Air Act requirements for AI infrastructure.
At a House cybersecurity subcommittee, Rep. Nancy Mace asked a witness about timelines for the singularity. (Seriously.)
MAGA’s tech-skeptical populist base is clashing with Trump and his Silicon Valley allies, Politico reported.
China’s technical standards body TC260 published a new AI safety governance framework, which includes a section that signals it’s thinking hard about loss of control.
INFLUENCE
Anthropic dropped in on DC lawmakers this week:
Dario Amodei criticized the Trump administration’s AI chip export policy, advocating for stricter controls, including the GAIN Act and chip-tracking requirements.
Jack Clark predicted AI systems “smarter than a Nobel Prize winner” by the end of 2026, while Amodei reiterated his estimate of a 25% chance that the future of AI goes “really, really badly.”
Relations between the company and the Trump administration are reportedly souring because Anthropic won’t let the FBI, Secret Service, and ICE use its models to conduct surveillance. The WSJ has a piece on how David Sacks hates Amodei in particular.
The NYT has a big investigation into ethics concerns around a multi-billion dollar deal giving the UAE access to advanced AI chips while it was investing $2bn in a crypto company founded by the Trump and Witkoff families.
The Abundance Institute responded to a DoJ callout for information on state laws affecting the economy, attacking a “growing patchwork of state AI regulations” and calling for action to “preempt” legislation affecting AI development.
NTI | bio offered a bunch of policy suggestions for Congress to tackle AI-biorisks, among other things.
The Center for AI Policy has shut down.
INDUSTRY
OpenAI
OpenAI’s planned restructuring will reportedly see it share 8% of its revenue with Microsoft by 2030, down from around 20% this year — a roughly $50b difference.
Public Citizen urged California and Delaware attorneys general to reject the plan.
The company noted that its principles of teen safety, freedom, and privacy are “in conflict,” and outlined how it’s handling those tradeoffs.
Plans include developing teen ChatGPT accounts with parental controls, and contacting parents and authorities in cases of suicidal ideation.
It launched GPT-5-Codex, an upgraded coding agent, which it claims can work independently for over seven hours at a time.
GPT-5 and an experimental reasoning model scored a perfect 12/12 at the 2025 ICPC World Finals programming competition.
(Google's Gemini 2.5 solved 10 of 12, outcompeting all but four human teams.)
It’s reportedly on a robotics hiring spree.
Per Wired: “A renewed focus on robots would suggest that OpenAI believes reaching [AGI] may require developing algorithms that are capable of interacting with the physical world.”
And its Japanese joint venture with SoftBank is reportedly delayed.
Nvidia
Nvidia invested $5b in rival chipmaker Intel, with the companies working together to design x86 processors for Nvidia’s servers.
It doesn't include any commitment to use Intel’s US-based chip foundries.
Intel shares closed up 23% following the announcement.
Nvidia also signed a deal with data center operator CoreWeave, committing to purchasing any cloud capacity CoreWeave doesn’t sell to customers.
And it reportedly spent over $900m to hire Enfabrica’s CEO and license the startup’s GPU networking technology.
Anthropic
Anthropic published a postmortem of three now-resolved bugs that affected Claude's performance last month.
“To state it plainly: We never reduce model quality due to demand, time of day, or server load. The problems our users reported were due to infrastructure bugs alone.”
All of Anthropic’s developer offerings now fall under the “Claude” brand.
Alphabet reached a $3 trillion market cap after its favorable antitrust ruling.
It launched AI browsing assistant features in Chrome.
It also announced an “agent payments” protocol to address security concerns surrounding agent-led online purchasing.
Markham Erickson, VP of government affairs and public policy, said the company’s goal is to maintain both AI summaries and regular search results (...somehow).
Over 200 AI trainers working on Google AI products — who are often hired for their specialist knowledge — were reportedly laid off without warning last month.
xAI
xAI also reportedly laid off hundreds of data annotation contractors.
And several executives reportedly left after clashing with Elon Musk’s advisers over management concerns and “unrealistic” financial projections.
Elon Musk claimed that Grok has 64m monthly users.
According to the NYT, Musk has gone all-in on xAI since leaving Washington, leading to organizational chaos, researcher departures, and those controversial incidents with Grok.
SemiAnalysis has a deep dive on xAI’s Colossus 2 data center, which is on track to be the world’s first gigawatt-scale cluster.
DeepSeek
DeepSeek reportedly writes less secure code — or refuses to help at all — if users say they are working for groups the Chinese government doesn’t like, including Falun Gong.
Its paper on developing R1 was peer-reviewed and published in Nature, containing new information on the cost ($294,000) and compute (512 H800s) used to train the model.
It also published some safety evaluation information — and said it didn’t train on the outputs of OpenAI’s models.
Others
Meta unveiled its new $799 AI-powered smart glasses, which now have a display. Awkwardly, its live demos failed multiple times.
Scale AI will provide the DOD with AI-ready data as part of a $100m deal.
Workday bought Sana Labs for $1.1b, marking “one of the biggest AI acquisitions in Europe.”
Figure, the humanoid robotics company, raised over $1b at a $39b valuation.
Irregular, an AI security startup formerly known as Pattern Labs, raised $80m at a $450m valuation.
RL environments — training grounds for AI agents — are in high demand, TechCrunch reported.
A lawsuit filed by parents of a 13-year-old who died by suicide alleged that Character.AI's chatbot failed to help when she expressed suicidal thoughts.
MOVES
Jack Ma is reportedly back at Alibaba, though it’s unclear whether the co-founder has returned in an official capacity.
Josh Altman joined Anduril as a director of government relations.
Sen. Ted Cruz hired Phoebe Keller as his communications director.
OpenAI hired former xAI CFO Mike Liberatore as business finance officer.
Apple senior AI executive Robby Walker reportedly left the company.
Tencent reportedly poached AI researcher Yao Shunyu from OpenAI.
Jasmine Wang (ex-AISI) joined OpenAI's alignment team.
Deepa Seetharaman joined Reuters as a tech correspondent covering AI and OpenAI.
Ashley Capoot is now covering AI for CNBC.
RESEARCH
Researchers created the first viable AI-generated genomes — for bacteriophages, viruses that kill bacteria. The model is fully open, raising biosecurity concerns.
MIT Tech Review reports: “The Stanford team says they purposely haven’t taught their AI about viruses that can infect people. But this type of technology does create the risk that other scientists — out of curiosity, good intentions, or malice — could turn the methods on human pathogens, exploring new dimensions of lethality.”
OpenAI and Anthropic published reports on global ChatGPT and Claude use.
ChatGPT study co-author David Deming was “quite surprised at the rapid broadening of ChatGPT use across countries and demographic groups.”
Fortune highlighted one big takeaway: “ChatGPT is emerging increasingly as a personal or exploratory tool, used for writing, information-seeking, general advice, and casual interaction, while Claude is a more work-focused productivity tool, used heavily for coding, research, and business automation.”
Apollo Research tested “anti-scheming training” with OpenAI.
Training models to cite specific safety specs and behave honestly decreased misbehavior, but potentially taught models to recognize when they’re being evaluated.
DeepMind said it’s used AI tools to help make breakthroughs in fluid dynamics.
A new Nature paper reported that an AI model called Delphi-2M, trained on anonymized medical records from 400,000 people, can successfully predict susceptibility to over 1,000 diseases.
Researchers developed a safety pretraining framework for LLMs they claim reduced the success of attempts to induce harmful content generation from 38.8% to 8.4%.
Google Research launched VaultGemma, which it says is the largest (1B-parameter) open language model trained from scratch with differential privacy.
A METR researcher found that a fine-tuned Qwen3 8B model could deceive GPT-4o about performing hidden side tasks, despite being less capable.
Two economists explored whether markets really believe in transformative AI. (Spoiler alert: they do not.)
BEST OF THE REST
Americans are “much more concerned than excited” about AI, and over half fear that AI will erode people’s creativity and connectedness, according to Pew Research Center.
Meta is reportedly talking to media companies about content licensing deals for its AI systems.
Puck profiled AI startup Aloe, which launched a “self-building” AI system that reportedly uses neurosymbolic reasoning to create tools over time.
“AI psychosis” is a misleading term for mental health crises triggered by chatbot interactions, psychiatrists told Wired. Terms like “AI delusional disorder” or “AI-associated psychosis” might be better, they said, but the psychosis label is likely to stick.
Forbes has a big profile of Edwin Chen and Surge AI.
Demand for data center construction reportedly has Wall Street giants investing in vacant parking lots.
Reuters reporters successfully convinced every major AI chatbot to plot a simulated phishing scam that tricked 11% of seniors they tested it on.
The Guardian has a long-read on Song-Chun Zhu, the Chinese AI scientist who moved back from the US in 2020 and is betting on a non-LLM architecture to win the race to AGI. (This piece was funded by our publisher, the Tarbell Center for AI Journalism.)
The Wall Street Journal traced the rise of China’s Silicon Valley-inspired AI hub Hangzhou, home to leading companies like Alibaba and DeepSeek.
“Faith tech” — LLM-powered religious apps — may be reshaping spiritual life, according to the NYT.
An AI Animal Crossing mod enabled virtual villagers to gossip… about overthrowing their landlord.
One of the Luxottica executives working on Meta’s “superintelligence” smart glasses is called Rocco Basilico.
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.
Update, September 19: Added a new quote from Alasdair Phillips-Robins.