Dario Amodei’s warnings don’t add up
Transformer Weekly: H200 approvals, a new California PAC, and OpenAI preps an IPO
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
China has reportedly allowed ByteDance, Alibaba, and Tencent to buy Nvidia’s H200 chips.
Google and Meta launched a new California-focused super PAC.
OpenAI is reportedly preparing for a Q4 IPO.
But first…
THE BIG STORY
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” So says Dario Amodei in his latest essay, The Adolescence of Technology. Across 20,000 words, Amodei cogently lays out the various catastrophes that might befall society in the wake of advanced artificial intelligence: out-of-control superintelligences, bioterrorists on every corner, authoritarian takeovers, and a complete collapse of the socioeconomic order — to name but a few.
It is not an optimistic vision. Yet Amodei argues that, if we follow his suggestions, we might just make it through. Better guardrails on AI models, more research into alignment, and bold economic policy — combined with a bit of luck — could lead us to conquer the overwhelming challenge facing humanity. “The years in front of us will be impossibly hard,” Amodei concludes, but “I have seen enough courage and nobility to believe that we can win.”
I wonder, however, if Amodei himself will rise to the occasion. For while he warns of civilizational catastrophe, his policy proposals read like they were written for a different world.
Take loss of control risks, for instance. “The right place to start is with transparency legislation,” Amodei argues, pointing to SB 53 and the RAISE Act as examples. The logic is straightforward: first we mandate transparency, which will provide the evidence policymakers need to later build robust, more binding regulation (mandatory model testing or safeguards, for instance).
It’s an ostensibly sensible approach. As he notes, there is a risk that “overly prescriptive legislation” ends up being little more than “safety theater.” We ought to be humble about regulating, especially given how hard it is to undo laws once imposed.
But under Amodei’s own timeline, this incrementalism looks dangerously naive. We are, by his estimate, one or two years away from “powerful AI” and all the risks that entails. That is not much time — and governments move slowly. ChatGPT launched in November 2022. SB 53 was not passed until September 2025. If it takes three years to pass relatively light-touch transparency legislation, why should we expect policymakers to implement more binding rules in less than a year, as Amodei argues might be necessary?
Amodei’s sequenced approach — transparency first, binding rules later — does not match the scale and urgency of the threats he describes. It is a gamble that Congress will suddenly get its act together, if only it is given more evidence.
Based on the arguments laid out in the essay, that seems like a negligently risky bet to make. If Amodei is right, we need stronger regulation quite soon. Anthropic and other companies that talk about the transformative power of what they are building should be pushing for that — not just setting their sights on federal transparency legislation. Yes, the political will is currently lacking. But it is Anthropic’s responsibility to push harder to build that will (and certainly not to water down bills, as it has done in the past).
The policy proposals are not the only place where Anthropic’s actions feel out of step with Amodei’s diagnosis. He worries about takeovers from “non-democratic countries with large datacenters,” while at the same time working to enrich those countries by letting them invest in his company. He cautions that strong governance of AI companies will become increasingly crucial — while reportedly planning an IPO that will inevitably weaken it.
At times, it feels like Amodei is trying to play 4D chess. He says he is worried about the risks of advanced AI, and wants the world to prepare for them. But he also desperately wants Anthropic to seem like a “reasonable” company — the centrists of AI safety, as it were. His essay reflects this tension: he darts between arguing that transparency-first is epistemically prudent and that it’s politically necessary. He never quite commits to either, a slippage that lets him talk about what can happen while avoiding the harder question of what should.
The result is that Amodei may be undermining his own stated goals. If he really believes that we’re on the precipice of extreme danger, why are his policy proposals so timid? In failing to match his prescriptions to his prognosis, Amodei makes it all too easy to dismiss his warnings as hype and bluster — just another tech CEO crying wolf to shape regulation in his favor. But if he’s right about the future, that dismissal could be a quite literally fatal mistake.
— Shakeel Hashim
THIS WEEK ON TRANSFORMER
The tech non-profit sending most of its money to the consultancy that created it — Shakeel Hashim digs into the circular relationship between American Resolve and Targeted Victory.
A false choice risks undermining action on autonomous weapons — Alexander Blanchard argues nuance is necessary to effectively govern the use of AI-enabled autonomous weapons.
AI workers are speaking out about the Minnesota killing — Shakeel Hashim documents the industry’s response to events in Minneapolis.
THE DISCOURSE
Following worker outrage, some tech CEOs did speak out about Minnesota:
Sam Altman sent an internal Slack message:
“What’s happening with ICE is going too far … President Trump is a very strong leader, and I hope he will rise to this moment and unite the country.”
He also said that he’d spoken to Trump administration officials about the topic.
Dario Amodei tied the events to his recent essay:
“I’ve been working on this essay for a while, and it is mainly about AI and about the future. But given the horror we’re seeing in Minnesota, its emphasis on the importance of preserving democratic values and rights at home is particularly relevant.”
Tim Cook posted an internal memo:
“This is a time for deescalation … I had a good conversation with the president this week where I shared my views, and I appreciate his openness to engaging on issues that matter to us all.”
Katie Miller, former DOGE advisor and wife of Trump official Stephen Miller, thinks liberal democracy is woke:
“Co-Founder of Anthropic: ‘My deep loyalty is to the principles of classical liberal democracy.’ If this is what they say publicly, this is how their AI model is programmed. Woke and deeply leftist ideology is what they want you to rely upon.”
Dean Ball shared a questionable take:
“Conservatives who worry about ‘woke AI’ basically share the intuition that if generative AI had become important before the Great Vibe Shift, the AI industry would have been happy and enthusiastic supporters of insane shitlib stuff. And they’re probably right.”
Ed Zitron continues to Ed Zitron:
“Let’s say it’s true and everybody is using AI … what is the actual result? … Some engineers do some stuff faster?”
Matthew Zeitlin pointed out the obvious: “It’s really remarkable to see how the goalposts shift for AI skeptics. This is literally describing a productivity speedup.”
Jeffrey Ding thinks the US should focus on AI diffusion, rather than winning the race to AGI:
“I’ve been studying these topics since 2017 and have always been surrounded by people who say that AGI is two years away. It’s always two years away from being two years away. For me, it’s a reason to update more towards this GPT diffusion view, but ultimately technology forecasting is very difficult.”
POLICY
China reportedly approved ByteDance, Alibaba, and Tencent to buy over 400,000 Nvidia H200 chips.
House CCP Committee Chair Rep. John Moolenaar said that documents shown to the committee “reveal NVIDIA provided extensive technical support that enabled DeepSeek … to achieve frontier AI capabilities.”
Rep. Brian Mast’s AI Overwatch Act is in limbo for the time being, Punchbowl reports.
Support from Moolenaar and Intelligence Committee Chair Rep. Rick Crawford could lead to a floor vote or inclusion in this year’s NDAA, but the administration’s opposition is a big hurdle.
Mast said he’s trying to get Marco Rubio and Pete Hegseth onside.
The Trump administration reportedly pushed out two BIS staffers focused on defense against Chinese technological threats, worrying some security hawks.
One of the departures is Liz Cannon, executive director of the BIS Office of Information and Communications Technology and Services.
New DHS disclosures show that ICE uses Palantir for processing tips and extracting addresses from documents, and uses OpenAI for screening résumés.
CBP, meanwhile, used AI tools from Meta, Google, OpenAI, and Anthropic for document summarization.
Sen. Ed Markey wrote to Sam Altman with concerns about OpenAI’s introduction of ads.
He also sent letters to six other AI CEOs, including Dario Amodei and Elon Musk, asking if they had similar plans.
The US Department of Transportation is using Gemini to quickly draft safety regulations, ProPublica reported.
“We don’t need the perfect rule,” said DOT general counsel Gregory Zerzan. “We want good enough. We’re flooding the zone.”
Interim CISA director Madhu Gottumukkala reportedly set off multiple automated security warnings after uploading sensitive government documents to ChatGPT.
San Jose Mayor Matt Mahan announced his candidacy for California governor. He’s generally well-liked by the tech industry.
The EU is formally investigating whether xAI tried to mitigate the risks of deploying Grok on X before this month’s CSAM scandal.
Malaysia restored Grok access after temporarily blocking it earlier this month.
UK tech secretary Liz Kendall said AI will cause job losses, and announced an AI skills training program.
The UK House of Lords had a debate about superintelligence.
Lord Hunt and others asked the government if they’d bring forward proposals for an international moratorium on superintelligence.
The government, unsurprisingly, dodged the question, with Baroness Lloyd saying that the UK being “a global leader in the development and deployment of AI … is the way that will keep us safest of all.”
Singapore plans to invest over $778m in public AI research over the next four years.
The UK used Meta funding to recruit AI experts to build open-source government tools.
INFLUENCE
California Leads, a new Google- and Meta-funded super PAC, is backing “pragmatic” candidates for the state legislature.
The PAC’s spokesperson is former Gavin Newsom adviser Nathan Click. It’s launched with $10m in funding.
Sergey Brin donated $20m to two new YIMBY efforts in California.
Dario Amodei told Axios that Congress will face “a mob … if you don’t do [AI policy] the right way.”
TechNet outlined its 2026 federal policy priorities, emphasizing a federal AI framework and funding for AI research.
Data center operators are reportedly planning a major advertising and lobbying blitz to counter grassroots AI backlash.
The NYT looked at Meta’s ad campaign to promote data centers as job creators. It spent $6.4m in November and December across eight state capitals and DC.
Rep. Sam Liccardo is reportedly organizing meetings between AI CEOs and younger members of Congress.
“It’s helpful to start there,” he said, “and see if we can start to build some common understanding about what’s required.”
Liccardo is also helping other Democrats fundraise in Silicon Valley. He’s raised $309k for the Democratic Frontline group so far.
The Hill & Valley Forum announced its next event on March 24. OpenAI’s Brad Lightcap will speak, among other tech leaders.
Google updated its “Mayors AI Playbook,” which aims to help local politicians experiment with AI tools — Google’s, that is.
The Business Software Alliance urged Congress to preempt state AI regulation issue by issue, rather than trying to overrule state statutes all at once.
A new survey from the Institute for Family Studies found that Trump voters in red states generally oppose federal preemption.
An investigation by The Nerve found that Palantir has at least £670m in contracts with the UK government.
INDUSTRY
Anthropic
Anthropic is reportedly on track to raise $20b from investors — twice its initial target.
The Pentagon is reportedly feuding with Anthropic.
Anthropic doesn’t want its models used for autonomous weapon targeting or US domestic surveillance.
The Pentagon wants it to remove those restrictions before agreeing a $200m contract.
Music publishers including Universal sued Anthropic for over $3b, alleging the company torrented pirated songbooks and trained Claude on more than 20,000 copyrighted songs.
A Washington Post investigation described “Project Panama,” Anthropic’s multimillion-dollar effort to “destructively scan all the books in the world.”
An internal planning document said: “We don’t want it to be known that we are working on this.”
Anthropic made the creator of Clawdbot, an open-source AI assistant, rename it “Moltbot” over trademark concerns.
Claude can now access apps including Slack and Canva within chats.
Yahoo launched Scout, a Claude-powered “AI answer engine.”
Apple reportedly wanted Claude to power Siri, but Anthropic asked for too much money.
OpenAI
OpenAI is reportedly preparing for a Q4 IPO, and is keen to beat Anthropic to a listing.
It’s reportedly about to raise up to $100b, including up to $50b from Amazon and $30b each from Nvidia and SoftBank.
The round would reportedly give the company a pre-money valuation of $730b.
OpenAI acquired Crixet, a LaTeX editing app, and launched Prism, a ChatGPT-driven text editor for scientific papers.
It hired seven employees from Cline, the AI coding startup.
It is reportedly charging its first advertisers roughly $60 per 1,000 views — as much as they’d need to pay for a spot during a live NFL game.
It partnered with Leidos, a government contractor, to implement AI in federal agencies.
And it updated its whistleblower policy.
Sam Altman teased “exciting launches related to Codex.”
“We are going to reach the Cybersecurity High level on our preparedness framework soon,” he tweeted.
ChatGPT Health gave Washington Post columnist Geoffrey Fowler wildly inaccurate health scores.
Upon seeing its output, cardiologist Eric Topol said ChatGPT Health “is not ready for any medical advice.”
The Guardian reported that GPT-5.2 cites Grokipedia as a source — and repeats some of its right-wing claims.
Meta
Meta’s revenue exceeded expectations, with a strong ad business helping fund forecast AI infrastructure spending of between $115b and $135b for the year.
That’s ahead of analyst forecasts of $110b and potentially up 87% on 2025.
CFO Susan Li said AI investments were improving ad targeting and personalization to keep users scrolling.
The company’s shares rose 11% following the results.
Meta temporarily blocked teen access to AI characters.
An updated version with parental controls is reportedly in development and will be available to everyone.
Court filings revealed that Mark Zuckerberg ignored warnings from safety staffers against allowing minors to talk to AI companions.
Meta also blocked posts linking to ICE List, a database compiling names and photos of ICE agents.
The company will reportedly test premium subscriptions, which would include expanded access to AI features like Manus and Vibes.
Microsoft
Microsoft’s shares fell 11% after it reported slowing growth in its Azure cloud business and Q2 capital expenditure of $37.5b, up 66% from a year earlier and more than $1b above analyst predictions.
The share drop was the company’s biggest since March 2020.
It won approval to build 15 data centers at a former Foxconn site in Wisconsin.
An NYT investigation found that Microsoft expects its annual water needs to more than triple by 2030, especially in areas already facing water shortages.
Microsoft introduced Maia 200, an inference chip designed to make AI token generation more efficient.
It will reportedly serve GPT-5.2 and the Microsoft Superintelligence team’s in-house models.
Microsoft product leaders are reportedly worried that Claude Cowork can use Microsoft apps better than 365 Copilot can.
Satya Nadella is reportedly pushing employees to accelerate AI development in response.
Perplexity signed a $750m three-year Azure cloud deal with Microsoft to access models from OpenAI, Anthropic, and xAI while maintaining AWS as its primary provider.
Google
Google launched Project Genie, a very impressive world model that lets users generate and interact with virtual environments.
It also acquired 3D image generation startup Common Sense Machines to further its efforts in this area.
A judge dismissed a wrongful death lawsuit brought after a teenager died by suicide following conversations with a Character.AI chatbot licensed by Google.
Google DeepMind staffers asked company leaders for policies to prevent ICE from entering campuses, after one staffer alleged a federal agent tried to enter the Cambridge campus in fall 2025.
A former Google engineer was convicted of economic espionage and trade secrets theft after stealing confidential AI chip documentation to build a startup in China.
Google is exploring ways to let websites opt out of AI Overviews and AI Mode, following new rules from UK antitrust authorities.
It launched several new AI features, including free Gemini-powered SAT practice exams and Auto Browse in Chrome.
Others
SpaceX is reportedly considering a merger with Tesla or xAI ahead of a potential IPO this June.
Apple acquired Israeli start-up Q.AI for close to $2b, the company’s second-largest acquisition ever.
It makes tech that tracks your facial expressions to let you “silently talk” to AI systems.
Nvidia invested another $2b in cloud provider and key customer CoreWeave.
It also said it will start selling Vera chips as standalone CPUs, which will compete with the Intel and AMD processors used in data centers.
Hugging Face reportedly turned down a $500m investment from Nvidia at a $7b valuation.
While it didn’t comment directly on the deal, it told the FT it doesn’t want a single dominant investor that could sway decisions.
Mozilla said it is building a “rebel alliance” of AI startups focused on safety and governance to take on OpenAI and Anthropic, using its $1.4b in reserves to back “mission driven” organizations.
Chinese AI companies are reportedly rushing to release new models ahead of an expected big launch from DeepSeek.
Alibaba-backed Moonshot AI released Kimi K2.5, which appears to be quite good.
Alibaba itself released Qwen3-Max-Thinking, which it said performs comparably to leading US models.
ByteDance is reportedly planning to launch three new models in mid-February, too.
Abu Dhabi’s Mohamed bin Zayed University of Artificial Intelligence launched K2 Think, an open-source AI model that researchers claim ranks alongside the best open models from the US and China.
The NYT profiled Ricursive Intelligence, a startup aiming to build AI systems that can improve AI chip designs. It’s raised $355m at a $4b valuation.
Research-focused AI startup Flapping Airplanes raised $180m at a $1.5b valuation, part of what the WSJ describes as a wave of investment in “neolabs” prioritizing long-term AI research over products.
Micron announced a $24b investment to build a new memory chip facility in Singapore, with output set to begin in 2028.
Intel stock crashed 17% after the company said it couldn’t meet surging AI data center CPU demand because it had cut capacity on older production lines.
SoftBank scrapped plans to acquire data center operator Switch Inc., which would have aided OpenAI’s Stargate ambitions.
Former OpenAI research VP Jerry Tworek is reportedly raising up to $1b for Core Automation, a new startup targeting training methods neglected by frontier labs, such as continual learning.
Two Chinese chip firms announced plans for IPOs in Hong Kong.
Software stocks entered bear market territory, falling 22% from recent highs, as investors feared AI automation could erode demand for traditional software.
MOVES
Semiconductor Industry Association president John Neuffer announced his retirement after 11 years.
Digital Progress Institute president Joel Thayer joined America First Policy Institute as a senior fellow, where he’ll work on AI and emerging tech policy.
Kathryn Mitchell, formerly chief of staff at NIST’s CHIPS R&D office, joined DLA Piper as a policy adviser.
OpenAI VP (and former CISO) Matt Knight left the company.
DeepMind is hiring a “chief AGI economist” to investigate “post-AGI economics.”
Eli M. Rosenberg joined The Information, where he’ll cover AI in San Francisco.
RESEARCH
Google DeepMind published AlphaGenome, an AI model that predicts the effects of genetic mutations.
However, some experts cautioned that it will not have the same degree of impact as AlphaFold. (Computational biologist Steven Salzberg said he sees “no value in them at all right now.”)
Goodfire AI claimed to have “identified a novel class of biomarkers for Alzheimer’s detection — using interpretability.”
Stephen Casper said it was “hype-milling” that “verges on dishonesty.”
CSET released a report from a July workshop on automated AI R&D.
Experts disagreed on how quickly AI R&D will accelerate, and noted that existing benchmarks can’t accurately predict its trajectory.
Epoch AI evaluated GPT-5.2 Pro, which set a new record on its FrontierMath benchmark.
Carnegie Mellon researchers published a paper warning that increased scientific automation may obfuscate methods and bias results.
An AI text detector reportedly flagged over 50 NeurIPS papers as containing hallucinations, including some from Google and Nvidia.
The Atlantic published a feature on science’s “AI slop” problem, reporting that journals and preprint servers are “drowning” in bad AI-generated papers.
arXiv is attempting to address the problem — first-time submitters now need an endorsement from an established author in their field.
A new Gallup poll found that AI use among “remote-capable” employees — who can complete most work tasks digitally — has risen from 28% in mid-2023 to 66%.
The Anti-Defamation League found that, among six top LLMs, Grok was the worst at countering antisemitic content.
The Tech Transparency Project identified dozens of “nudify” apps with over 705 million combined downloads, raking in over $100m total.
A new research paper claims that “self-distillation enables continual learning.”
METR updated its methodology for measuring model time horizons on software tasks, expanding its task set from 170 to 228 tasks drawn from HCAST.
The methodology changes don’t change the long-run trend, METR says.
BEST OF THE REST
The FT profiled White House AI adviser Sriram Krishnan, who appears to be well-liked even by people who wildly disagree with him.
AI-generated Instagram influencers are evolving into increasingly surreal personas — including fake conjoined twins and three-breasted women — to stand out and sell adult content on platforms like Telegram and Fanvue.
Data centers are reportedly driving an explosion in demand for gas-fired energy in the US, with projects specifically earmarked for data centers increasing almost 25x from 4 GW in 2024 to 97 GW in 2025.
Epoch AI estimates that GPT-5 failed to recoup its R&D costs during its four-month lifetime.
The NYT spoke to more than 100 mental health professionals about dealing with AI, with over 30 saying they treated patients who experienced psychosis, suicidal thoughts, or violent behavior after extended ChatGPT conversations.
A Rest of World investigation found 227 reported suicides among Indian tech workers between 2017 and 2025, with employees citing extreme work pressure, 15-hour days, and fears of AI-driven layoffs.
Time took a deep dive into the philosophical and scientific debates over whether AI systems count as having minds.
China’s military is “intensely focused on harnessing AI to deploy swarms of drones, robot dogs and other autonomous systems,” according to a report from the WSJ.
It has trained AI-controlled drone swarms using animal behavior patterns from hawks and doves.
A Morgan Stanley report found that UK jobs have been hardest hit by AI, with an 8% net fall in jobs, despite similar productivity gains to those seen in the US, Germany, Japan, and Australia.
The Nation spoke to the University of Alaska student who was arrested for eating 57 AI-generated Polaroids from a gallery exhibit to protest AI art.
“AI chews up and spits out art made by other people,” he said.
The number of “.ai” domain registrations surpassed 1m in early January, generating an estimated $70m in revenue for Anguilla’s government in the past year.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.


