The White House’s risky war on the Utah AI bill
Transformer Weekly: Kratsios in India, Anthropic vs the Pentagon and new Meta super PACs
Welcome to Transformer, your weekly briefing of what matters in AI. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
Michael Kratsios warned against focusing on “safety and speculative risks” at the India AI Impact Summit.
Anthropic and the Pentagon continued to feud.
Meta launched two new state-level super PACs.
But first…
THE BIG STORY
The kids are not alright.
At least, that’s what most voters believe.
Voters on both sides of the aisle are concerned that chatbots pose a significant risk to the mental health and safety of children. According to a poll from the Institute for Family Studies, 89% of Trump voters and 95% of Harris voters want Congress to prioritize child safety over tech industry growth.
That’s what makes the White House’s attempt to halt the passage of Utah’s Artificial Intelligence Transparency Act, first reported by Axios on Sunday, so surprising. Not only is the bill a fairly light-touch piece of transparency legislation sponsored by a Republican in a deep-red state, but it’s also been pitched as a child safety measure — something the MAGA base reliably rallies around. AI companies don’t seem particularly worried about it, either. Yet the White House says it’s “categorically opposed” to the legislation, marking its first intervention since Trump signed the December executive order signaling that the administration would legally challenge state AI laws.
The White House “took something that was potentially negotiable and turned it into something existential,” explained Michael Toscano, director of the Institute for Family Studies’ Family First Technology Initiative. According to his own polling, Trump voters are overwhelmingly supportive of child safety regulations for AI, and Utah’s are even more so.
HB 286, sponsored by state representative Doug Fiefia, mostly mirrors California’s SB 53: it’s a transparency bill, rather than legislation that mandates a kill switch for AI or state licensing. It targets large frontier developers, defined as those generating at least $500m in annual revenue, and requires that they create public safety plans that identify and mitigate potential “catastrophic risks.” But unlike SB 53, or New York’s RAISE Act, it extends those transparency requirements to child safety issues, requiring large developers to publish plans to avoid “child safety risks,” such as encouragement of self-harm, infliction of severe emotional distress, or infliction of physical harm.
“I appreciate the White House’s engagement on this issue and look forward to continuing the dialogue,” Rep. Fiefia told Transformer. “Regardless of broader policy debates, safeguarding kids online is an area where we should be able to find common ground.”
The White House’s intervention — thought to be spearheaded by AI czar David Sacks — on a fairly uncontroversial bill supported by Trump’s own voters raises questions about whether any legislation would be deemed acceptable. The Trump administration has always said it’s pushing for a federal standard, but it has about four months left to pass legislation through Congress before midterm campaigning kicks into high gear — and it doesn’t seem to be making much headway. According to Politico, the White House is even sidelining Republican lawmakers that tech industry lobbyists trust to strike a reasonable compromise. Some of those lobbyists are questioning whether Sacks is serious about arriving at workable legislation at all.
Absent federal legislation, AI remains virtually unregulated at the national level — and maybe that’s the whole point. State laws can’t fill the regulatory gap if the White House and pro-preemption donors can effectively bully state lawmakers into watering down their bills, or into backing off AI regulation altogether. They’ve had success doing so in blue states, and we’re about to see if they can do so in red states, too.
“I think the strategy is basically to make it as painful as possible for red states or Republican legislators to get anything positive across,” said Toscano.
There’s a risk, though, that the intervention sparks a backlash. According to a poll commissioned by Secure AI Project, 71% of Utah voters worry the state will not adequately regulate AI, “potentially harming children and consumers.” On Thursday, Utah Gov. Spencer Cox said that it was “preposterous” for the federal government to tell the state to back off state-level legislation, arguing that the government needs to “regulate [AI] to make sure they don’t destroy humankind.”
— Veronica Irwin
THIS WEEK ON TRANSFORMER
AI power users can’t stop grinding — Celia Ford on how AI’s extra productivity is making its biggest fans work even harder.
Why we need a moratorium on superintelligence research — Lord Hunt of Kings Heath argues that the UK must spearhead a pause on advanced AI development.
The left is missing out on AI — Dan Kagan-Kans on why the left’s dismissal of AI is ceding debates about its threats and opportunities to the right.
THE DISCOURSE
Dario Amodei went on Dwarkesh Patel’s podcast and doubled down on his short AGI timeline:
“It is absolutely wild that you have people — within the bubble and outside the bubble — talking about the same tired, old hot-button political issues, when we are near the end of the exponential.”
“I think it’s crazy to say that this won’t happen by 2035. In some sane world, it would be outside the mainstream.”
Dario and Sam Altman avoided holding hands at the India AI Impact Summit:
Bill Drexel told Politico: “There’s rarely an incident at a high-profile, onstage international conference like this, that literally makes me burst out laughing.”
Boaz Barak subtweeted the Pentagon:
“If (and it’s a big if) [a] question in [the] latest Anthropic case is whether AI should power autonomous weapons or mass surveillance, the answer should be clear. AI should not be making life or death decisions, and should not be spying on us. This is not a partisan issue.”
Palantir CEO Alex Karp seems to feel differently:
“If we didn’t have adversaries, I would be very in favor of pausing this technology completely…but we do.”
Sen. Josh Hawley (R-Mo.) is worried about Trump’s embrace of AI:
It’s “working against the working man, his liberty and his worth.”
Colin Wellenkamp, a Missouri state representative, spelled out his constituents’ concerns:
“The top three items people are most thinking about are: what are you doing with my water and my air, what are you doing to keep AI secure when you’re not using it to pull scams…and is it going to take my job?”
Brian Merchant described what he thinks “the left” actually dislikes about AI:
“The left is so opposed to AI because the entire political economy enabling its current iteration is rotting…People don’t hate AI because they think it’s fake, in other words, they hate it because they see what it is.”
Professor Ted Underwood noted that leftists can be AGI-pilled, too:
“Just spent two hours talking w/ 30 (likely left-leaning?) doctoral students about the opportunities and perils of AI. Marx was quoted; the phrase ‘zero-shot’ was used; ‘stochastic parrot’ was not. If this complex reality isn’t visible in thinkpieces / social media, we need to make it visible.”
POLICY
Michael Kratsios addressed the India AI Impact Summit, criticizing a focus “on safety and speculative risks… rather than concrete opportunities.”
The senior White House tech advisor said the Trump admin “seeks to support legislators as they construct a national policy framework that protects children, prevents censorship, respects intellectual property, and safeguards our workers, families, and communities.”
He announced funding initiatives to help countries adopt AI, and a Peace Corps-style “Tech Corps.”
AI companies used the summit to make new voluntary commitments to share anonymized usage data and conduct multilingual evaluations.
The Pentagon continues to feud with Anthropic.
Anthropic reportedly annoyed the Pentagon by asking Palantir if Claude was used during the Maduro raid.
And Anthropic still won’t let the Pentagon use its models for domestic surveillance. The Pentagon said that OpenAI, Google and xAI have “agreed in principle” to such uses.
Pete Hegseth is reportedly considering designating Anthropic a “supply chain risk,” which would effectively ban any military contractor, including Palantir, from using Claude.
Meanwhile, 1789 Capital, where Don Jr. is a partner, reportedly declined to invest in Anthropic for ideological reasons.
Reps. Gottheimer and Lawler introduced a bill offering tax credits for AI workforce training costs.
Sens. Hawley and Blumenthal introduced legislation to prevent data centers’ energy costs from being passed on to consumers.
House Science Chairman Brian Babin and Rep. Obernolte asked the Government Accountability Office to produce a report on federal and state AI laws.
House Foreign Affairs Committee leaders sent a letter urging the State and Commerce Departments to close gaps in semiconductor manufacturing equipment export controls to China.
The FTC continued to pursue its probe of whether Microsoft is illegally monopolizing enterprise computing through AI bundling practices.
The Pentagon added Alibaba, BYD, and Baidu to its list of companies with alleged Chinese military connections — before quickly deleting the updated list from its website.
US CAISI announced a new AI Agent Standards Initiative to “foster the emerging ecosystem of industry-led AI standards and protocols.”
The Treasury Department announced a public-private initiative producing resources to strengthen cybersecurity and risk management for AI in financial services.
SpaceX and xAI are competing in a Pentagon contest to develop voice-controlled autonomous drone swarming technology.
UK Prime Minister Keir Starmer announced plans to expand the Online Safety Act to include AI chatbots.
Spain ordered prosecutors to investigate X, Meta, and TikTok for allegedly spreading AI-generated child sexual abuse material.
Ireland’s Data Protection Commission also announced an investigation into Grok.
INFLUENCE
Meta launched two new state-level super PACs: the Republican-backing Forge the Future Project, and Making Our Tomorrow, a Democratic vehicle.
Together with Meta’s two previously announced PACs, they have a combined budget of $65m.
The new PACs are initially spending in Illinois and Texas — two places where Meta faces opposition to its data center projects.
The Anthropic-backed Jobs and Democracy PAC launched a $450k ad campaign supporting Alex Bores in New York.
OpenAI’s Chris Lehane reportedly told employees that the company is not yet donating to 501(c)(4)s or super PACs because it “wants to retain control of its political spending.”
Lehane helped set up Leading the Future, the largest industry super PAC.
Rep. Pat Ryan pledged to reject campaign cash from Palantir employees and donate past contributions to immigrant aid groups. Rep. Jason Crow and Sen. John Hickenlooper said they would donate Palantir funding to immigrant rights groups.
House Minority Leader Hakeem Jeffries, Rep. Riley, and several New York Democrats avoided making commitments.
Academic researchers called for guardrails on some infectious disease datasets, similar to how institutions handle private health data, to protect against AI-enabled biorisk.
TIME ran a cover feature on “The People vs. AI,” profiling nine ideologically diverse Americans who want to slow or stop AI development.
INDUSTRY
Anthropic
Anthropic released Claude Sonnet 4.6, which significantly outperforms Opus 4.5, its smartest model from four months ago. (The model comes with a 134-page system card.)
It opened a new Bengaluru office and announced new Indian partnerships with companies, nonprofits, and government agencies, including major IT provider Infosys.
Jack Clark announced that Anthropic is “aggressively scaling” its Societal Impacts team, which will be “informing decisions Anthropic makes about how to deploy its technology and how to study the effect it is having on people in the real world.”
Anthropic could pay Amazon, Google, and Microsoft up to $6.4b to run Claude on their cloud servers next year.
Daily active Claude users increased by 11% after its Super Bowl ad aired, outpacing ChatGPT’s and Gemini’s smaller relative gains.
CodePath, a computer science education nonprofit, partnered with Anthropic to integrate Claude into college courses on AI.
Figma partnered with Anthropic on a tool that converts vibe-coded instructions into editable designs.
OpenAI
OpenAI reportedly began finalizing the first stage of a funding round that could raise $100b at a $380b valuation, with investors including SoftBank, Amazon, Nvidia, and Microsoft.
It introduced OpenAI for India, an initiative meant to expand AI access in the country, which already has 100 million weekly active ChatGPT users.
It offered up to $15k in legal support for employees affected by ICE or Border Patrol.
Reporter Julia Black noted: “OpenAI, of course, relies heavily on highly-skilled international talent for its technical roles.”
Google
Google launched Gemini 3.1 Pro, which more than doubled Gemini 3 Pro’s ARC-AGI-2 score, reflecting significant reasoning improvements.
Alphabet announced plans for new fiber-optic lines to improve online connectivity between the US and India.
Ormat Technologies, a geothermal power company, will supply power to Google’s Nevada data centers once they’re operational.
Nvidia
Nvidia announced partnerships with major Indian VC firms, cloud providers, and AI-native companies in support of the country’s IndiaAI mission.
After failing to acquire Arm five years ago, Nvidia sold its remaining shares in the company for about $140m.
Meta will reportedly spend tens of billions of dollars on Nvidia chips, both to train and run AI models.
Microsoft
Microsoft announced it’s on track to invest $50b in AI in the “Global South” by 2030.
It committed to continue matching all of its electricity needs with renewable energy purchases.
Bill Gates withdrew from delivering a keynote at the India AI Impact Summit amid renewed scrutiny of his ties to Jeffrey Epstein.
Others
Amazon CEO Andy Jassy announced $200b in capital expenditure, mostly for AWS infrastructure, outspending both Google and Microsoft.
AWS reportedly suffered at least two service outages in recent months due to engineers allowing its Kiro AI coding tool to make changes, including one instance where it decided to “delete and recreate the environment.”
OpenAI, Google, and Perplexity are nearly approved to sell their tech directly to the US government, bypassing tech firms such as Palantir that typically host their chatbots.
Apple reportedly ramped up its AI-powered device efforts, with smart glasses, a pendant, and AirPods expected as early as this year or next.
The company’s stock no longer closely tracks the Nasdaq 100, with one strategist telling Bloomberg the shift is a positive result of Apple largely staying out of the AI arms race and avoiding its volatility.
Palantir moved its headquarters to Miami without saying why.
Former tech-friendly Miami mayor Francis Suarez tweeted: “This is the tipping point!!! What a watershed moment for Miami…”
SoftBank reportedly plans to build a $33b gas power plant in Ohio, generating 9.2 GW to power AI data centers, as part of the trade deal agreed between the US and Japan last year.
Fei-Fei Li’s startup World Labs raised $1b to develop world models for 3D environments.
Toronto chip startup Taalas, which hardwires AI models into chips to increase inference speed and efficiency, raised $169m.
SaaS companies including McAfee and Rocket Software released early earnings reports in an attempt to assuage private lenders’ “SaaSpocalypse” fears.
Raspberry Pi’s CEO bought stock in the company, sending its shares up 42% amid social media posts suggesting that users of low-cost AI agents may drive demand for its computers.
Roughly a third of Airbnb’s customer support is reportedly run by AI.
Pinterest’s stock price is at its lowest since 2020, driven by reduced ad revenue and users shifting to chatbots for design and planning help.
An activist investor in Toto, Japan’s largest toilet maker, urged the company to invest more in its advanced ceramics segment, which makes components for AI memory chips and generates 40% of operating profit.
MOVES
Peter Steinberger, the creator of OpenClaw, joined OpenAI to lead the development of personal agents.
OpenAI poached Charles Porch from Meta. The ex-Instagram VP of global partnerships will be OpenAI’s first VP of global creative partnerships.
Anthropic added Chris Liddell, former Microsoft and GM executive, to its board of directors.
Investor Jack Altman joined VC firm Benchmark as a general partner.
Andy Masley is stepping down from EA DC to write full time.
RESEARCH
OpenAI and Paradigm introduced EVMbench, which tests how well AI agents can exploit vulnerabilities in smart contracts on blockchain networks.
Anthropic analyzed millions of human-agent interactions, and found that Claude Code’s longest sessions doubled from under 25 to over 45 minutes over the past three months, with users increasingly auto-approving its actions.
Researchers identified a “deployment overhang,” where models seem capable of more autonomy than users actually grant them.
A team of researchers updated the AI Agent Index, and found that only four of 12 frontier-level agents come with basic safety evaluation documents.
Surge AI introduced EnterpriseBench, a suite of RL environments that simulate enterprise workplaces for AI agents to navigate.
A team of theoretical physicists used GPT-5.2 to propose a new formula for a particle interaction many in the field didn’t think could happen.
Sigil Wen, a self-taught engineer funded by the Thiel Fellowship, claimed to have built “the first AI that earns its own existence, self-improves, and replicates — without needing a human,” called the Automaton.
OpenAI announced a $7.5m grant to co-fund UK AISI’s Alignment Project.
Google.org launched a $30m open call offering “funding, tools, and technical expertise” to researchers accelerating breakthroughs in health and climate science.
BEST OF THE REST
Ethan Mollick published a helpful guide to using AI agents.
Asterisk Magazine described “the sweet lesson of neuroscience” — that the brain’s two-part architecture, split between “learning” and “steering” subsystems, could inform the development of more prosocial AI systems.
Alpha School, the pricey AI-powered private school system, illegally scrapes content from existing online courses without permission and generates faulty lesson plans that “do more harm than good,” 404 Media reported.
Residents of Potters Bar, a small town outside London, are fighting to protect the English countryside from the AI infrastructure buildout.
AI is making it easier for romance scammers to reach victims across language barriers.
Companies are embracing “chatbot marketing,” publishing massive amounts of text in an effort to influence what LLMs say about them.
Wired examined the sex, power, and ambition driving Silicon Valley’s gay founders and investors, including figures such as Sam Altman and Peter Thiel.
MEME OF THE WEEK
Thanks for reading. Have a great weekend.
Correction, February 20: Corrected the article to note that HB 286 does not require safety plans to be reviewed by third parties.