What Ted Cruz’s SANDBOX Act would actually do
Transformer Weekly: OpenAI restructuring, Altman and Huang in the UK, and AI hunger strikes
Welcome to Transformer, your weekly briefing of what matters in AI. You’ll notice a redesigned issue this week to coincide with our relaunch. Do let us know what you think — just hit reply to this email. And if you’ve been forwarded this email, click here to subscribe and receive future editions.
NEED TO KNOW
OpenAI and Microsoft are inching closer to a deal over OpenAI’s non-profit restructuring.
The fight over the NDAA’s GAIN Act is heating up.
And the FTC subpoenaed a bunch of AI companies.
But first…
THE BIG STORY
At the very cringily named “AI’ve Got A Plan” hearing on Wednesday, Sen. Ted Cruz unveiled his much-trailed roadmap for AI policy — and a bill to start implementing it.
Cruz’s “light-touch regulatory framework” calls for a bunch of things, including streamlining AI infrastructure permitting, reforming the “AI priorities and goals” of NIST, preventing “burdensome state AI regulations”, and opposing “AI driven eugenics”, for some reason.
And alongside the framework, Cruz unveiled the SANDBOX Act, an attempt to “let artificial intelligence companies apply for exemptions from federal regulation to help them experiment in developing new technology.”
So what would it do?
The act would let companies apply for waivers from federal AI rules. They would need to describe the risks of waiving the rules, explain how they plan to mitigate them, and make the case that the benefits outweigh those risks.
Applications for a waiver would be made to the relevant federal agency, but the White House OSTP director (currently Michael Kratsios) would have the power to overrule an agency’s decision not to grant one.
The definition of what can be waived is very broad — it’s any “rule … including any associated guidance, frequently asked questions publications, bulletins, or associated, derivative material and any rule the adoption of which is expressly required by statute.”
Waivers would last two years, and could be renewed for a total of eight more. Renewals would be granted by default, unless “relevant information or circumstances have materially changed.”
If granted a waiver, companies would have to notify OSTP and agencies of “any incident that results in harm to the health and safety of a consumer, economic damage, or an unfair or deceptive trade practice,” and submit regular reports on risks and mitigations.
The act could also speed up deregulation. It tasks OSTP with prepping an annual report for Congress that “details each covered provision that the Director recommends should be amended or repealed as a result of persons being able to operate safely without those covered provisions.”
And it creates a mechanism — similar to, but weaker than, the Congressional Review Act — for Congress to fast-track repeals.
The bill does not appear to target frontier model developers — at least not for now. There simply aren’t many federal regulations that apply to them. Instead, the SANDBOX Act would likely be a bigger deal for companies applying AI in regulated industries like healthcare and finance.
That’s not necessarily a bad thing. As R Street’s Adam Thierer points out, innovations in those sectors “could … be immediately confronted by archaic rules” — something a waiver could address without much risk.
The bill’s risk assessment and transparency provisions, meanwhile, are also pretty good.
But it has some serious downsides.
For one thing, the waivers are too long: two-year waivers that would, by default, be extended to 10 years seem unlikely to keep pace with progress in AI.
The fast-track repeal provision, meanwhile, creates significant pressure on Congress to deregulate whenever the White House recommends it, which could lead to rushed and poorly thought-out decisions.
And though the act probably doesn’t have many implications for frontier model development right now, that could quickly change. The Future of Life Institute’s Michael Kleinman is perhaps overstating it when he says the bill “would render any AI regulation enacted by Congress irrelevant” — but he’s directionally correct. (Disclosure: Transformer’s publisher receives funding from the Future of Life Institute.)
Regardless of its merits, familiar battle lines are already being drawn. Tech industry groups appear to be fans, with endorsements from the Chamber of Commerce and the Information Technology Industry Council. Advocacy organizations such as Public Citizen and the Tech Oversight Project are not so happy.
In any case, expect lots of opportunities to fight about it: Bloomberg reports that “it’s unlikely Congress will successfully pass comprehensive AI legislation by the end of this year.”
— Shakeel Hashim
ALSO NOTABLE
Sam Altman and Jensen Huang will reportedly announce a UK AI infrastructure deal when they accompany Donald Trump on his state visit next week, where they will also join Trump at his state “banquet” with King Charles.
Nvidia will provide the chips and OpenAI the AI tools and tech, reports the Financial Times, with the energy to power it all supplied by the UK government.
The deal “could ultimately be worth billions of dollars” and will likely be trumpeted by the UK government, which is hoping AI will help solve low productivity and dig it out of a fiscal hole.
The trip follows a UK government shakeup that sees long-serving Labour centrist Liz Kendall take over as Secretary of State for Science, Innovation and Technology from Peter Kyle.
Kendall seems slightly less immediately concerned about AI than her predecessor. Responding to a parliamentary question about the failure to pass legislation on catastrophic AI harms, she defaulted to promising that the UK would “benefit from the huge opportunities that technological developments in AI promise, and that people are protected, too.”
The detail of AI work is more likely to fall on the new AI minister working under Kendall, Welsh MP Kanishka Narayan.
The former civil servant and VC investor has been a relatively prominent voice in tech debates, was head of Labour’s tech policy in opposition, and by all accounts takes AI capabilities and risks very seriously.
One Welsh politics insider told us he had “a good rep” and was “sensible.”
— Jasper Jackson
THIS WEEK ON TRANSFORMER
Welcome to Transformer 2.0: Read all the details of our recent relaunch and new team.
Chip location verification is the new export control battleground — Jonathan Stein explores the latest push to crack down on chip smuggling, and whether there’s the political will to make it happen.
We’re getting the argument about AI's environmental impact all wrong — James Ball lays out the facts on AI energy and water use.
Opinion: Why AI evals need to reflect the real world — Rumman Chowdhury and Mala Kumar call for the infrastructure and investment to get better evaluations.
THE DISCOURSE
Sam Altman went on Tucker Carlson’s podcast, where he notably did not talk about his previous predictions of AGI-inflicted disaster:
“If I could get one piece of policy passed right now, relative to AI, the thing I would most like, and this is in tension with some of the other things that we talked about, is I'd like there to be a concept of AI privilege.”
The wildest part of the interview was when Carlson not-so-subtly accused Altman of murdering former OpenAI employee Suchir Balaji. (Altman was not happy.)
At the “AI’ve Got A Plan” hearing, OSTP director Michael Kratsios outlined what he wants from NIST:
“My number one priority for NIST would be to work on the very hard science associated with model evaluation and metrology.”
Arvind Narayanan and Sayash Kapoor, whose essay “AI as Normal Technology” sparked debate earlier this year, clarified their thesis:
“There is a long causal chain between AI capability increases and societal impact. Benefits and risks are realized when AI is deployed, not when it is developed. This gives us (individuals, organizations, institutions, policymakers) many points of leverage for shaping those impacts…our efforts should focus more on the deployment stage both from the perspective of realizing AI’s benefits and responding to risks.”
Casey Newton has a good response to their work.
A handful of activists went on hunger strikes this week, demanding an end to the AGI race.
Guido Reichstadter has been outside Anthropic’s offices since the beginning of the month, waiting for a response from Dario Amodei:
“I figure that if a man has consciously decided to put my life at risk of imminent harm, as well as the lives of my family — not to mention everyone on Earth — he owes it to me to look me in the eyes and tell me why he won’t stop doing so.”
Michaël Trazzi and Denys Sheremet joined in outside Google DeepMind’s offices, but Trazzi stopped striking on Thursday afternoon due to health concerns. He posted:
“My letter to Demis Hassabis has been delivered directly to him, and I'm still waiting for his response to this first step: publicly commit that DeepMind would halt frontier AI development if other major labs agreed to do the same.”
A group of cognitive scientists and AI researchers published a paper warning against “uncritical adoption” of AI in academia:
“Under the banner of progress, products have been uncritically adopted or even imposed on users…For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle.”
Steve Newman argued that truly agentic AI remains elusive:
“In April 2024, it seemed like agentic AI was going to be the next big thing. The ensuing 16 months have brought enormous progress on many fronts, but very little progress on real-world agency … I think robust real-world capability is still years away.”
At NatCon, the annual MAGA gathering, psychology professor Geoffrey Miller used a panel on culture wars to call on populists to wage a holy war against AI developers:
Specifically, he called industry leaders “betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids” who are “by and large, globalist, secular, liberal, feminized transhumanists. They explicitly want mass unemployment, they plan for UBI-based communism, and they view the human species as a biological ‘bootloader,’ as they say, for artificial superintelligence.”
POLICY
The FTC subpoenaed OpenAI, Google, Meta, xAI and others, asking for information about their AI products’ interactions with children.
The fight over Sen. Jim Banks’ GAIN AI Act is heating up.
The National Defense Authorization Act provision would require chipmakers to ensure American buyers get priority access before selling to “countries of concern.” Rep. John Moolenaar is trying to get it into the House version of the NDAA.
But the Semiconductor Industry Association told congressional leaders it had “serious concerns” with the bill, while Nvidia labeled GAIN AI Act supporters — who include lots of traditional China hawks — as “AI doomers” with links to effective altruism.
Nvidia said: “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide.”
Oren Cass, who backs the bill, responded that the company “seems happy to shred its credibility for the sake of getting more AI chips into China.”
But the corporate lobbying might be working: Banks said this week that he’s open to revising the bill. Expect to hear lots more about this: NDAA negotiations could drag out for months.
Elsewhere in the NDAA: one provision calls for establishing a temporary "Artificial General Intelligence Steering Committee" to analyze military applications of AGI.
And the version of the bill which passed the House this week includes an amendment that “directs DoD to establish an initiative to fully harness advanced AI, modernize adoption plans, and analyze the relative capabilities of the US and PRC in advanced AI.”
The House Appropriations Committee approved a spending bill which includes $1.28b for NIST, and urged the agency “to continue its work in developing voluntary standards and testing methodologies for AI alignment, safety, and risk mitigation.”
The Chip Security Act looks delayed, as the House Foreign Affairs Committee awaits “technical assistance” from the Commerce Department.
The Commerce Department is reportedly considering annual approvals for exporting chipmaking equipment to Samsung and SK Hynix’s Chinese factories.
Reps. Jim Costa and Blake Moore introduced a bill to study how AI data centers are raising utility costs in rural communities.
Sen. Jon Husted introduced the Children Harmed by AI Technology (CHAT) Act, which would require age verification for users of AI chatbots.
Sen. Adam Schiff and other Democrats asked the Environmental Protection Agency and Defense Department how they’re going to ensure data center buildouts don’t hurt the environment.
Sen. Elizabeth Warren raised concerns about xAI’s $200m defense contract in a letter to Defense Secretary Pete Hegseth, citing “the slew of offensive and antisemitic posts generated by Grok.”
Anthropic backed California’s SB 53. We asked all the other frontier AI developers if they’d do the same; none of them did.
The bill passed the California Assembly’s Privacy and Consumer Protection Committee this week. (Our piece on the bill last week incorrectly told you that it had already passed the Assembly — it has yet to do so.)
SB 243, which targets AI companion tools, passed the California legislature.
France and Germany published their draft laws to implement the EU AI Act.
And the EU delayed a decision on whether to delay parts of the AI Act.
Australia's online safety regulator warned that AI chatbots pose a "clear and present danger" to children and rolled out new age verification rules.
South Korea expanded its National Growth Fund for AI and strategic industries to $109b, and allowed 12m public works to be used for AI training. President Lee Jae Myung also said international AI norms are urgently needed.
Albania appointed an AI bot as a minister.
INFLUENCE
NetChoice, a tech industry association which regularly campaigns against AI regulations, is forming a super PAC.
Nvidia will host a DC forum on Tuesday to discuss “Winning the AI Race” with policymakers.
Chamber of Progress launched the Blue Horizon Project to “restore tech optimism in the Democratic Party.”
Janet Yellen, Ben Bernanke, Daron Acemoglu and other leading economists wrote to the Department of Labor urging it “to make collecting and providing high-quality and timely data for monitoring AI’s impact on labor markets a top priority.”
Semafor explored how David Sacks has managed to succeed in the White House.
INDUSTRY
OpenAI
OpenAI and Microsoft are inching towards a restructuring deal. The companies announced a “non-binding memorandum of understanding,” and said they’re “actively working to finalize contractual terms in a definitive agreement.”
The new agreement reportedly maintains the “AGI clause,” with some modifications.
OpenAI is giving its non-profit $100b in equity, and said the non-profit will “[hold] the authority that guides our future.”
It also said that its “PBC charter and governance will establish that safety decisions must always be guided by [our] mission” — “ensuring AGI benefits all of humanity.”
OpenAI also reportedly signed a $300b cloud computing deal with Oracle, one of the largest contracts of its kind.
The Wall Street Journal reported that OpenAI executives were considering leaving California over opposition to their restructuring plan.
Last week, California and Delaware attorneys general threatened to block OpenAI’s restructuring after deaths linked to ChatGPT.
Miles Brundage called bullshit, and OpenAI denied the WSJ report.
The company reportedly projects it will burn through $115b by 2030, $80b more than it previously forecast.
And it formally launched its third Asian subsidiary in South Korea on Wednesday, where it says it may build a data center.
Microsoft
Microsoft reportedly plans to use Anthropic’s tech to partially power Office 365 apps, lessening its dependence on OpenAI.
AI chief Mustafa Suleyman told Wired that designing AI systems to mimic consciousness is “dangerous and misguided.”
Meta
After its summer AI hiring spree, Meta is struggling to keep existing employees happy amid internal tensions with highly paid newbies.
Meta reportedly signed a $140m contract with AI image generation company Black Forest Labs.
At dinner with Trump last week, Mark Zuckerberg said Meta plans to spend “something like at least $600 billion” on data centers and AI infrastructure through 2028.
Anthropic
Anthropic agreed to pay at least $1.5b to settle a copyright lawsuit with authors. The judge overseeing the case said he wants better methods to make sure the money actually goes to authors before he’ll approve the settlement.
Claude now has memory, plus a tool akin to ChatGPT’s code interpreter, which can write and execute Python scripts, as well as make spreadsheets, documents, and slide decks.
Claude’s had a bunch of outages and degraded model output recently.
Apple
Apple’s iPhone event had very little discussion of AI — the big exception being a new “Live Translation” feature for AirPods.
That feature is blocked for EU users, however, likely due to regulatory concerns.
Apple was sued by authors who claimed it illegally trained its AI models using copyrighted books.
Politico reported that Apple shifted its AI training guidelines two months after Trump’s inauguration, removing mentions of “systemic racism” and marking “DEI” as “controversial,” among other changes.
Others
Nvidia’s CFO said it’s got H20 licenses for several “key customers” in China, but there’s “a little geopolitical situation that we need to work through between the two governments.”
SemiAnalysis has a deep-dive on Huawei’s chip production. It expects “Huawei to be able to make millions of chips this year, and to be bottlenecked by HBM next year.”
Oracle has signed multibillion-dollar AI contracts with major companies beyond OpenAI, including xAI, Meta, and Nvidia.
Google launched Google AI Plus, a lower-cost plan for emerging markets — starting with Indonesia.
Replit launched Agent 3, a software development agent which it says can run for up to 200 minutes.
The company raised $250m at a $3b valuation.
Two Chinese AI models — Alibaba’s Qwen3 and Moonshot’s Kimi-K2 — are catching up with top US models. ByteDance’s new image generation tool reportedly outperforms Google DeepMind’s “Nano Banana” model, too.
The UAE’s Mohamed bin Zayed University of Artificial Intelligence launched a tiny new open-source AI model called K2 Think, built on Qwen 2.5, which it claims is as good as GPT-OSS 120B and DeepSeek v3.1.
Vantage Data Centers raised $1.6b from Singapore’s GIC and the Abu Dhabi Investment Authority to expand in the Asia-Pacific region.
Databricks raised $1b at a $100b valuation.
Dutch chip equipment giant ASML agreed to invest €1.3b (~$1.5b) in Mistral, becoming Mistral’s largest shareholder and valuing the company at €12b.
Perplexity has reportedly raised $200m at a $20b valuation.
Encyclopedia Britannica and Merriam-Webster sued it for copyright infringement.
AI coding agent developer Cognition raised over $400m at a $10.2b valuation.
Model training startup Mercor is reportedly targeting a valuation of at least $10b.
Physical Intelligence, which makes AI models for robots, is reportedly raising at a $5b valuation.
The 996 trend is apparently real — Ramp caught San Francisco tech workers charging corporate cards for loads of delivery and takeout orders on Saturdays.
MOVES
OpenAI safety researcher Stephen McAleer has left.
Edwin Arbus, OpenAI’s dev community lead, is also leaving.
And OpenAI folded its Model Behavior team, which shapes ChatGPT’s personality, into its larger Post Training team.
Model Behavior lead Joanne Jang will now head OpenAI’s new OAI Labs research team.
Miles Turpin left Scale AI to join Meta Superintelligence Labs. He’ll work on safety and alignment evaluations there.
Scale AI cut 12 contractors from its Red Team following Meta’s recent investment in the company.
Patrick Hsu, co-founder of Arc Institute, joined Thrive Capital as a venture partner.
Zachary Isakowitz joined Nvidia as a director of government affairs.
TechNet appointed Mike Ward as senior VP of federal policy and gov relations.
Juliana Heerschap joined the Abundance Institute as chief of staff for policy and strategy.
Sam Manning (ex-OpenAI) joined the Foundation for American Innovation as a non-resident fellow.
RESEARCH
Math, Inc. announced an autoformalization agent that reportedly completed the Strong Prime Number Theorem project in three weeks, autonomously breaking through barriers that had blocked top mathematicians Terence Tao and Alex Kontorovich.
OpenAI published a paper arguing that LLMs hallucinate because training and evaluation reward guessing over admitting to uncertainty.
Researchers at ETH Zurich built a real-time hallucination detector that flags uncertain tokens in LLM outputs.
Thinking Machines Lab launched a new research blog. Its first post examines why LLM inference engines aren’t deterministic.
Meta AI research lab FAIR claimed that several popular AI models, including Claude and Qwen, “cheated” on the SWE-bench Verified benchmark.
A UPenn study found that using psychological persuasion tactics on LLMs can “convince” them to go against their system prompts.
The newly-launched Institute for Decentralized AI aims to “build the protocols, standards, and tooling that make decentralized AI work in the real world.” It’s hiring five fully-funded academic visitors at Oxford and Stanford.
BEST OF THE REST
Dean Ball, former White House AI advisor, did a fascinating Statecraft interview about writing the Trump administration’s AI Action Plan.
Human coders competed against AI-assisted teams in a San Francisco hackathon. AI won.
Rob Wiblin interviewed Neel Nanda about his changing perspectives on mechanistic interpretability.
The FT wrote about how Trump’s increasingly strong ties to Silicon Valley are alienating his populist base, who largely view AI as a threat to jobs and conservative values.
A new licensing standard called RSL launched to help publishers get paid when AI companies use their content for training.
The AI-powered Friend necklace, which listens to conversations and provides running commentary, is predictably awful (and weirdly mean to Wired reporters).
Doctors are scrambling to figure out how “AI psychosis” actually works.
Despite the hype, AI drug discovery startups have yet to bring breakthrough drugs to market — the FT explored why.
Companies are finding that consulting firms’ AI “experts” often have “no more expertise on AI than they did internally.”
The job market has become a nightmarish ouroboros of robots filtering AI-generated applications.
Inception Point AI launched a network of AI-generated podcasts.
Online travel platforms are bracing for AI agents that could soon bypass their services.
Business Insider published a look inside the strange world of AI trainers, who serve as “part speech pathologists, part manners tutors, part debate coaches.”
The AI Darwin Awards is collating the year’s dumbest AI failures.
The Information profiled some of California’s most exclusive AI hacker houses. Relatedly, Silicon Valley offices are increasingly going shoeless.
Latin American musicians are blaming AI-generated tracks for stealing their income.
AI-generated nostalgia videos on TikTok and Instagram are creating false memories of the idealized pre-AI '80s and '90s.
Behold: the Center for the Alignment of AI Alignment Centers. (Pay close attention to the swirling lines on its homepage.)
Thanks for reading. If you liked this edition, forward it to your colleagues or share it on social media. Have a great weekend.