Tech's scared of the RAISE Act
Transformer Weekly: The RAISE Act, the RISE Act, and a whole lot of lobbying
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
With the RAISE Act having passed the New York legislature, lobbying efforts to kill it are ramping up.
The Chamber of Progress recently called on Gov. Kathy Hochul to veto the bill, claiming that it would “be an eviction notice” for New York’s AI startups.
That’s despite the fact that the bill “only applies to the largest AI companies that have spent over $100 million in computational resources to train advanced AI models”, according to its authors.
To refresh your memory: the bill requires such companies to produce and follow safety and security protocols, and requires them not to release models that “create an unreasonable risk of critical harm”.
If companies don’t comply, the New York attorney general will be able to seek civil penalties against them.
The Chamber also claimed that the bill was “rushed”, which seems … false. (We first covered the bill in Transformer back in January).
The Computer & Communications Industry Association also called for a veto.
Anthropic’s Jack Clark outlined “some concerns” about the bill, arguing that it’s “overly broad/unclear in some of its key definitions”, and “represents a real risk to smaller companies”.
Bill sponsor Alex Bores, meanwhile, claimed that representatives from Andreessen Horowitz were “calling NY Congressmembers to get them to speak out against the RAISE Act”.
a16z is one of the backers of the American Innovators Network, a group which recently published a letter opposing the bill, and is reportedly “planning to invest north of six figures” in a campaign to oppose this and other AI bills in New York.
In other words: the RAISE Act is the new SB 1047, and you should expect to hear a lot about it.
The discourse
Sen. Chris Murphy published a piece on AI policy:
“Once AI can reason more effectively than a human (called Artificial General Intelligence or “AGI”), AI will undoubtedly be a net job killer, not a job creator … I want to beat China in the race for advanced AI. I do. But not at any cost. If we do not use government policy and intervention to control for the job loss and to protect consumers, it won’t matter that we get to AGI before China.”
Axios founders Jim VandeHei and Mike Allen wrote a great column about the risks of AI:
“Elon Musk has put the risk as high as 20% that AI could destroy the world. Well, what if he's right? … p(doom) demands we pay attention ... before it's too late.”
Francis Fukuyama is also coming around to loss-of-control risks:
“The other kind of fear, which I always had trouble understanding, was the ‘existential’ threat AI posed to humanity as a whole. This seemed entirely in the realm of science fiction. I did not understand how human beings would not be able to hit the ‘off’ switch on any machine that was running wild. But having thought about it further, I think that this larger threat is in fact very real, and there is a clear pathway by which something disastrous could happen.”
On the other hand, Meta CFO Susan Li really doesn’t seem to be feeling the AGI:
“You can build all this capacity, and what do you do with them if it turns out you don't need as much compute for either training or inference as you thought? I think a lot of us have different backup use cases … the real question is what happens in, like, two years if you've built so much compute that you cannot envision a reasonable ROI on the backup use case if what you're building doesn't come to fruition. And that's something I think we're all going to learn in the next few years.”
Policy
Sen. Cynthia Lummis introduced the RISE Act, which (in her words), “creates a safe harbor for AI developers — only if they meet clear standards for transparency and documentation”.
It requires companies to publish model cards and model specifications, in exchange for liability protections.
(Interestingly, the bill text mentions the possibility of AGI and superintelligence!)
The AI Futures Project (authors of AI 2027) were consulted on the bill and tentatively endorse it; Daniel Kokotajlo has an interesting analysis here.
Sens. Cantwell and Blackburn continued to push back on the state AI regulation moratorium. The provision is now with the Senate parliamentarian.
The California Report on Frontier AI Policy, commissioned by Gov. Gavin Newsom last year, was published. The Verge has a nice summary.
A draft version was published in March; this final report notes that since then “evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown”.
It also argues that “whistleblower protections, third-party evaluations, and public-facing information sharing are key instruments to increase transparency.”
Rep. John Moolenaar called for “safety testing, transparency requirements for new models and strong oversight of training data, so we don’t end up with CCP-style systems like DeepSeek”.
Taiwan blacklisted Huawei and SMIC, blocking local companies from shipping to them without government approval.
The European Commission launched a call for independent experts to join its AI Scientific Panel.
Spain's AI agency director said the country will be ready to enforce the EU AI Act on schedule.
A group of researchers put out a great piece explaining what’s going on with the China AI Safety and Development Association — the closest thing China has to an AISI.
They argue that its primary function is “to represent China in international AI conversations”, as opposed to testing and evaluating frontier AI models domestically.
A new report found that China's spy agencies are investing heavily in AI.
Influence
The FT reported this week that lobbyists for Amazon, Google, Meta, and Microsoft are all pushing for the moratorium on state AI legislation.
Andreessen Horowitz called for NIST to establish a “National AI Competitiveness Institute”.
President Trump will attend the “inaugural Pennsylvania Energy and Innovation Summit” next month. The event will be focused on “Pennsylvania’s incredible potential to power the AI revolution”.
Relatedly, a bunch of energy + AI execs had a meeting this week, including OpenAI’s Chris Lehane and the CEOs of Exxon, BP, and Occidental.
More than 20 tech safety advocacy groups urged Congress to pass the AI Whistleblower Protection Act.
The Midas Project and Tech Oversight Project published “The OpenAI Files” — a very helpful compendium of OpenAI’s inner workings, conflicts of interest, and various wrongdoings.
Industry
Meta reportedly tried and failed to buy SSI and hire Ilya Sutskever. Instead, it is now reportedly in talks to hire SSI’s Daniel Gross and former GitHub CEO Nat Friedman.
The deal would seemingly involve buying out part of the pair’s venture fund for over $1b.
The two will reportedly work for Alexandr Wang, who is in talks to take the title of “chief AI officer”.
Scale AI, meanwhile, is losing customers left and right after the Meta investment.
Google, the company’s largest customer, is reportedly planning to cut ties. It was previously on track to spend $200m with Scale this year, according to Reuters.
OpenAI said it’s phasing out work with Scale, too, though claimed that this had been planned before the Meta deal.
The Information has a great profile on Surge AI, a little-known competitor that has actually outgrown Scale in the past year.
OpenAI and Microsoft’s relationship is reportedly souring — a lot. The WSJ reported that OpenAI has discussed making antitrust complaints against Microsoft.
Part of the argument is reportedly around whether Microsoft should have access to Windsurf’s IP following an OpenAI acquisition.
They’re also fighting over how much equity Microsoft should get in the new OpenAI PBC: Microsoft reportedly wants a 33% stake, while OpenAI wants to give it less. OpenAI also wants to ditch its cloud-exclusivity deal with Microsoft.
The FT reported that Microsoft “is prepared to walk away” from the negotiations — something which could prevent OpenAI’s restructuring and torpedo its most recent funding round.
The DoD awarded OpenAI a $200m contract to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains.”
Masayoshi Son is reportedly trying to build a “trillion-dollar industrial complex in Arizona to build robots and artificial intelligence”. He wants TSMC to be involved.
xAI is reportedly in talks to raise another $4.3b. It’s reportedly spending $1b a month at the moment.
Google made its Gemini 2.5 Flash and Pro models generally available and introduced a more cost-efficient model, 2.5 Flash-Lite.
Google also confirmed it uses YouTube videos to train AI models like Gemini and Veo 3.
Amazon CEO Andy Jassy said the company expects that implementing AI tools “will reduce our total corporate workforce” size.
BT’s CEO said something similar.
A Chinese AI company reportedly circumvented US chip restrictions by flying engineers with hard drives to Malaysia, where they used Nvidia chips to train a model before flying back with the weights.
Moves
Matt Clifford is stepping down as the UK PM’s AI adviser next month. He’s said he’s leaving “for purely personal reasons”.
Jung-Woo Ha was appointed Korea’s senior secretary to the president for AI & future planning.
OpenAI has laid off some of its insider risk team, and says it plans to “revamp” it.
Time Magazine is launching a new AI newsletter.
Best of the rest
OpenAI published a piece explaining how the company’s “preparing for future AI capabilities in biology”, outlining some of its safeguards and announcing a biodefense summit next month.
The NYT has a great piece on how ChatGPT can induce dangerous delusions in vulnerable users, including one man who nearly killed himself after the chatbot encouraged his belief in simulation theory.
The NYT also profiled Mechanize, the startup formed by ex-Epoch researchers which is explicitly trying to automate white-collar jobs ASAP.
A bunch of new papers were published on emergent misalignment. One found that the phenomenon extends to reasoning LLMs, while another (from OpenAI) proposed a way to detect and fix the behavior.
Google released a technical report on Gemini 2.5, outlining its new RL*F approach (Reinforcement Learning from Human and Critic Feedback).
The Oxford Martin AI Governance Initiative published a paper exploring how to standardize AI risk tiers.
David C. Krakauer, John W. Krakauer, and Melanie Mitchell published a paper exploring emergence from a “complex systems perspective”.
The NAACP and Southern Environmental Law Center announced plans to sue xAI over alleged Clean Air Act violations at its Memphis facility.
Almost 7,000 UK university students were caught cheating with AI tools in 2023-24.
Thanks for reading; have a great weekend.