California's latest AI safety bill might stand a chance
SB 53 is entering the home stretch despite industry lobbying. Will it make it over the line?
It’s been nearly a year since Gavin Newsom vetoed California Senate Bill 1047, a sweeping effort to regulate AI led by Senator Scott Wiener that rallied a motley crew of Nobel laureates, Hollywood actors, and AI developers against ultrarich tech giants. Despite widespread public support across the political spectrum — “don’t trust huge tech companies to police themselves” is often convincing to everyone outside the companies in question — Newsom buckled under industry pressure.
Yet the ghost of SB 1047 lives on in SB 53, Sen. Wiener’s watered-down bill designed to implement independent recommendations without spooking industry critics.
AI policy watchers think it has good odds of passing. “I would guess, with roughly 75% confidence, that SB 53 will be signed into law by the end of September,” recently departed White House AI advisor and prominent SB 1047 critic Dean Ball told Transformer. Yet, although tech companies aren’t raising as much hell as last year, they are still quietly pushing back against the much weaker bill.
Immediately after vetoing SB 1047, Newsom commissioned a report from the Joint California Policy Working Group on AI Frontier Models to guide future attempts at AI regulation. The task force, which included AI godmother and SB 1047 opponent Fei-Fei Li, ironically produced recommendations that mostly mirrored the policies Newsom had just vetoed. However, notably absent from its report was any serious discussion of legal liability.
Unlike SB 1047, which would have held large AI companies liable for catastrophic harms caused by their models, SB 53 focuses on transparency. The bill would require large developers (think: OpenAI or larger) to publish model cards and safety policies. Importantly, they’d have to actually follow those safety policies, rather than rely on “voluntary commitments” that some seem to ignore. It would also expand whistleblower protections to include employees, independent contractors, and other external collaborators, and create a formal channel for reporting safety breaches. (SB 53 also resuscitates the uncontroversial, but likely unfeasible, call for CalCompute, a public AI compute cluster.)
The bill initially reinforced the “trust but verify” ethos contained in the working group’s report via independent third-party audits, which would have checked whether large developers were making good on their safety promises. Although such audits wouldn’t be required until 2030, the provision was nixed entirely last week. Now, the bill’s enforcement mechanism is more “trust” than “verify,” hinging on whistleblowers and whatever incident reports the Attorney General’s office can piece together. Getting caught violating safety agreements would depend almost entirely on someone inside choosing to speak up.
Still, the transparency requirements could indirectly strengthen future liability cases: published safety policies create a public record that could later be used in lawsuits, helping judges and juries assess what counts as an “industry standard” or “reasonable care” in AI development.
Legislators also raised the annual revenue threshold determining which companies count as “large AI developers” from $100m to $500m, meaning the law currently exempts all but the largest, such as Meta, OpenAI, Anthropic, and Google. Such companies must also have trained a model “using a quantity of computing power greater than 10^26 integer or floating-point operations.” The bill’s structure also addresses one of the biggest criticisms of last year’s effort: rather than targeting models, it targets the companies that develop them.
Basically, SB 53 would require a small handful of very powerful companies to publish the safety policies and model cards that most already share publicly, albeit inconsistently. They would also be required to follow their own policies. Most frontier AI companies, save for Meta, have already signed the EU AI Code of Practice, which obligates them to develop safety frameworks and model cards, but not to publish them in full.
The AI industry’s complaints have, unsurprisingly, shifted to match the updated bill. Last year, opponents of SB 1047 — including Andreessen Horowitz, Y Combinator, and tech trade groups like Chamber of Progress — loudly worried that the bill’s liability provisions would stifle “Little Tech” and, misleadingly, that developers could be jailed for misuse of their products. Now, they’re saying — mostly in Sen. Wiener’s office, we’re told, rather than public-facing op-eds — that SB 53 would force companies to file needless paperwork, reveal trade secrets, and deal with a messy patchwork of state laws (especially given that the proposed moratorium on state AI laws is, for now at least, dead). “Some of these folks are just never going to be happy,” said Nathan Calvin, vice president of state affairs and general counsel at AI advocacy non-profit Encode.
On August 11, OpenAI’s head of global affairs Chris Lehane asked Newsom to “consider frontier model developers compliant with [California’s] state requirements when they sign onto a parallel regulatory framework like the CoP [EU AI Act Code of Practice] or enter into a safety-oriented agreement with a relevant US federal government agency.”
In essence, Lehane argues that if a company can comply with looser federal or international guidelines, it should get a free pass to ignore California’s regulations.
As AI policy researcher Miles Brundage pointed out on X this week, “It’s very disingenuous to act as if OpenAI is super interested in harmonious US-EU integration + federal leadership over states when they have literally never laid out a set of coherent principles for US federal AI regulation.” Instead, he suspects that OpenAI’s true end game is “make the number of bills go down.” One OpenAI employee posted privately on X: “I am concerned about the vibes of what we’ve been putting out, and I’m concerned that Miles is concerned.”
In mid-August, Politico reported that OpenAI had hired over half a dozen Democrat-linked lobbyists over the past year, Lehane among them. Then two new pro-AI super PACs launched: Meta’s own Meta California, and Leading the Future, backed by Andreessen Horowitz and OpenAI co-founder Greg Brockman. Their mission is clear: discourage AI regulation at all costs.
All of this happened before SB 53 was amended last Friday. Now, without audit requirements or applicability beyond the state’s biggest tech companies, OpenAI’s complaints about regulation “stifling innovation” appear even weaker.
Ultimately, it’s all up to Newsom. We won’t have to wait long for his decision: SB 53 has already cleared the Senate and will likely be sent to the governor’s desk by September 15. He’ll have until October 15 to sign or veto.
The case for signing looks strong on paper. SB 53 draws directly from the California Report on Frontier AI Policy, which Newsom commissioned himself, and SB 1047’s most controversial provision has already been removed. Recent polling suggests that most of his Democratic constituents don’t trust AI developers to regulate themselves, and signing the bill could boost his national profile as someone willing to take on Big Tech, just in case he runs for president in 2028. Even his wife, Jennifer Siebel Newsom, publicly supports AI regulation.
That said, as Common Sense Media founder Jim Steyer told the Sacramento Bee, “He needs money from the tech industry. That’s really the equation.” Despite shifting focus from liability to transparency, SB 53 still contains elements of SB 1047 — and pro-AI lobbyists and longtime allies such as venture capitalist (and Democratic power broker) Ron Conway could convince Newsom that it’s too similar to the bill he already rejected.
Whether or not this bill passes, it won’t address most people’s biggest concerns about AI, nor can a state bill do so at the federal level. While AI safety researchers worry about theoretical existential risks — which SB 53 likely isn’t strong enough to protect against — the rest of the country worries about job loss and ChatGPT helping teenagers take their own lives.
“I don’t know exactly what it will take for people to feel like those concerns are being assuaged by government action, but I don’t think SB 53 is that bill,” said Ball, who since leaving the White House has become a senior fellow at the Foundation for American Innovation and restarted his Hyperdimensional newsletter.
“I don’t think that the average American is like, ‘Man, I’d really trust these AI companies more if there were mandatory disclosures about how they mitigate biorisk,’” he added. But, especially in the absence of coherent federal leadership on AI safety, “that doesn’t mean the bill isn’t worth doing.”
Correction Sep 5: This article was amended to remove a reference to SB 53 having cleared the assembly. A vote is expected next week.