SB 53 protects whistleblowers in AI — but asks a lot in return
Opinion: Abra Ganz and Karl Koch argue that whistleblower protections in SB 53 aren’t good enough on the face of it — but how the state chooses to interpret the law could turn that around
SB 53 places a legal responsibility on frontier AI labs to report whether their models pose a risk to the population. But it places a far greater moral responsibility on their employees to say what those risks are.
Section 4 of SB 53, which makes up a little under a quarter of the bill’s text, gives select employees of AI companies unprecedented rights to report catastrophic risks. All employees are now protected from retaliation when blowing the whistle — whether within the company, to the California Attorney General (AG), or to a federal authority — if their company is making misleading or false statements about its management of catastrophic risks or is otherwise violating SB 53. A subset of employees, those responsible for “critical safety incidents” as defined by SB 53, can also blow the whistle on the very existence of a catastrophic risk. A good lawyer could even argue that SB 53 covers certain contractors, who make up an increasingly large proportion of the tech workforce.
We should celebrate that California (the fifth-largest economy in the world if it were a nation-state, and home to all the major Western AI labs) has created accountability through oversight. Yet the heaviest responsibility falls on the wrong shoulders.
SB 53 places the burden of reporting concrete catastrophic risks from externally deployed AI solely on whistleblowers. Frontier labs are obliged to report “an assessment of catastrophic risk” for each model they develop, but they are under no obligation to report the specifics of any risk, nor to update their assessment if a novel risk is discovered, nor to take any action if a risk is likely to materialize into actual harm, until after harm has already occurred. The moral burden of reporting imminent harm instead falls to an ambiguously defined subset of employees responsible for critical safety incidents. While these employees can access pro bono legal and technical advice through whistleblower charities such as the AI Whistleblower Initiative, the ability to report catastrophic risks, which the law defines as the potential for more than 50 deaths or $1 billion in damage arising from a single incident, should not depend on charity and on individuals choosing to risk losing their jobs and livelihoods.
Beyond insiders who have the opportunity to make significant but potentially career-ending disclosures, the actors best placed to oversee models are third-party evaluation organizations, which do not face the same conflicting incentives. However, whistleblower protections covering these organizations were cut from SB 53 at the last minute. It’s not clear what such organizations can do if the lab whose products they’re evaluating chooses to ignore their concerns: SB 53 does not protect them if they escalate catastrophic risk concerns within the company (e.g. to the board), disclose them to a regulator, or, even in extreme cases with imminent risk of mass harm, make them public. Instead, all of these disclosures can be prohibited by non-disclosure agreements, which are frequently used across the tech industry. This means that companies are once again left to self-assessment, with the only external assessors unable to speak up if they discover critical issues. A moratorium on state-level AI laws, as proposed at the federal level, would remove even self-assessment, as well as the ability of insiders to speak up, leaving businesses and individuals that use AI exposed to the catastrophic risks SB 53 was designed to prevent.
There is still the possibility of pushing responsibility further onto the labs’ shoulders. If the California civil service can implement SB 53 well, if it can understand the risks of AI and what is required to detect them, then it can ask companies to do that work. SB 53 gives the Office of Emergency Services (OES) the power to specify the details of required assessments: if it chooses to use this power, and gives the labs detailed reporting requirements rather than allowing them to write their own, it can force companies to be responsible for reporting the risks they create.
The bill’s most important impact, however, may be an unintended one: forcing California’s government to finally build AI expertise. Currently, that expertise barely exists, deterring insiders from raising concerns: a survey of frontier lab employees by the AI Whistleblower Initiative found 100% of respondents were “Not Confident At All” or “Not Very Confident” that their concerns would be “understood and acted upon by the government.” Given that the EU AI Office has just opened a whistleblowing channel, US insiders might choose to raise their concerns in Europe if there isn’t an expert American alternative. Yet SB 53 will require California’s civil service to get to grips with AI. When a whistleblower reports that a frontier model could assist in creating bioweapons or has evaded its developer’s control, the office on the other end must be able to understand what that means, test whether it’s true, and figure out what to do about it.
SB 53 forces the California AG to address this problem, and the AG’s office has already announced that it is hiring an AI expert. By taking this issue seriously, the California government can not only improve its ability to handle whistleblower reports (through anonymous and secure reporting channels, strong confidentiality policies, and timely response protocols) but also, indirectly, improve companies’ internal policies. Research shows that when regulators take whistleblower protections seriously, companies change their internal reporting policies for the better.
If this bill is implemented well by the AG and the OES, it will allow employees to report the most critical risks, while also removing the need to do so.
Abra Ganz is the geostrategic dynamics team lead at the Center for AI Risk Management & Alignment, where she also leads the Whistleblowing in AI research stream.
Karl Koch is the founder of the AI Whistleblower Initiative, an independent non-profit aimed at supporting insiders at the frontier of AI in safely raising concerns and seeing them addressed effectively.