The RAISE Act can stop the AI industry’s race to the bottom
Opinion: Assembly Member Alex Bores argues that regulation can prevent market pressure from encouraging the release of dangerous AI models, without harming innovation.
The CEOs of OpenAI, Anthropic, and Microsoft have said clearly: We’re in a race for transformative AI, and every second counts. But when your competitors skip safety steps and get to market first, you face pressure to do the same. These companies have failed to meet voluntary commitments and undercut each other on safety to ship faster.
No company wants its models to cause harm, but a race to market punishes the ones that act responsibly. New York already solved this problem in cybersecurity: minimum standards and transparency requirements ensure that companies do not face competitive pressure to cut corners. The RAISE Act applies that proven solution to AI.
Frontier AI labs acknowledge that their models could enable catastrophic harms, including engineered pandemics, crippling cyberattacks, and the loss of human control over autonomous systems. But merely acknowledging these outcomes does not prevent them.
The dynamic is simple: Company A spends six months on safety testing. Company B spends three months and launches first. Company A loses market share. Next cycle, Company A also cuts testing to three months. Then two. Then one. This scenario is not hypothetical: labs are already shortening timelines for safety evaluations.
The risks are concrete. Research published in the journal Science shows that AI models can help non-experts understand how to manufacture dangerous pathogens, and MIT students without science backgrounds used chatbots to identify pandemic-capable viruses and methods for acquiring them in under an hour. Cyberattacks, especially on critical infrastructure, can easily cause more than $1 billion in damages, and AI tools are already cutting exploit development time from weeks to minutes. Within hours of the release of Hexstrike-AI, a tool designed for defensive security testing, posts on the dark web discussed weaponizing it to exploit zero-day vulnerabilities in enterprise systems. These are not hypothetical scenarios. They are playing out today.
When companies competed by cutting cybersecurity corners, New York built a floor. State law requires companies handling consumer data to maintain reasonable security measures. If they fail, the Attorney General can sue. If they suffer a breach, they must notify the AG and affected consumers.
This works. The AG has won settlements and lawsuits against noncompliant companies. New Yorkers and their data are more secure because we enforce minimums.
The RAISE Act does the same for AI. Companies spending more than $100 million to train the most capable models must write and publish a safety plan to reduce catastrophic risks, follow that plan, disclose critical safety incidents, and refrain from releasing models that their own tests find would pose an unreasonable risk of catastrophic harm. If they fail to meet these requirements, the AG can bring a civil action.
AI companies know their own best practices better than regulators do. They are the ones who must implement safety measures. But we have learned from cybersecurity that you need enforcement. Without it, companies race to the bottom.
The objections are predictable: regulation kills innovation and drives companies away. Parts of the tech industry have made exactly this argument while lobbying against the RAISE Act. Yet New York's experience proves otherwise. The state enacted cybersecurity requirements for financial institutions in 2017, followed by the SHIELD Act in 2019, which imposed data security requirements on any business handling New Yorkers' information. These laws work much like the RAISE Act: companies must implement security programs, document them, and notify authorities of breaches. New York remains the financial capital of the world. Wall Street did not relocate to Delaware. The regulations formalized what responsible companies already do. The alternative, catastrophic failure, is what actually destroys industries.
The RAISE Act passed both houses of New York's legislature in June with the support of a majority of Republicans and nearly every Democrat. It now awaits Governor Hochul's signature. California's Governor Newsom has already signed into law SB-53, which lays out similar transparency requirements.
Congress should create federal standards. Until it does, states must act. New York led on cybersecurity regulation. We have led on AI research. We can lead on AI safety.
Alex Bores serves as the Assembly Member for New York's 73rd District and is the sponsor of the RAISE Act.