Insurance might be the key to making AI secure
Opinion: Cristian Trout, Rajiv Dattani, and Rune Kvist argue that insurance can help reward responsible development of AI.

Houses in Philadelphia in the 1700s, made of wood and packed tightly together, had a nasty habit of burning down.
As the city’s population exploded tenfold that century, residents were building faster than fire management could keep up. Homeowners and builders couldn’t assess their own fire risk and didn’t bear the full cost of their carelessness when fires spread. So in 1752, Benjamin Franklin founded the Philadelphia Contributionship, America’s first fire insurance mutual. Beyond financial coverage, it incentivized brick construction, spread fire-prevention practices, and improved firefighting equipment, enabling Philadelphia — and the insurer — to grow safely.
Nearly three hundred years later, this tried-and-true playbook could be key to ensuring AI is adopted widely while its risks are quantified and mitigated cost-effectively.
Insurers’ incentives are aligned with both goals. They are financially motivated to enable AI adoption, since it unlocks a new market for them, and accurately quantifying and carefully managing risk is how they profit from that market. They do both by developing and enforcing standards, funding safety research, and running audits that drive risk pricing.
This isn’t a new idea. Throughout history, we see this virtuous cycle of insurance encouraging safe innovation and the adoption of new technologies. When electricity created new fire hazards, insurers funded Underwriters’ Laboratories to research risks, develop standards, and certify products — including the lightbulbs in your house. When automobile demand surged after World War II, insurers established the Insurance Institute for Highway Safety in 1959, nearly a decade before federal action. This institute developed crashworthiness ratings and lobbied for airbags, contributing to the 90% drop in deaths per mile over the 20th century. Today, it continues to analyze and encourage safer vehicle design.
AI is now where electricity and cars were when they first appeared: it holds great promise and great peril. AI could accelerate drug development; it could also enable terrorists to create synthetic bioweapons.
But the choice between progress and security is a false one. Progress requires security. Businesses won’t adopt tech they can’t trust, and neither will the public. Nuclear power’s promise of abundant energy died for a generation after accidents like Three Mile Island and Chernobyl fueled public backlash and intense regulatory scrutiny. AI proponents shouldn’t dismiss the possibility of a similar catastrophe: an AI Three Mile Island.
Likewise, security drives progress. The ChatGPT boom was made possible in part by alignment techniques that make models more steerable and hence more useful.
We need institutions capable of modeling risks accurately and creating incentives for safe development. But given the breakneck pace of AI development, that’s easier said than done.
Government regulation will struggle to keep pace, either stifling innovation with overly broad restrictions or failing to address risks with rules that quickly become obsolete. Markets can adapt faster, but require actors with both the capability and incentive to accurately assess risks. Enter insurers, whose profitability depends on pricing risk accurately.
As history demonstrates, insurers can also reduce harm by spreading best practices, funding safety R&D, and conducting audits. In fact, these functions are often inextricable from the value proposition of commercial insurance: corporations often purchase insurance to access loss prevention services or signal product reliability.
You can see this dynamic emerging in narrow AI applications. For example, in late 2023, Microsoft announced its “Copilot Copyright Commitment,” effectively insuring customers against copyright violations from its AI tools. Within months, everyone from OpenAI to Adobe was offering copyright indemnification for their AI tools. These commitments quickly became a competitive requirement. Why? Because such a commitment sends a credible signal about a product’s quality and streamlines its adoption.
To offer similarly credible commitments, smaller AI vendors have bought insurance. This creates a direct link between technical safeguards and market access: in order to win enterprise contracts, vendors need insurance; in order to get insured, they must implement the safeguards insurers require.
These are promising signs, but a critical component is missing: sophisticated risk modeling for frontier AI systems. When insurers can’t accurately model a technology’s risks, they can’t effectively incentivize good behavior or help credibly signal product quality. Policymakers can help reduce the surface area for legal maneuvering by assigning liability more clearly, but at some point someone must model the technological risks. The Artificial Intelligence Underwriting Company (AIUC), along with a handful of other startups and non-profits, is working on exactly that. AIUC is developing the technical standards, audits, and risk models that insurers need to price AI risks accurately. Providing these allows us to help policyholders improve their security and convert that security into customer adoption.
We believe that steering AI development wisely is the challenge of a generation, and that leveraging market solutions is essential to meeting that challenge. Franklin solved Philadelphia’s fire crisis and nurtured its growth not through regulation but through market incentives: insurance that rewarded safer building practices and punished reckless ones. We think insurance can do the same for AI.
Cristian Trout is an AI policy researcher and former Winter Fellow at the Centre for the Governance of AI. He is consulting for the Artificial Intelligence Underwriting Company (AIUC).
Rajiv Dattani is a co-founder of AIUC. He serves on the board of METR where he was previously COO, and is a former McKinsey insurance partner.
Rune Kvist is CEO and co-founder of AIUC. He serves on the board of the Center for AI Safety, and is a former Anthropic product lead.