Doing AI safety policy when governments aren’t interested
Opinion: Jess Whittlestone argues that there are still ways to keep AI safety policy on the table even when governments don’t prioritize it
In recent years, policymaker interest in AI safety has waxed and waned dramatically. Perhaps the best illustration: the shift from what was called the UK’s “AI Safety Summit” in November 2023, to the Paris “AI Action Summit” in early 2025 — ostensibly the same series of events, just with a radical change of focus. At the 2025 event, US vice president JD Vance explicitly stated: “I’m not here to talk about AI safety.”
There are many possible explanations for this shift — changes in government, shifting urgency around competing priorities, an increased sense of competitiveness around AI progress — and it’s inevitable that political interests will vary. But how do we do a good job of AI safety policy in a world that’s just not that excited about AI safety policy?
There are three things we can and should do now to ensure that AI safety remains on the agenda.
The first is to actively work on increasing political interest in AI safety policy, by making better arguments for its importance and promoting those arguments more widely. This work might target decision-makers directly, or it might target a wider public audience, in the hope that greater public concern about AI risks would put pressure on decision-makers indirectly. In either case, the assumption is that there are important groups who would care more about AI safety if they had better or more information. This is a reasonable assumption, but it has limits.
An important second path: accept interest in AI safety policy as fixed, and develop policy proposals that both mitigate AI risks and genuinely serve other political or policy goals. The word “genuinely” is important: integrity in policy advocacy is vital, and we should not attempt to “shoehorn in” AI safety policies under the false pretense of achieving some other goal. But there are avenues of genuine synergy between AI safety and other political priorities that are worth exploring. Here are three:
Support the development of (AI-enabled) technologies that could themselves help reduce risks from AI (sometimes called “defensive accelerationism,” or def/acc). One example: using AI to support vaccine development, which could in turn reduce risks from AI-engineered pathogens. This approach is likely to be attractive to policymakers focused on economic growth and innovation.
Improve governments’ ability to monitor, understand, and respond to the development of AI capabilities, such as the work being done by the UK AISI and similar institutes worldwide. While this work is often motivated by AI safety, it can also help governments with an innovation agenda by making it easier to identify technology trends and applications. It is also likely to appeal from a national security perspective, since it improves a country’s ability to understand and defend against outside threats from AI.
Improve crisis preparedness. No matter your take on extreme risks from AI, it seems fairly clear that failure or misuse of AI could soon precipitate an acute incident that would be incredibly costly for governments to deal with. For example, AI could be used to greatly increase the frequency and severity of cyberattacks on critical national infrastructure. It is in governments’ interests to prepare for such threats even within the next year, and doing so would also build capacity to identify early warning signs of more extreme threats.
A final path is to embrace the inevitable ups and downs of political interest and prepare for the moment when enthusiasm returns. Political interest in AI safety may well rise again in the coming months, whether due to changes within governments or factors outside them. One possibility is that some kind of AI-related ‘incident’, whether a stunning new capability or a high-profile AI failure, could lead to an uptick in concern. We can prepare policies and recommendations for that moment, so they’re ready to go. This could look like, for example, developing blueprints for regulation that isn’t politically tractable today but might be in a world with more political motivation to reduce AI risks.
Ideally, we’d work on all three of these paths at once: making what progress we can with policy change today given the reality of political interests, preparing policies for a world more concerned with AI safety, and at the same time making the arguments needed to get us into that world as quickly as possible. This would be a long shot for any one individual or organisation. But with coordination across different actors with similar goals, there’s a lot we can achieve.
Jess Whittlestone is senior advisor for AI Policy at the Centre for Long-Term Resilience, a UK-based think tank focused on societal resilience to extreme risks.