Will AI safety become a mass movement?
Some AI safety activists think the community should borrow from the climate playbook and build broad public appeal — but not everyone agrees

Late last week, OpenAI’s San Francisco offices went into lockdown. Authorities and OpenAI security raised the alarm over a threat purportedly made by an activist, said to be a senior member of anti-superintelligence advocacy group Stop AI. Both Stop AI and PauseAI, another advocacy group, released statements condemning any form of violence. While details are still emerging about this specific incident, it has raised questions about the pitfalls of building an activism movement around AI safety.
Historically, the AI safety movement has worked mostly behind the scenes, taking an “inside game” approach to achieving change. Advocates of alignment can often be found in jobs at frontier labs, working on safety research. They might also occupy roles in government, where they can influence legislation, or at think tanks where they can lobby for caution.
This approach can seem insular. “Most advocacy has been insider-focused: expert-driven, slow-moving, and disconnected from broader public energy,” concluded a report published by Social Change Lab earlier this year.
In the past two years, however, some have begun to adopt strategies more similar to those favored by other causes, such as climate advocacy: attempting to build a mass movement. Through protests, hunger strikes, and highly visible campaigning, they’ve sought to turn AI safety from a niche concern into one that spans society.
As a result of their efforts, AI safety now finds itself at a crossroads: should it stay niche, or become a rallying cry?
The ‘inside game’ approach
In government, an insider-focused strategy has produced tangible results. The Biden administration issued an AI Executive Order in October 2023 that included a transparency regime, while the UK’s groundbreaking AI Security Institute successfully recruited top technical talent from leading labs to conduct model evaluations, spawning a network of similar national safety institutions across the globe. On the legislation front, the EU’s AI Act, which passed in 2024, incorporated red-teaming and transparency requirements championed by AI safety advocates working within regulatory bodies. Those successes have, however, proved fragile, with the Trump administration rescinding Biden’s EO, and the EU set to water down and delay implementation of the AI Act.
Within the frontier labs themselves, safety-focused researchers have successfully advocated for significant compute allocation toward alignment research. An industry culture of valuing safety also means researchers are quick to criticize labs they deem to have insufficient guardrails.
But there are concerns that the work of safety advocates could be used as “safetywashing” while labs continue full steam ahead on developing more and more powerful models.
Felix de Simone, an organising director at PauseAI US, who began his own career in environmental policy, told Transformer that one lesson he learned from climate campaigning is “we shouldn’t trust industry, nor should we allow it to set the narrative.”
“The climate change movement learned this decades ago, when industries like Exxon knowingly lied about the effects of fossil fuels,” he said over email. “But some people in the AI safety community still see certain AI companies as ‘good guys’ and others as ‘bad guys,’ even though all frontier AI companies are participants in the same race to the bottom that endangers us all.”
The game has also changed in recent years as AI has become a charged political topic, as well as a target for millions of dollars’ worth of lobbying. In August, a network of super PACs called Leading the Future launched with more than $100m in funding, backed by the likes of Andreessen Horowitz, Perplexity and OpenAI’s Greg Brockman. Its goal? To “support candidates aligned with the pro-AI agenda … and oppose those that do not”. Governments, meanwhile, are making huge bets on the economic benefits of the technology.
In the face of such interests, the case for mobilizing larger numbers of people to oppose rapid AI development begins to look more compelling for those worried about its implications.
Go big or go home
“Given the nature of AI developments (fast, unpredictable, wide-ranging), social movements may be particularly well-suited to respond,” Social Change Lab’s researchers wrote in their report. “They could be critical for putting the necessary pressure on tech companies and policy makers to proceed with the caution the public wants; movements could demand serious and multiple existing and emerging concerns to be properly addressed.”
Public sentiment on AI is cautiously optimistic, but majority opinion leans towards regulation. In its most recent biennial survey of the British public, the Ada Lovelace Institute found that 72% said laws and regulation would make them feel more comfortable with AI, up from 62% in the 2023 survey. Globally, skepticism about whether AI companies have fair and ethical practices is growing, according to Ipsos data.
Catalyzing that wariness into action and wider civic engagement, though, is a challenge.
Some commentators have argued that those concerned about the long-term risks of AI should take a page out of the green movement’s book. “Back in 1970 the correct move wasn’t coming up with a foolproof plan to solve climate change,” computer scientist Erik Hoel wrote in a 2023 Substack post. “Nor was it giving up if no one offered such a plan. The correct move was activism. AI safety advocates should therefore look to climate activists to see what’s effective. Which is basically panic, lobbying, and outrage.”
One of the successes of the climate movement has undoubtedly been its ability to rally a patchwork of groups behind a common goal. Tying immediate harms, like traffic pollution, to long-term risk has helped make global-scale problems tangible.
Cathy Rogers, a research consultant with Social Change Lab who co-wrote its report, tells Transformer that AI safety might be able to learn from the climate movement, since both are concerned with “things which are too terrifying to contemplate.”
“One of the key learnings, if you like, is about strategic and tactical diversity,” she explains. Some parts of the movement focus on changing public thinking on a grand scale, making concepts like a “climate emergency” mainstream. Other groups push for specific changes, using everything from in-person protest and local campaigns to becoming activist shareholders in energy companies or electing green candidates to public office.
The reason this hodgepodge works, she explains, is that there is agreement on an overall goal: to slow and reverse the level of heating. There are many disagreements within the movement, but this unified purpose makes it coherent enough to have mass appeal.
The AI risk landscape is similarly diverse, especially when expanded beyond existential risk to include issues like copyright, psychological impacts, the labour market and data privacy. Yet there is little cohesion between Hollywood writers going on strike over AI clauses, local communities protesting a data center and researchers producing reports on the singularity.
Will it work?
Rogers wonders aloud whether such a single goal could ever be agreed upon by the disparate parts of the AI safety and ethics landscape. She suggests it might be possible if different groups would agree to meet regularly and find a common aim. Many, however, are skeptical that this could happen.
In a Substack post in September arguing against the idea of building an AI safety movement, researcher and writer Anton Leicht wrote that the issue is ill-suited for a broad coalition approach. “I think this movement is likely to be captured by reductive overtones and ultimately muddy the policy conversation through untractable or misguided proposals,” he argued.
This is a common concern. Minh Nguyen, a former climate activist who switched to working on AI safety in 2021 and now works for Y Combinator-backed Hud Evals, told Transformer over email that an alliance between AI safety and broader anti-AI groups would “be overwhelmed by passionate people who have a really bad and possibly counterproductive understanding of the situation.” He cited the contentious topic of data center water use as one example where the two sides have trouble seeing eye to eye. Without that shared reality, it might be difficult to agree on priorities.
Rogers, though, thinks many on the technical and policy side underestimate the general public’s ability to comprehend complex topics. “There’s some people who are very rude and dismissive, frankly, about public involvement,” she says. “And then there are people, I think, who say don’t get the public involved for a slightly more reasonable reason, which is that they’re worried that it will become really polarized.”
This is a concern for PauseAI US too. De Simone says that the organisation is a big tent that welcomes anyone who is concerned about AI for a variety of reasons, provided they agree with the group’s proposed solutions. “An AGI pause is broadly popular with the US public, and it’s our job to tap into that popularity and channel it into action,” he said over email.
However, he emphasized the importance of bipartisan messaging, noting that when causes like the green movement grow in size, they can run into hot-button political topics.
“Unfortunately, climate change has become politically polarized in the US,” he said. “We need to avoid this outcome at all costs.”
The radical line
Another question for a broadening AI safety movement would be how to handle its most hardline members. Leicht has written that a popular strategy “exposes reasonable AI safety organisations to association with perceived crackpot protesters.”
It should be noted that the vast majority of AI activists condemn violence. Several groups, including PauseAI, do not allow members to engage in any illegal activity.
However, successful social movements throughout history, from women’s suffrage to the modern-day climate movement, have always contained those willing to engage in a more confrontational approach. From roadblocks and sit-ins all the way to the sabotage of equipment and buildings, activists on the so-called “radical flank” might cross lines that others find distasteful.
“It can be positive, where it raises the salience of a topic and all boats rise,” says Rogers. “Or it can be negative in that, particularly if people do violent things, it can bring the whole movement down because suddenly the movement feels tarnished.”
She cites the example of the group Insulate Britain. While its tactics of blocking major roads were unpopular with the public, Social Change Lab’s research credits the protests with massively raising the profile of home insulation as an issue. While Insulate Britain itself declared its direct action had not achieved its goals, more moderate groups working in the same area were able to gain traction, leading to real policy changes.
Taking this kind of action is still relatively rare within the AI safety movement. Talking to the For Humanity podcast last year, PauseAI US executive director Holly Elmore said the group’s in-person protests are often very small because people who truly understand the issue of AI safety “tend to not like protesting.”
This is slowly changing. Earlier this year, a handful of activists across the Bay Area, the UK and India began hunger strikes to protest the development of superintelligent AI. The hunger strike has a long history as a form of non-violent resistance; in recent years, individuals in Australia, India, Germany and the UK have employed the tactic to press governments to hear their concerns about the climate.
But instead of calling for government action, the AI hunger strikers aimed their protests directly at the frontier labs.
Michael Trazzi, who went on a hunger strike outside Google DeepMind’s offices in London for a week, tells Transformer he had been inspired by the example of Mahatma Gandhi, who undertook several fasts as part of India’s freedom movement. Trazzi had also been frustrated with a small, brief PauseAI protest he had attended, and wanted to do something harder to ignore. “If you actually want to pressure a company, this [a one-day protest] is not how you pressure them. You pressure them by getting some immediate attention, by coming every day, by actually being annoying.”
A fracture point looks likely to be whether to engage in non-violent but technically illegal activity. Being arrested for civil disobedience can even be a form of campaigning, employed by the civil rights movement, suffragettes, and most recently climate activists to gain attention.
While most groups currently avoid breaking the law, the nature of broadening the appeal of the movement means an inevitable loss of control over what individuals are willing to do in the name of the cause.
Learning lessons
Despite all these concerns, people across the spectrum of AI safety and ethics groups are still plugging away, building support for their causes. And many are optimistic that they can achieve change.
Trazzi sees evidence that there is already public buy-in, both in polls supporting AI regulation and in the attention paid to protests like his in the media and online. “So it’s not a question of convincing them as much as telling them [...] you have a voice in our democracy to let your representatives know that something is happening.”
Joseph Miller, who leads PauseAI UK, has been thinking about the lessons of another movement lately, one that achieved its main aims: abolitionism in Britain.
“The abolitionist movement was very successful, very quickly, in changing the minds of the public,” he tells Transformer. “So I do think that should be a hopeful lesson for us.”
Abolition was not a fast process: two decades passed between the formation of the Committee for the Abolition of the Slave Trade in 1787 and the passing of the 1807 Act that prohibited the slave trade. It took a further three decades for slavery to be fully abolished. For those who anticipate AGI as an imminent threat, this would be too slow.
Yet while the wrongs AI safety activists want to right are fundamentally different, the abolition movement could still provide a model for insider maneuvers coexisting with appeals to the public. Instigated by a small number of people, among them former slaves, parliamentarians, lawyers and religious leaders, abolitionists were able to combine a campaign targeting lawmakers with growing popular support.
AI safety might seem like a niche concern, up against very deep structural forces, but anti-slavery campaigners faced similar barriers, says Miller. “It was deeply embedded in the economy. It was extremely costly to abolish it. And yet they did it anyway.”