A brief guide to the groups protesting over AI
The differences between Stop AI, PauseAI, ControlAI, and more
Activism opposing various aspects of AI might, from the outside, all look the same. But dive deeper, and you’ll soon discover internecine conflicts, surprising alliances, and vastly different tactical approaches. In the wake of alleged violent threats made towards OpenAI employees last week, the spotlight has turned to the activist groups opposing the current course of AI. Here’s your guide to who’s who.
Stop AI
The most radical of the groups — and the one reportedly linked to last week’s OpenAI incident — is Stop AI. Stop AI was founded in 2024 by Sam Kirchner and Guido Reichstadter, growing out of an informal “No AGI” hashtag on social media.
The group is best known for “civil disobedience”: in February, three Stop AI protesters, including Reichstadter, were arrested after they blocked the doors of OpenAI’s offices. Reichstadter was also one of three people to go on a hunger strike outside AI company offices in September. And last month, someone from San Francisco’s public defender’s office jumped on stage at a Sam Altman event to serve him a subpoena to appear at the trial of the Stop AI protesters charged over their actions outside his company’s HQ.
Notably, Stop AI has taken an extremely adversarial approach to the rest of the AI safety ecosystem. According to PauseAI, the group was founded because “PauseAI leadership did not allow the eventual StopAI founders, Sam Kirchner and Guido Reichstadter, to do illegal direct actions.”
In February, a Stop AI protester jumped on stage at the Effective Altruism Global conference, calling speaker Neil Buddy Shah (CEO of the Clinton Health Access Initiative and chair of Anthropic’s Long-Term Benefit Trust) a “fucking murderer,” and the audience a “bunch of pussy ass bitches.” In April, a protester started shouting at an event with Yoshua Bengio. On social media, Stop AI has also criticized Eliezer Yudkowsky, despite his prominent role in campaigning against the race to build AGI.
In fact, one of the most prominent supporters of Stop AI is also one of the most prominent critics of AI safety, as well as the effective altruism movement that provides much of its funding: Emile Torres. Torres appeared on the first episode of Stop AI’s podcast, posted a photo wearing a “Stop AI” t-shirt, and until recently displayed the 🛑 emoji — widely used among Stop AI supporters — in their X handle. Torres has previously called effective altruism “sort of a cult,” and earlier this year warned that “one shouldn’t be surprised if members of the community start talking about the use of force, military strikes, or targeted killings to reduce the supposed ‘existential risk’ of AGI,” mostly blaming Eliezer Yudkowsky’s writing on the subject.
Last Friday, Stop AI made headlines after OpenAI reportedly locked down its offices in response to threats from an individual linked to the group. Stop AI released a statement that day saying that a senior member had allegedly made statements “renouncing nonviolence” and assaulted another Stop AI member who had stopped him from trying to buy weapons, leading to fears that he would attack OpenAI staff. The organization denounced his actions (as did Torres) and said it had informed authorities and AI companies of potential danger.
PauseAI
The other major group protesting over the race to AGI, PauseAI, is a network of local organizations, founded in 2023 by Joep Meindertsma, a Dutch entrepreneur who leads its international coordinating group.
The most prominent local groups are those in the US, led by former animal welfare activist Holly Elmore, and in the UK, led by Joseph Miller, an AI safety researcher currently doing a PhD at Oxford University. Both Elmore and Miller have backgrounds in the “traditional” AI safety and effective altruism communities, though Elmore has become increasingly critical of them.
As the name suggests, the group calls for a “temporary pause on the training of the most powerful general AI systems,” an idea inspired by an open letter from the Future of Life Institute in March 2023. (Disclosure: FLI funds Transformer’s publisher Tarbell.)
PauseAI has primarily organized small protests outside AI companies’ offices, including one last year co-organized with the person Stop AI named as having made threats towards AI companies last week.
In a statement after the recent OpenAI incident, PauseAI US said that the organization “does not work with StopAI and has not since StopAI was founded,” and emphasized its commitment to nonviolence and following the law.
ControlAI
The most professionalized of the activist groups focused on AI, ControlAI is an offshoot of Conjecture, an AI startup run by Connor Leahy (who previously cofounded EleutherAI). Set up in 2023, the group has produced slick advertising campaigns, memorably hiring a blimp to fly over the AI Safety Summit in 2023.
Unlike Stop AI or PauseAI, ControlAI doesn’t organize demonstrations, instead pursuing a more “inside game” strategy; its most notable achievement has been getting UK MPs and Lords to sign a statement on AI’s extinction risk. But like the other groups, it is extremely critical of the traditional “AI safety” ecosystem: the organization’s explanation of its “direct institutional plan” spends over 500 words criticizing Open Philanthropy (now known as Coefficient Giving), the main funder in the space. (Disclosure: Coefficient Giving is Transformer’s main funder.)
The others
Other groups are moving into this space, too. David Krueger, an assistant professor at the University of Montreal, recently launched Evitable, a new organization to “inform and organize the public to confront societal-scale risks of AI.”
Doing this work comes with risks, however. The Center for AI Safety, which was behind the Sam Altman-backed statement on AI risk in May 2023, earlier this year hired John Sherman, host of the popular For Humanity Podcast, as director of public engagement — before swiftly “parting ways” with him after it transpired Sherman had previously said that the “proper reaction” to AI risks was to burn down AI labs. This week, Sherman launched a new organization focused on “AI extinction risk communications for the general public.”
Environmentally-driven opposition to data centers has also grown in recent months: a report from Data Center Watch found a “sharp escalation” in activism against data centers in Q2 2025, with local activists increasingly coordinating.
November 28: This article was updated to clarify that the groups discussed are not all opposed to all forms of AI.

PauseAI is not anti-AI, and it seems highly likely that most (if not all) of the other groups would disagree with this label, too. PauseAI is an AI regulation and AI safety activist group. Labeling us as anti-AI misrepresents what we stand for. Please change your title to something more fitting.
From our FAQ:
> Aren’t you just scared of changes and new technology?
> You might be surprised that most people in PauseAI consider themselves techno-optimists. Many of them are involved in AI development, are gadget lovers, and have mostly been very excited about the future. Particularly many of them have been excited about the potential of AI to help humanity. That’s why for many of them the sad realization that AI might be an existential risk was a very difficult one to internalize.
Pause AI is not anti-AI, we are anti-AGI, this is COMPLETELY different and it saddens me greatly to read such a misleading statement in a newsletter that I respect so much. We are actively fighting against people labelling us "Anti-AI" (usually big tech lobbyists). I am sure that this was an honest mistake, but I think we should have a chat to clear the record going forward Shakeel.