A false choice risks undermining action on autonomous weapons
Opinion: Alexander Blanchard of the Stockholm International Peace Research Institute argues that nations must embrace nuance if they’re to effectively govern the use of AI-enabled autonomous weapons

Debate surrounding international regulatory efforts on autonomous weapon systems is at an impasse. Policymakers are deadlocked over whether to negotiate a legally binding treaty at the United Nations prohibiting certain uses of these systems.
This year, states will have to grapple with the complexities of this issue as they decide at the UN whether to continue current governance efforts on these weapons. So far, states have been working to identify their concerns about autonomous weapons systems and to lay the groundwork for what a possible treaty might look like. A vote at the UN will see them decide whether to continue along these lines, move to negotiating a treaty, or abandon these efforts altogether. If they do not continue, they must return to the drawing board to find a new approach to governing these controversial weapons. Yet with autonomous weapons inflicting daily battlefield casualties in the Ukraine-Russia war, the international community must urgently move beyond a ‘ban or bust’ mentality around regulation and toward nuanced debate if it is to make progress.
Autonomous weapons systems — often colloquially referred to as ‘killer robots’ — use artificial intelligence to identify and attack targets without the need for human intervention. They have considerable military appeal, potentially increasing operational reach, persistence and speed. Dispensing with a human operator means autonomous weapons can, in theory, operate without a constant communication link, extending operations to environments where the signal between user and system is lost or where the risk to military personnel is deemed too high. Autonomous weapons may also help plug frontline personnel shortages and, if built cheaply, are highly expendable: their loss on the battlefield doesn’t carry the same tactical and financial costs as high-end military technologies such as fighter jets.
But autonomous weapons raise serious humanitarian, legal and ethical concerns. The UN Secretary-General, António Guterres, has said that “the autonomous targeting of humans by machines is a moral line that must not be crossed.” The International Committee of the Red Cross has raised concerns about the challenges autonomous weapons present for compliance with the laws of armed conflict. A particular concern is the danger autonomous weapons pose to civilians and others protected under international law: modern battlefields are complex environments, and the technology itself is unlikely to be capable of reliably distinguishing civilians from soldiers.
Autonomous weapons are, accordingly, the subject of lively debate at the UN, where states are considering appropriate regulatory responses. However, over a decade of discussion there has resulted in limited progress. In 2019, states managed to agree on a set of guiding principles as the basis of their work on autonomous weapon systems. These affirmed, amongst other things, that the law of armed conflict continues to apply and that human responsibility for decisions about the use of these weapons must be retained. But the current landscape is one of institutional complexity, political sensitivity and growing urgency. For instance, states have yet to fully agree on how to define these weapons.
There are multiple reasons for this. Policymakers have difficulty conceptualizing their concerns about autonomous weapons, and the geopolitical environment has deteriorated, with distrust amongst states rife. But the impasse is also down to a dichotomy in the debate surrounding these efforts: the assumption that the only choice facing states is whether or not to negotiate a legally binding treaty.
Recent research conducted by my colleague and me at the Stockholm International Peace Research Institute shows, however, that this dichotomy is more apparent than real. Policymakers around the world are weighing a range of options for governing autonomous weapons. These include different scopes of obligation: how restrictive regulation might be, and whether it covers not just humanitarian harms but also human rights and security concerns. They also include the form regulation might take, such as whether binding treaties, political commitments or ongoing dialogue are most effective in shaping state behaviour, protecting civilians and ensuring accountability.
There is also the question of whether regulation on autonomous weapons should be pursued wholesale or incrementally. For some policymakers, development of new norms is best achieved through wholesale change. Efforts that are overly modest or narrow are seen as likely to foreclose future opportunities for meaningful regulation. Others favor an incremental approach, viewing current efforts at the UN as one step in a longer process of norm-building. For these policymakers, even outcomes that aren’t legally binding — such as political declarations, clarifications, or guidance on how existing international law applies to autonomous weapons — are still valuable because they can influence behaviour, create shared understanding and establish a foundation for possible future international legal development. In short, in our current geopolitical context, something might be better than nothing.
In other words, a rich and nuanced set of considerations shapes states’ ambitions and how they conceive of success in regulating these systems. The trouble is that, at the multilateral level, these nuances are rarely reflected — whether due to procedural limitations at the UN (such as time limits on producing statements), the sensitivities and confidentiality of national policymaking, or the deliberate omissions one expects from a negotiating process. Instead, a ‘ban or bust’ discourse drives a narrow idea of what success looks like. It thereby masks two possible realities: on the one hand, states may have more points of agreement than they realize, and so may be able to do more to progress the policy process than is apparent; on the other, with a wider range of factors at play than first thought, it may be harder for states to coalesce around a way forward. Only by moving beyond a polarized discourse will it become clear which of these is true.
This year marks an inflection point for international regulatory efforts on autonomous weapons. At the end of 2026, states will vote on whether to continue current efforts at the UN to identify possible elements of regulation. If they choose not to move forward with existing efforts, they will need either to develop new procedures within the UN or to explore international regulation outside the organisation. Until then, states should aim to be as transparent as possible about their views on the policy process. More openness about their respective positions could facilitate coordination and aid the development of regulation.
The Ukraine-Russia war shows that autonomous weapons are no longer science fiction but a battlefield reality. The need for states to take a coherent approach to regulating these systems is more urgent than ever.
Alexander Blanchard is a Senior Researcher in the Governance of Artificial Intelligence Programme at the Stockholm International Peace Research Institute (SIPRI). He was previously the Dstl Digital Ethics Fellow at the Alan Turing Institute, London.