Why we need a moratorium on superintelligence research
Opinion: As the AI community gathers in India, Lord Hunt of Kings Heath argues that the UK must spearhead a pause on the development of the world’s most advanced AI models.

The clarion call is loud and clear. Late last year, a broad coalition of AI godfathers, Nobel Prize winners, policymakers, faith leaders, public figures, and organizations such as ControlAI came together to demand a prohibition on the development of artificial superintelligence (ASI). They argued that work should be paused until there is scientific consensus that it can be done safely and controllably, with strong public buy-in.
As policymakers convene at the AI Impact Summit in New Delhi this week, they should take note. Calls for a prohibition on ASI for the foreseeable future cannot fall on deaf ears while nations desperately compete to secure investment from the AI companies themselves.
Anthropic CEO Dario Amodei, one of the most powerful entrepreneurs in the AI industry, was very clear in his recent essay that “humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it.” He outlined the risks that could arise with the advent of what he calls “powerful AI.” This includes systems “much more capable than any Nobel Prize winner, statesman or technologist,” which, he argues, could end up “in the worst case even destroying all life on earth.”
In the UK, one of the key concerns of MI5’s Director General is the potential future risk from non-human, autonomous AI systems that may evade human oversight and control. A December 2025 report from the UK’s AI Security Institute echoed this, warning that AI systems have the potential to behave in unintended ways, and that in a worst-case scenario, this could lead to “catastrophic, irreversible loss of control over advanced AI systems.” The latest International AI Safety Report reinforces this concern, warning of scenarios where AI systems operate outside anyone’s control with no clear path to regaining it.
We would all hope that AI companies would proceed carefully. However, it appears that caution has been thrown to the wind. Instead of taking a measured approach, they are racing to develop superintelligence, with each company feeling compelled to speed ahead because their competitors are doing the same.
Just a few weeks ago, at the World Economic Forum’s annual meeting in Davos, Demis Hassabis, CEO of UK-based Google DeepMind, said he would advocate for a pause in AI development if other companies and countries would follow suit. Unfortunately, there are no signs that companies themselves can or will do this.
With leading voices in the industry and in security highlighting such serious concerns, it is clear that governments need to act to build safeguards into ASI development so that it proceeds only in a safe and controllable manner.
So, what is the prospect of international agreement on a moratorium on the development of ASI in today’s uncertain world?
While there is a clear incentive for the major military powers to leverage superintelligence to gain a decisive military advantage, an arms race in superintelligent AI poses risks for the very countries leading the charge. Governments could put their own national security at risk by losing control of military systems in which AI technology is increasingly embedded. No nation has an interest in this outcome.
Most importantly, the potential consequences worldwide are so threatening that we cannot abdicate our responsibility to seek international agreement to mitigate these risks. International agreement is not only possible but necessary.
I will not deny that these are challenging times to secure global agreements, amidst concerns that the international rules-based order — however imperfect — is breaking down. But international agreements have been reached around contentious issues in the past. In the 1980s, when the Cold War threatened nuclear annihilation, nations agreed to a landmark nuclear de-escalation treaty. In the 1990s, the Chemical Weapons Convention was drafted and entered into force; it has been ratified by 98% of the world’s nations.
While one can point to their imperfections, these agreements have been a force for good and have demonstrably made the world safer.
Across the world, a coalition of AI experts, organizations like ControlAI, and citizens is taking shape and demanding a prohibition on superintelligence for the foreseeable future.
The UK is uniquely positioned to lead this effort and has previously shown leadership in regulating powerful technologies before they fully materialised. It did so with in vitro fertilisation standards in the 1990s. It has pioneered the AI Safety Summits and established a world-class AI Security Institute. And it has a long tradition of helping to negotiate international treaties: It played a central role in the Treaty on the Non-Proliferation of Nuclear Weapons and the Chemical Weapons Convention, as well as in the more recent Ottawa Treaty on the prohibition of land mines and the Arms Trade Treaty on weapon sales.
We do not have to sacrifice growth by imposing sweeping regulations. We can let beneficial applications flourish while applying targeted regulation to the most powerful models, those posing the greatest risks.
The global momentum to address the risks from superintelligence is building. The UK needs to work with international partners to shape and guide that momentum.
Rt Hon Lord Hunt of Kings Heath PC OBE is a former health administrator and a Labour Co-operative member of the House of Lords. He is a former Deputy Leader of the House of Lords and served as a Minister in the Governments of Tony Blair, Gordon Brown and Keir Starmer.

I agree with Lord Hunt that we must have a moratorium on the development of all future AI capabilities. Otherwise, we will find ourselves in a world beyond our present chaos: a world that does not even understand the technology it already uses, but writ large. Failures such as Horizon, the Co-op and Jaguar Land Rover show that we struggle to understand the systems we already depend on; how can we roll out far more powerful ones without developing guardrails to avoid catastrophic runaway?
And we need the tools to build those guardrails, as well as practices and procedures to examine and test them.
I've been working on AI since its first commercial iteration (which we called expert systems) in the '70s, and we in the AI world have been warning that this would happen.