Nobel laureates and AI developers call for ‘red lines’ on AI
Experts are calling for international agreements as the United Nations meets, but they face an uphill battle turning words into action
A roll call of weighty names, including 10 Nobel Prize winners and two former heads of state, has called for international action to establish “red lines” for AI development by the end of 2026.
The more than 200 signatories, who also include senior employees at OpenAI, Google DeepMind and Anthropic, cite the dangers of engineered pandemics and mass unemployment, and note that many experts “warn that it will become increasingly difficult to exert meaningful human control [over AI systems] in the coming years.”
The statement, timed for the UN General Assembly this week, is a meaningful step towards building an international consensus on AI. But it is unlikely to move the needle on concrete governance, largely due to American opposition.
Along with the usual suspects, including “godfathers of AI” Geoffrey Hinton and Yoshua Bengio, signatories include Joseph Stiglitz, the economist; Juan Manuel Santos, former president of Colombia; Mary Robinson, former president of Ireland; and Enrico Letta, former prime minister of Italy. A wide range of other former government ministers, scientists, and diplomats (and, slightly incongruously, actor Stephen Fry) have also signed.
OpenAI co-founder Wojciech Zaremba and DeepMind principal scientist Ian Goodfellow are among those from OpenAI, Google DeepMind, and Anthropic who put their names to the statement, though the CEO of each company did not.
“Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world,” the signatories warn, arguing that “an international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks.” By the end of 2026 such red lines should be “operational, with robust enforcement mechanisms,” they say.
The statement does not outline what the red lines governing AI development should be. A separate statement from last year calls for a ban on autonomous replication, power-seeking behavior, autonomous cyberattacks, and sandbagging. Many signatories of Monday’s statement are also signatories of this earlier one — including senior Chinese scientists such as Ya-Qin Zhang, former president of Baidu, and Huang Tiejun, chairman of the Beijing Academy of Artificial Intelligence.
The new statement also comes on the heels of two recently announced UN AI initiatives: an international scientific panel, similar to the Intergovernmental Panel on Climate Change (IPCC), and a “global dialogue” on AI governance.
Moving from dialogues and statements to concrete action, however, is likely to be an uphill battle thanks to the lack of support from the US government. While the Trump administration’s AI Action Plan says the US “supports likeminded nations working together to encourage the development of AI in line with our shared values,” it goes on to say that “too many of these efforts have advocated for burdensome regulations, vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies.” Earlier this month, Sen. Ted Cruz said that one of Congress’s “pillars” for regulating AI should be to “counter excessive foreign regulation.”
Still, the new statement is evidence that more and more people are taking the potential capabilities — and risks — of AI seriously. Csaba Kőrösi, former president of the UN General Assembly, said in comments accompanying the statement: “Humanity in its long history has never met intelligence higher than ours. Within a few years, we will.”