Yoshua Bengio: ‘The ball is in policymakers’ hands’
The 2026 International AI Safety Report, which Bengio chairs, describes accelerating capabilities and growing risks. But mitigation approaches aren’t keeping up
Yoshua Bengio would like you to know that AI risks once considered fantastical are quickly becoming reality.
“There were a number of concerns that were only theoretical until this year,” he told Transformer ahead of today’s launch of the 2026 International AI Safety Report. In the twelve months since the first of these reports was published, he says, some frontier models have shown several worrying new abilities. Among them are the early signs of deception, cheating and situational awareness that Transformer has covered in detail. “We can’t be in total denial about those risks, given that we’re starting to see empirical evidence.”
The new safety report, based on contributions from 100 experts, runs to 220 pages and describes the full range of risks that general-purpose AI presents, from deepfakes and manipulation to job loss and AI psychosis. In particular, it emphasizes that advances in AI’s scientific capabilities have heightened the threat of new biological weapons and describes how AI systems are increasingly being used in real-life cyberattacks. It also warns that pre-deployment safety testing is becoming harder because models are increasingly aware that they are being studied.
While the report notes that industry commitments to safety have expanded over the past year, Bengio says that efforts to mitigate risk are lagging. He points to the late-2025 use of Claude Code in cyberattacks, allegedly by a Chinese state-sponsored group, as an example: the capability of LLMs to aid hackers has increased far faster than our ability to detect and block such attacks, he said.
“Unfortunately, the pace of advances is still much greater than the pace of [progress in] how we can manage those risks and mitigate them,” he said. “And that, I think, puts the ball in the hands of the policymakers.”
The report doesn’t provide specific policy recommendations, but the authors hope it will spark serious conversations among policymakers by raising awareness of the issues. Bengio expects many of those conversations to take place at the India AI Impact Summit 2026 in February. He isn’t holding his breath for big announcements, but is hopeful that progress will be made. “I don’t expect actual international treaties and so on to emerge at that point,” he said. “But international coordination can be informal … and these events, like the India summit, they really help that kind of coordination to happen.”
Speaking not in his role as chair of the report but as an AI researcher, Bengio said that as more evidence of AI’s potential risks accumulates, he becomes more optimistic that nations will begin to work together. “It is in the rational interest of various countries to make sure we end up with an international agreement,” he said. “You want the other guy to follow some rules, and vice versa, right? That’s exactly what has happened with the management of nuclear risks.”
Even so, he worries that some facets of AI safety continue to be overlooked. In particular, he points to how AI could be used to create or preserve monopolies or oligopolies, or how politicians might use the technology to tighten their hold on power over their political opponents. “These kinds of power issues [don’t] get as much attention from the media and people in general as [they] deserve,” he said.
The first report was published in January 2025, and the team behind it released an interim update in October 2025. In an opinion piece for Transformer about the update, its authors argued that the field was “advancing far too fast for a single annual report to capture the pace of change.”
Given that pace, and the slow progress being made on mitigating risk, does Bengio think the alignment problem can be solved before we reach transformative AI capabilities? “I really don’t know,” he said, speaking in his capacity as a researcher. “I’m not sufficiently confident that I could just retire and let others do it. I’m putting all my energy into doing this, and doing it fast enough.”