AI doesn’t need to be general to be dangerous
There’s more to AI safety than the AGI debate
It’s little surprise that AI safety conversations tend to revolve around the idea of artificial general intelligence — a future system that can do almost everything humans can, posing immense risks in the process. It’s what AI companies are racing to build, and the prospect of achieving it justifies huge valuations and investment to match. And it’s what unlocks many of the most terrifying scenarios of AI causing catastrophic harm.
Yet that focus can lead people to ignore some very real and likely much more immediate threats that even AGI skeptics should be deeply concerned about.
It doesn’t help that there is little agreement on what exactly would qualify as “AGI” or how to decide when it has actually been created. This makes it easy to dismiss AI safety concerns on the grounds that the field can’t adequately define what it’s worried about. And even if a definition were agreed upon, AGI represents an inherently high bar — one that can feel distant enough to ignore. The combined effect is that many people tune out of the discussion altogether, treating catastrophic AI risks as synonymous with a far-off, theoretical threshold.
But while some risks, such as loss of control, do rely on the existence of AGI, many serious AI risks do not. Narrow AI capabilities could still pose significant threats, which could materialize long before anything resembling general intelligence. AI, in other words, does not need to be general to be dangerous.
The obvious example is biorisks. A motivated, ill-intentioned expert does not need AGI to develop a novel bioweapon — only a model with powerful biological capabilities (such as the ones many organizations are building). A general intelligence is also not required to carry out damaging cyberattacks at scale, or to help non-experts build a dirty bomb. Narrow expertise in a dangerous domain is enough to cause disaster. A model that seems far from intelligent, let alone as intelligent as a human, can nonetheless contain and structure knowledge in ways that are extremely dangerous. A toddler with a gun can still be deadly.
And the risks extend beyond malicious actors wielding dangerous tools. Even seemingly benign capabilities can have serious consequences, some of which we are already seeing emerge. Take AI’s exceptional language abilities, which have upended the job market for translators. Or the extreme sycophancy of GPT-4o, arguably an example of powerful emotional manipulation and persuasion capabilities — capabilities that may have played a significant role in several suicides. These impacts are not yet at the scale that tends to dominate debates about catastrophic risks from AI, and yet there are plenty of scenarios in which the damage done by models available today could scale up.
To date, AI progress has been jagged rather than uniform: models achieve superhuman performance in some areas while lagging well behind humans in others. If this trajectory continues, we should expect the first societal-scale risks from AI to emerge not from AGI, but from systems that are superhuman in one dangerous area and mediocre everywhere else.
Dismissing these risks because of hype around AGI misses the many signals that the AI systems we have now, while falling well short of AGI, are plenty capable of real harm — and that if capabilities improve in certain narrow domains, those harms could grow ever more menacing.
None of this means we should stop worrying about AGI, which poses unique and terrifying risks of its own. But it does mean that we need to look past an AI discourse often dominated by discussions of AGI timelines. As Helen Toner noted in a recent talk, we might be better off thinking instead about ‘timelines till things get crazy.’ That reframing captures what we really care about: the impacts AI will have on society. And thanks to the risks posed by even narrow AI systems, those timelines could be very short indeed.



