AI is persuasive, but that’s not the real problem for democracy
Opinion: Felix M. Simon argues that AI is unlikely to significantly shape election results in the near future, but warns that it could damage democracy through a steady erosion of institutional trust.

There is a nightmare scenario for democracies around the world in which sophisticated artificial intelligence becomes so adept at persuading humans that it skews election results. Bombarded by compelling, targeted adverts or sophisticated mis- and disinformation crafted by malicious actors, the theory goes, an unwitting public will be cajoled into entirely rethinking their votes.
That scenario isn’t about to take hold. It isn’t even close.
The concept of mass persuasion via digital technology isn’t new. The argument first took meaningful shape with microtargeting, in which personal data is analyzed to infer an individual’s demographic profile or interests so that they can be sent personalized messages. It gripped the public consciousness following the scandal around the data analytics company Cambridge Analytica and its role in the 2016 US presidential election and the EU referendum in the United Kingdom the same year. Even at the time, critical voices poured cold water on claims that the company had helped sway these political events, arguing that its microtargeting campaigns were little more than marketing hype. Research in the years since has largely debunked claims of microtargeting wizardry.
Yet the myth of the outsized power of microtargeting refuses to die. Instead, it has found a new lease of life with AI. That the most advanced AI models are persuasive — that is, they can shape or change people’s opinions on a topic — has become clear over recent months, thanks to the careful work of researchers around the world. Various studies have shown that AI chatbots can reduce belief in conspiracy theories, reduce concerns about HPV vaccination, or increase pro-climate attitudes — impressive achievements considering that just five years ago these systems didn’t even exist.
Ironically, this persuasiveness seems to have little to do with microtargeting. A recent landmark study by the UK’s AI Security Institute (AISI) found that conversational LLMs are persuasive not because they are good at personalizing messages (although personalization helps a little) but mostly thanks to their ability to craft information-dense and ultimately compelling arguments. This shouldn’t come as a surprise: people are more easily persuaded when they are exposed to strong arguments. A wrinkle in the AISI study, though, is that while personalization may matter little compared with other factors, beyond a certain point factual accuracy doesn’t seem to matter much either. Some of the most persuasive models in the experiments were among the least accurate, suggesting that as the AI systems in the study became better at persuasion, the average quality of the information they provided decreased.
So could this finally give bad actors a leg up to achieve harmful political goals such as swinging elections? Could it be that highly compelling AI-created content can be used to erode the fabric of democracy without the need to even consider microtargeting?
Not quite. Overall, persuasion remains difficult, especially when it comes to deeply held political beliefs. That’s why AI’s impact on elections, as my colleague Sacha Altay and I recently argued, will likely remain limited. The reasons are plentiful: political views are notoriously difficult to sway, and people show a generalized skepticism towards political messaging; persuasive messages still have to reach people in over-saturated, competitive information environments where attention is in short supply; and the edge that AI systems provide in creating misinformation is limited — to name just a few.
The evidence from the AISI study is also based on experiments in which participants were paid to interact with LLMs. It remains unclear whether people would engage with LLMs in the same way outside these experimental settings, and whether the results would be as strong. In addition, people will find reasons to discount information they disagree with and to limit their exposure to it. They already do so in a range of other contexts, including when information comes from politicians or news media they dislike or distrust. For better or worse, the same will likely happen with LLMs.
What’s perhaps more worrying is not whether powerful AI systems can be used to manipulate elections, but how their more general use — by governments and politicians, but also by social media platforms and the news media — could affect citizens and shape their relationship with, and trust in, each of these institutions as well as each other.
In a world of already low or declining trust in political systems, institutions, governments, and the news media in many countries, the indiscriminate and irresponsible use of AI systems — even in an attempt to make people’s lives better — could end up being more detrimental to democratic life than systems employed with the explicit aim of shaping election results. Public sentiment towards the use of AI by the news media, governments, and political parties is already negative in many countries, and people generally expect AI to make these parts of society worse.
Quite apart from how well these pillars of democracy actually work, it is vital, not just for elections but for democratic life more broadly, that people can — and do — trust that they at least function. If people come to believe that the news media and the work of institutions are getting worse because of AI, or that elections are no longer free and fair because politicians use AI to meddle with them — even if this is barely happening, or not happening at all — then these foundations of democracy could become even more brittle.
Ultimately, it may not be AI’s persuasive abilities that undermine democracy, but simply its use by our institutions. And the elephant in the room has nothing to do with persuasive AI at all. We now live in a world where democratically elected leaders with authoritarian instincts can seemingly act with impunity — with the acquiescence or even support of business and media elites — and violate the rule of law, institutional checks and balances, freedom of speech, and a long list of other protections that are meant to guarantee a healthy democracy. The persuasive power of AI is frankly the least of anyone’s problems.
Dr Felix M. Simon is the Research Fellow in AI and News at the Reuters Institute for the Study of Journalism and a Research Associate at the Oxford Internet Institute, both at the University of Oxford.