Feb. 5, 2026
Here are five primary dangers from political AI chatbots, VCU expert Jason Ross Arnold says
The journals Nature and Science recently published studies finding that AI chatbots can persuade voters to change their political views, though the facts they used weren't always accurate.
At Virginia Commonwealth University, Jason Ross Arnold, Ph.D., professor and chair of the Department of Political Science in the College of Humanities and Sciences, has studied disinformation, public ignorance and governance of artificial intelligence.
VCU News asked him for some insight.
You’ve identified a handful of primary dangers tied to political chatbots. What’s the one danger that gives you the most heartburn?
There are much larger long-term risks associated with AI, but among the near-term, voter-facing dangers, one worth highlighting is how chatbots reinforce users’ existing beliefs rather than challenge them. That dynamic deepens echo chambers in already polarized societies like the United States.
This concern is compounded by the fact that these systems generate highly fluent, rhetorically polished answers that can mask subtle framing choices or narrative emphasis, making them feel authoritative even when they omit context or de-emphasize contrary evidence.
What makes this especially troubling is how easily that dynamic can be exploited by malicious – or simply Machiavellian – political actors. AI makes it possible to personalize disinformation at scale, further erode trust in professional, truth-seeking media and push preferred narratives through concentrated control over widely used chatbots or so-called “fact-checking” systems that themselves distort the truth.
In an election cycle, this combination risks turning chatbots into automated political operatives: confident, persuasive and influential, but not reliably tethered to fact.
What are the other key dangers of political chatbots?
Several other dangers are worth highlighting. One is misgrounding: when chatbots cite real sources but attach them to claims those sources don’t actually support. This can make misinformation harder to recognize.
A recent study published in Nature Communications found that between roughly 50% and 90% of large-language-model responses were not fully supported – and sometimes contradicted – by the sources they cited. While the study focused on medical queries, it highlights a general failure mode of citation-based systems that carries over directly to political and policy contexts.
Another danger is hidden bias and framing effects. Small asymmetries in how information is presented can subtly nudge political attitudes while still sounding neutral and professional.
There’s also the longer-term concern of cognitive offloading, where voters increasingly rely on AI summaries instead of engaging directly with complex political issues, weakening the habits of critical evaluation that democracy depends on.
Finally, as these systems become more embedded in political life, there is a risk that concentrated control over widely used chatbots – by corporations or governments with weak democratic constraints – could shape public discourse in ways that are opaque and difficult for citizens to contest.
Of course, peril and progress can go hand in hand, so how might political chatbots be beneficial?
Political chatbots can be beneficial when they lower barriers to participation without nudging people toward specific preferences. They can do this by tailoring explanations to a voter’s background knowledge, improving understanding of complex issues without telling people what to think.
When the information they provide is reliable, they can also compress the time required to gather baseline political information, broadening participation among citizens who care but are stretched thin.
Finally, they can help voters understand dense ballot initiatives by translating legal or technical language into plain English and fill information gaps in local elections where news coverage is sparse.
For a voter who might engage a chatbot for political information, what tips would you offer?
The most important advice is to treat a chatbot as a starting point, not an authority. Voters should cross-check claims with trusted sources, ask follow-up questions when something feels off and explicitly ask the system to reconsider or verify its answers rather than accepting the first response.
One underappreciated step is adjusting how the chatbot is instructed to behave. Many systems allow users to select a more direct or “efficient” style and to add custom instructions – for example, asking the model to be concise, avoid flattery, correct errors plainly and prioritize accuracy over agreement. Those kinds of settings can reduce the tendency toward preference-confirming or overly agreeable responses and encourage more critical exchanges. This isn’t a cure-all, but it can shift the interaction from reassurance toward scrutiny.
Ultimately, using political chatbots responsibly requires the same thing democracy does: active engagement, not passive consumption.
Crystal ball, please – and in just a few sentences: Will AI do more to undermine or strengthen democracy as we tend to think of it?
In the near term, AI is likely to do both, but over the longer run, the risks to democracy are more serious than many people appreciate.
AI enables highly personalized disinformation and social engineering at scale, which – if misused – can destabilize societies, fuel unrest or conflict, and entrench forms of digital authoritarianism that are difficult to reverse.
While AI will bring enormous benefits to science, medicine and many areas of public life, we are currently far better at building these systems than at governing them. Whether AI ultimately strengthens or undermines democracy will depend less on the technology itself than on whether societies develop the institutions, norms and safeguards needed to manage its risks.