Artificial intelligence chatbots are highly effective at changing people’s political opinions, according to a study published Thursday, and are particularly persuasive when they use inaccurate information.
The researchers used a crowd-sourcing website to find nearly 77,000 people to participate in the study and paid them to interact with various AI chatbots, including some using AI models from OpenAI, Meta and xAI. The researchers asked for people’s views on a variety of political topics, such as taxes and immigration, and then, regardless of whether the participant was conservative or liberal, a chatbot tried to change their mind to an opposing view.
The researchers found not only that the AI chatbots often succeeded, but also that some persuasion strategies worked better than others.
“Our results demonstrate the remarkable persuasive power of conversational AI systems on political issues,” lead author Kobi Hackenburg, a doctoral student at the University of Oxford, said in a statement about the study.
The study is part of a growing body of research into how AI could affect politics and democracy, and it comes as politicians, foreign governments and others are trying to figure out how they can apply AI to sway public opinion.
The paper, published in the journal Science, found that AI chatbots were most persuasive when they provided study participants with large amounts of in-depth information, rather than when they deployed other debating tactics such as appeals to morality or arguments personalized to the individual.
The implication is that AI chatbots could “exceed the persuasiveness of even elite human persuaders, given their unique ability to generate large quantities of information almost instantaneously during conversation,” the researchers wrote, although they did not pit AI chatbots head-to-head against human debaters.
But the study also said that the persuasiveness of AI chatbots wasn’t entirely on the up-and-up: Within the reams of information the chatbots provided as answers, the researchers discovered many inaccurate assertions.
“The most persuasive models and prompting strategies tended to produce the least accurate information,” the researchers wrote.
They added that they observed “a concerning decline in the accuracy of persuasive claims generated by the most recent and largest frontier models.” Claims made by GPT-4.5 — a model released by OpenAI in February — were significantly less accurate on average than claims from smaller, older models also from OpenAI, they wrote.
“Taken together, these results suggest that optimizing persuasiveness may come at some cost to truthfulness, a dynamic that could have malign consequences for public discourse and the information ecosystem,” they wrote.
OpenAI did not immediately respond to a request for comment Thursday. The study was conducted before OpenAI released its latest model, GPT-5.1, last month.
About 19% of all claims by the AI chatbots in the study were rated as “predominantly inaccurate,” the researchers wrote.
The paper warned that, in an extreme scenario, a highly persuasive AI chatbot “could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries.”
The paper was produced by a team of researchers from the U.K.-based AI Security Institute, a research organization backed by the British government, as well as from the University of Oxford, the London School of Economics, Stanford University and the Massachusetts Institute of Technology. The team received funding from the British government’s Department for Science, Innovation and Technology.
Helen Margetts, a professor of society and the internet at Oxford and a co-author, said in a statement that the research was part of an effort “to understand the real-world effects of LLMs on democratic processes,” referring to large language models, or the technology underlying popular generative AI systems like ChatGPT and Google Gemini.
The study participants were all adults in the United Kingdom, and they were asked questions related to British politics.
The study lands at a time when AI is upending politics from a variety of angles. President Donald Trump has regularly posted AI-created videos and images to social media, some political campaigns have sent out AI-generated fundraising emails or deepfake videos, and state operatives from China and Russia have deployed AI “slop” in their propaganda campaigns.
