Chatbots Show Promise in Swaying Voters, Researchers Find

Political operations may soon deploy a surprisingly persuasive new campaign surrogate: a chatbot that’ll talk up their candidates. According to a new study published in the journal Nature, conversations with AI chatbots have shown the potential to influence voter attitudes, which should raise significant concern over who controls the information being shared by these bots and how much it could shape the outcome of future elections.

Researchers, led by David G. Rand, Professor of Information Science, Marketing, and Psychology at Cornell, ran experiments pairing potential voters with a chatbot designed to advocate for a specific candidate in several different elections: the 2024 US presidential election and the 2025 national elections in Canada and Poland. They found that while the chatbots were able to slightly strengthen the support of potential voters who already favored the candidate the bot was advocating for, the bots were even more successful at persuading people who were initially opposed to their assigned candidate.

For the US experiment, the study tapped 2,306 Americans and had them indicate their likelihood of voting for either Donald Trump or Kamala Harris, then randomly paired them with a chatbot that would push one of those candidates. Similar experiments were run in Canada, with the bots tasked with backing either Liberal Party leader Mark Carney or Conservative Party leader Pierre Poilievre, and in Poland, where the bots backed either the Civic Coalition’s candidate Rafał Trzaskowski or the Law and Justice party’s candidate Karol Nawrocki.


In all cases, the bots were given two primary objectives: to increase support for the model’s assigned candidate and to either increase voting likelihood if the participant favored the model’s candidate or decrease it if they favored the opposition. Each chatbot was also instructed to be “positive, respectful and fact-based; to use compelling arguments and analogies to illustrate its points and connect with its partner; to address concerns and counter arguments in a thoughtful manner and to begin the conversation by gently (re)acknowledging the partner’s views.”

The bots resorted to making more inaccurate claims when pushing right-wing candidates

While the researchers found that the bots were largely unsuccessful at increasing or decreasing a person’s likelihood of voting at all, they were able to shift a voter’s opinion of a given candidate, including convincing people to reconsider their support for their initially favored candidate when talking to an AI pushing the opposite side.

The researchers noted that chatbots were more persuasive when presenting fact-based arguments and evidence or discussing policy than when trying to convince a person of a candidate’s personal qualities, suggesting people likely view the chatbots as having some authority on the matter. That’s troubling for a number of reasons, not least because the researchers found that while the chatbots presented their arguments as factual, the information they provided was not always accurate. They also found that chatbots advocating for right-wing candidates made more inaccurate claims in every experiment.


The results mostly take the form of granular data about shifts in attitudes on individual issues, which vary between the races in different countries, but the researchers “observed significant treatment effects on candidate preference that are larger than typically observed from traditional video advertisements.”

In the experiments, participants were aware that they were communicating with a chatbot that intended to persuade them. That isn’t the case when people communicate with chatbots in the wild, which may carry hidden underlying instructions. One need look no further than Grok, the chatbot from Elon Musk’s xAI, for an example of a bot that has been obviously weighted to favor Musk’s personal beliefs.

Because large language models are black boxes, it’s difficult to tell what information goes in and how it influences the outputs, but there is little to nothing stopping a company with preferred political or policy goals from instructing its chatbot to advocate for those outcomes. Earlier this year, a paper published in Humanities & Social Sciences Communications noted that LLMs, including ChatGPT, made a decided rightward shift in their political values after the election of Donald Trump. You can draw your own conclusions as to why that might be, but it’s worth being aware that the outputs of chatbots are not free of political influence.


