A new study shows that people embroiled in political discussions on social media find it difficult to identify AI bots, increasing the risk of spreading misinformation.
Social media platforms are increasingly used to engage in political discourse. However, with the rise of AI bots, it is becoming harder to tell whether the user behind an account is human.
AI bots are automated accounts programmed to interact in a very human-like manner. Researchers at the University of Notre Dame in Indiana, US, used AI bots based on large language models (LLMs) – which enable them to understand language and generate text – to engage with humans in political discussion on the social networking platform Mastodon.
These AI bots were customised with different personas that included realistic, varied personal profiles and perspectives on global politics. They were directed to offer commentary and to link global events to personal experiences. Each persona’s design was based on past human-assisted bot accounts that had been successful in spreading misinformation online.
During the experiment, human users failed to identify which accounts were AI bots 58% of the time.
“They knew they were interacting with both humans and AI bots and were tasked to identify each bot’s true nature, and less than half of their predictions were right,” said Paul Brenner, a faculty member and director at the Center for Research Computing at Notre Dame and senior author of the study.
Two of the most successful and least frequently detected personas were characterised as women sharing political opinions on social media, described as organised and capable of strategic thinking. For the researchers, this indicates that AI bots used to spread misinformation can easily deceive people about their true nature.
Of course, spreading misinformation is not new: users have long created social media accounts to spread misinformation with human-assisted bots. The difference now is that AI bots based on LLMs let users do this at far greater scale, more cheaply and quickly. This could have significant ramifications during an election campaign, for example.
To prevent AI from spreading misinformation online and swaying public opinion, Brenner believes governments will need to act through education, legislation and social media account validation policies.