Chatbots can persuade people to stop believing in conspiracy theories

Reading Time: 3 minutes
[Image: a digital hand reaching out from a screen, surrounded by papers]

Identifying the Core Issue

The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and causing panic. 

In addition, AI makes it easy to generate misleading images that contribute to the spread of false information.

A recent, vivid example is the set of AI-generated fake images of Hurricane Milton's aftermath that spread across the internet.

[Video: the spread of misinformation regarding Hurricane Milton]

After Hurricane Milton hit Florida, conspiracy theorists turned their attention to Walt Disney World, spreading new misinformation. “Hurricane Milton has flooded Disney World in Orlando,” wrote one known source of disinformation on X, alongside photos that X users immediately noted were probably created with an AI image generator. By that point the post had already been viewed more than 300,000 times.

A Step Toward Solving the Problem

Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots.

The study by researchers from MIT Sloan and Cornell University, published in the journal Science, found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity.

Approach to Understanding and Challenging Conspiracy Theories

The study used a distinctive approach that enabled in-depth interaction with participants’ personal beliefs. Participants were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief.

GPT-4 Turbo then used this information to generate a personalized summary of the participant’s belief and initiate a dialogue. The AI was instructed to persuade users that their beliefs were untrue, adapting its strategy based on each participant’s unique arguments and evidence.

These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute the specific evidence supporting each individual’s conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology’s development.
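
To make this setup concrete, below is a minimal, illustrative sketch in Python of how such a personalized dialogue could be wired up with the OpenAI chat completions API. The system prompt, the sample belief and evidence, and the model settings are assumptions for illustration only; they are not the study’s actual materials or code.

```python
# Minimal sketch (not the study's actual code) of a personalized
# counter-conspiracy dialogue using the OpenAI chat completions API.
# Requires: pip install openai, and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical participant input, mirroring the study design: the belief
# in the participant's own words plus their supporting evidence.
belief = "The 1969 Moon landing was staged in a film studio."
evidence = "The flag appears to wave even though there is no air on the Moon."

# Illustrative system prompt; the real study used its own instructions.
system_prompt = (
    "You are talking with someone who believes the following claim: "
    f"{belief} Their stated evidence is: {evidence} "
    "Politely and factually persuade them that the claim is untrue, "
    "addressing their specific evidence rather than giving generic debunking."
)

messages = [{"role": "system", "content": system_prompt}]

def reply(user_message: str) -> str:
    """Send one user turn and return the model's tailored counter-argument."""
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model family used in the study
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

if __name__ == "__main__":
    print(reply("If the landing were real, why does the flag move?"))
```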

Key Results

The results of the intervention were striking. On average, the conversations reduced participants’ belief in their chosen conspiracy theory by about 20%, and roughly 1 in 4 participants, all of whom believed the conspiracy beforehand, disavowed it after the conversation. The effect proved durable, remaining undiminished even two months after the conversation.

“Even in a lab setting, 20% is a significant impact on changing people’s beliefs,” says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society. “In the real world, even a 5% or 10% shift would still be meaningful.”

Notably, the impact of the AI dialogues extended beyond mere changes in belief. Participants also demonstrated shifts in their behavioral intentions related to conspiracy theories. They reported being more likely to unfollow people espousing conspiracy theories online, and more willing to engage in conversations challenging those conspiratorial beliefs.

The adaptable nature of GPT-4 Turbo means it could easily be connected to different platforms for users to interact with in the future.
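
As a rough illustration of that idea, the sketch below is hypothetical and not something described in the study: it shows how a dialogue function like the one above could sit behind a small HTTP endpoint that a website widget or messaging bot might call. The model call is stubbed out here and would be replaced by the actual LLM dialogue in a real integration.

```python
# Hypothetical integration sketch: exposing the chatbot behind a tiny
# HTTP endpoint that other platforms could call. Requires: pip install flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

def counter_conspiracy_reply(message: str) -> str:
    """Placeholder for the LLM dialogue function from the earlier sketch."""
    return f"Let's look at the evidence behind the claim: {message!r}"

@app.post("/chat")
def chat():
    # Expects JSON like {"message": "..."} and returns the bot's reply.
    payload = request.get_json(force=True)
    return jsonify({"reply": counter_conspiracy_reply(payload.get("message", ""))})

if __name__ == "__main__":
    app.run(port=8000)
```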


Criticism and Suggestions

I believe this solution is promising, but it’s not a complete fix. It works well for individuals, but it doesn’t address the larger issue of how quickly misinformation spreads on social media. It’s a good starting point, yet a more comprehensive approach is needed to tackle misinformation at a larger scale, for instance by developing better algorithms to limit the spread of false content and by holding creators of conspiracy theories online more accountable.


Sources:

1)https://www.technologyreview.com/2024/09/12/1103930/chatbots-can-persuade-people-to-stop-believing-in-conspiracy-theories/

2)https://www.theguardian.com/us-news/2024/oct/10/russia-ai-hurricane-milton-disinformation

3)https://www.nbcnews.com/tech/internet/hurricane-milton-conspiracy-theory-government-storm-biden-rcna174558

4)https://mitsloan.mit.edu/press/can-ai-talk-us-out-conspiracy-theories

5)https://undark.org/2024/10/30/could-ai-help-curb-conspiracy-theory-beliefs/


2 thoughts on “Chatbots can persuade people to stop believing in conspiracy theories”

  1. 52438 says:

    Using AI chatbots is a promising approach to dealing with conspiracy theories. It’s impressive that chatbots can reduce belief in conspiracies by 20% through personalized conversations. While this shows AI’s potential to encourage critical thinking, I’m curious about how lasting these changes are. And looking at the bigger picture, it’s a promising step toward harnessing technology for mutual understanding, but we’d better think about the ethics too.

    • 52509 says:

      Making these changes last is a more complicated process. However, I believe they can be made more durable with certain techniques. For instance, as mentioned in the post, the AI could be connected to different platforms where users interact with it, which would reinforce the shift in people’s beliefs about conspiracy theories over time. As for the ethical aspect, it’s an important point that definitely needs to be taken into account. Chatbots should offer ideas in a clear, non-imposing way. For example, if implemented on social networks, chatbots could pose thought-provoking questions rather than direct statements, so that people could reconsider certain conspiracy theories on their own.
