
Identifying the Core Issue
The internet has made it easier than ever before to encounter and spread conspiracy theories. And while some are harmless, others can be deeply damaging, sowing discord and causing panic.
In addition, AI can now generate misleading images that contribute to the spread of false information.
A recent vivid example is the set of AI-generated images purporting to show the aftermath of Hurricane Milton, which spread rapidly across the internet.
After Hurricane Milton hit Florida, conspiracy theorists turned their attention to Walt Disney World, spreading new misinformation. “Hurricane Milton has flooded Disney World in Orlando,” wrote one known source of disinformation on X, alongside photos that X users immediately noted were probably created with an automated AI image generator. By that point, the post had already been viewed more than 300,000 times.
A Step Toward Solving the Problem
Now, researchers believe they’ve uncovered a new tool for combating false conspiracy theories: AI chatbots.
The study by researchers from MIT Sloan and Cornell University, published in the journal Science, found that chatting about a conspiracy theory with a large language model (LLM) reduced people’s belief in it by about 20%—even among participants who claimed that their beliefs were important to their identity.
Approach to Understanding and Challenging Conspiracy Theories
The study used a distinctive approach that enabled in-depth interaction with participants’ personal beliefs. Participants were first asked to identify and describe a conspiracy theory they believed in using their own words, along with the evidence supporting their belief.
GPT-4 Turbo then used this information to generate a personalized summary of the participant’s belief and initiate a dialogue. The AI was instructed to persuade users that their beliefs were untrue, adapting its strategy based on each participant’s unique arguments and evidence.
These conversations, lasting an average of 8.4 minutes, allowed the AI to directly address and refute the specific evidence supporting each individual’s conspiratorial beliefs, an approach that was impossible to test at scale prior to the technology’s development.
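To make the procedure concrete, here is a minimal sketch of how such a personalized debunking dialogue might be wired up with the OpenAI Python client. This is not the researchers’ actual code: the model name, the prompt wording, and the participant_belief and participant_evidence variables are assumptions chosen only to mirror the steps described above.

```python
# Minimal sketch of a personalized debunking dialogue (not the study's actual code).
# Assumes the OpenAI Python client is installed and an API key is set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical participant input, standing in for the study's open-ended questionnaire.
participant_belief = "The moon landings were staged in a film studio."
participant_evidence = "The flag appears to wave even though there is no air on the moon."

# Instruct the model to engage with this specific belief and its supporting evidence.
system_prompt = (
    "You are talking with a person who believes the following conspiracy theory:\n"
    f"{participant_belief}\n"
    f"Their supporting evidence: {participant_evidence}\n"
    "Address their specific arguments with accurate, verifiable facts and try to "
    "persuade them, politely and without ridicule, that the theory is untrue."
)

messages = [{"role": "system", "content": system_prompt}]

# A short multi-turn exchange: the participant replies, the model counters each argument.
for _ in range(3):
    user_turn = input("Participant: ")
    messages.append({"role": "user", "content": user_turn})

    response = client.chat.completions.create(
        model="gpt-4-turbo",  # the model family named in the article
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print("AI:", reply)
```

The key design point this sketch illustrates is that the rebuttal is tailored: the participant’s own belief and evidence are placed in the prompt, so the model responds to their specific arguments rather than delivering a generic fact-check.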
Key Results
The results of the intervention were striking. On average, the AI conversations reduced participants’ belief in their chosen conspiracy theory by about 20%, and roughly one in four participants, all of whom believed the conspiracy beforehand, disavowed it after the conversation. The impact also proved durable, remaining undiminished even two months after the conversation.
“Even in a lab setting, 20% is a significant impact on changing people’s beliefs,” says Yunhao (Jerry) Zhang, a postdoctoral fellow affiliated with the Psychology of Technology Institute who studies AI’s impacts on society. “In the real world, even a 5% or 10% shift would still be meaningful.”
Notably, the impact of the AI dialogues extended beyond mere changes in belief. Participants also demonstrated shifts in their behavioral intentions related to conspiracy theories. They reported being more likely to unfollow people espousing conspiracy theories online, and more willing to engage in conversations challenging those conspiratorial beliefs.
Because GPT-4 Turbo is so adaptable, the approach could easily be connected to other platforms for users to interact with in the future.
Criticism and Suggestions
I believe this solution is promising, but it is not a complete fix. It works well for individuals, yet it does not address the larger issue of how quickly misinformation spreads on social media. It is a good starting point, but a more comprehensive approach is needed to tackle misinformation at scale, for instance, developing better algorithms to limit the spread of false content and holding creators of online conspiracy content more accountable.
Sources:
1) https://www.theguardian.com/us-news/2024/oct/10/russia-ai-hurricane-milton-disinformation
2) https://mitsloan.mit.edu/press/can-ai-talk-us-out-conspiracy-theories
3) https://undark.org/2024/10/30/could-ai-help-curb-conspiracy-theory-beliefs/