Beyond the Information Given: Factors Intensifying or Attenuating Individuals’ Susceptibility to Misinformation

Open Access
- Author:
- Lee, Sian
- Graduate Program:
- Informatics
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- May 20, 2024
- Committee Members:
- Dongwon Lee, Professor in Charge/Director of Graduate Studies
Dongwon Lee, Major Field Member
Aiping Xiong, Chair & Dissertation Advisor
Mauricio Nascimento, Minor Field Member
Kenneth Huang, Major Field Member
S. Shyam Sundar, Outside Unit & Field Member
- Keywords:
- Misinformation
Fake News
Associative Inference
Cognitive Ability
Fact Check
Fact-Checker
Warning Message
Political Stance Congruency
Susceptibility
Social Media
- Abstract:
- The rise of social media allows broad connections and information sharing but also contributes to the spread of misinformation, impacting society by manipulating elections, promoting false COVID-19 treatments, and reducing vaccination intentions. The prevalence of misinformation is not solely due to technology; misinformation becomes "successful" only when individuals perceive it as true. This dissertation explores factors that intensify or attenuate individuals' susceptibility to misinformation on social media, offering insights into why some individuals are more vulnerable to the same misinformation than others and how to mitigate misinformation effectively.

"Associative inference" is a memory process that lets people form novel (and sometimes false) inferences beyond the information given. The first study examines the effects of associative inference on individuals' susceptibility to misinformation and its interaction with their cognitive ability. Through online experiments, we found that associative inference intensifies susceptibility to misinformation, particularly among individuals with higher cognitive ability.

Meanwhile, the proliferation of misinformation on social media has led platforms like Facebook and X to devise strategies for debunking it. These platforms often deploy warning messages, crafted with third-party fact-checkers, to alert users to potential falsehoods. Despite their crucial role, fact-checkers have been criticized for subjective claim selection and inconsistent evaluation processes. Previous studies evaluating fact-checkers' fact-checking behavior have relied on manually collected data, a process that is time-consuming and thus limited in both the topics and the time periods covered. In response, the second study proposes an automated method to gather data from fact-checkers, identify matching claims, and compare verdicts among major fact-checkers. Analysis of verdicts for matching claims shows that major fact-checkers generally agree with each other.

Still, fact-checking messages from the fact-checkers vary in effectiveness, especially against political misinformation, which becomes deeply entrenched when it aligns with individuals' existing beliefs and ideologies. Perceived bias in fact-checkers can also undermine the credibility of their fact-checking. In the third study, an online experiment was conducted to better understand the impact of political stance congruency among individuals, fact-checkers, and political misinformation. The findings reveal that partisan fact-checkers can decrease the perceived accuracy of political misinformation, particularly when the misinformation is congruent with the participants' ideology, and that explicitly labeled partisan fact-checkers effectively attenuate misbelief in congruent misinformation.

The findings of this dissertation have theoretical implications for understanding the psychological processes behind human susceptibility to misinformation and practical implications for developing more effective fact-checking systems that help online users mitigate misinformation.
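To make the claim-matching step described in the second study concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual pipeline): it pairs claims collected from two fact-checkers by TF-IDF cosine similarity and compares their verdicts for pairs above an assumed similarity threshold. The checker names, toy claims, verdict labels, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch of matching claims across two fact-checkers and comparing
# verdicts. All data, names, and the threshold below are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy (claim text, verdict) pairs standing in for scraped fact-check articles.
checker_a = [("Vaccines contain microchips", "false"),
             ("Mail-in ballots were counted twice in County X", "false")]
checker_b = [("COVID-19 vaccines have microchips in them", "pants-on-fire"),
             ("Drinking bleach cures COVID-19", "false")]

texts_a = [claim for claim, _ in checker_a]
texts_b = [claim for claim, _ in checker_b]

# Fit one vocabulary over both checkers so the claim vectors are comparable.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(texts_a + texts_b)
sims = cosine_similarity(tfidf[:len(texts_a)], tfidf[len(texts_a):])

SIM_THRESHOLD = 0.5  # assumed cutoff for treating two claims as "matching"

for i, row in enumerate(sims):
    j = row.argmax()
    if row[j] >= SIM_THRESHOLD:
        verdict_a, verdict_b = checker_a[i][1], checker_b[j][1]
        print(f"Matched: {texts_a[i]!r} <-> {texts_b[j]!r} "
              f"(sim={row[j]:.2f}, verdicts: {verdict_a} / {verdict_b})")
```

In practice, a pipeline like this would also need to normalize verdict scales across fact-checkers (e.g., mapping "pants-on-fire" and "false" onto a common rating) before agreement can be measured; the sketch only prints the raw labels side by side.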