Empirical Studies on Platform-Driven and User-Initiated Methods for Misinformation Correction

Open Access
- Author:
- Seo, Hae Seung
- Graduate Program:
- Informatics
- Degree:
- Doctor of Philosophy
- Document Type:
- Dissertation
- Date of Defense:
- October 06, 2023
- Committee Members:
- Dongwon Lee, Professor in Charge/Director of Graduate Studies
Dongwon Lee, Co-Chair & Dissertation Advisor
Bu Zhong, Outside Unit & Field Member
Aiping Xiong, Co-Chair & Dissertation Advisor
Kelley Cotter, Major Field Member
- Keywords:
- Misinformation Correction
Social Media
Misinformation
Warning
Online Experiment
Fake News
- Abstract:
- Due to the proliferation of social media, the amount of online misinformation has escalated into a societal concern. It has deepened social divisions through false political news and led people to fatal outcomes through incorrect treatment methods. While significant effort has been invested in misinformation detection by human fact-checkers and machine-learning algorithms, research on correcting misinformation and preventing its dissemination has not received adequate attention. Misinformation correction refers to the act of informing users about the inaccuracy of specific information in order to alter their perception of and/or behavior toward misinformation. Because users serve as both consumers and disseminators of information on social media, exploring correction methods that effectively reach them must be a priority: making users aware of inaccuracies is vital to preventing them from unwittingly spreading misinformation.

To identify such methods, I conducted three studies encompassing a total of eight online human-subject experiments, focusing on two correction agents on social media and, accordingly, on two approaches to misinformation correction: (1) platform-driven correction and (2) user-initiated correction. The first approach takes the social media platform's perspective. I examined which types of platform warnings can effectively assist users in identifying misinformation and found that, in the absence of source information, a machine learning-driven warning with explanations enhances users' ability to identify fake news. I then investigated the explanatory component more thoroughly, drawing on the framing effect, and confirmed the effectiveness of negative framing. The second approach takes the users' perspective. I explored four types of user-initiated correcting comments that cite reliable sources, and all four substantiated the effectiveness of correcting comments.

This dissertation addresses the following research questions: (1) As a platform-driven correction, can machine learning warnings enhance users' ability to discern misinformation? (2) Is a platform-driven warning with explanations more effective than one without, and what factors influence the effectiveness of platform-driven warnings? (3) As a user-initiated correction, can users effectively correct misinformation through comments? My studies empirically demonstrated that a machine-learning warning with explanations can effectively correct misinformation, and they underscored the valuable role users can play in hindering misinformation dissemination by leaving correcting comments. The significance of this research lies in systematically identifying effective misinformation correction methods through repeated experiments with a substantial number of participants. In future work, I anticipate studies that incorporate elements from a broader range of real-life situations, building on the foundation of this research.