Abstract: Combating Misinformation through Social Sharing: Can Heuristic Cues Prime Individuals’ Sharing Intention of Corrective Messages?

◆ Yuan Wang, University of Maryland, College Park

Although a wealth of studies has examined the effects of misinformation (Ecker et al., 2014; Ecker et al., 2015) and corrective messages (Aikin et al., 2015; Bode & Vraga, 2018), little attention has been paid to the social diffusion patterns of misinformation and corrective messages (Liang, 2018). The participatory nature of social media has made it a level platform on which misinformation and corrective messages compete with each other. However, misinformation has been found to hold unique advantages in viral sharing compared with corrective messages designed to refute misinformation and promote scientific knowledge (Southwell & Thorson, 2015).
To provide a more nuanced understanding of how corrective messages are shared by individuals and to explore the mechanisms behind the sharing decision, this study has three aims:
1) Understand whether individuals selectively share the message that is consistent with their prior attitudes after being exposed to both misinformation and corrective messages.
2) Understand whether source cues and social endorsement cues of a corrective message could prime individuals’ sharing intention.
3) Understand the mechanism underlying the influence of source cues and social endorsement cues by examining the mediating role of attitude certainty.
A total of 267 participants were recruited from a large eastern university for course credit. A 2 (expert source / non-expert source) x 2 (high social endorsements / low social endorsements) between-subjects experiment was conducted, in which all participants were first exposed to a misinformation message and then to a corrective message. Source cues were operationalized as expert source versus non-expert source. The expert source was represented by the FDA, the administration responsible for issuing food regulations and communicating food risks; the non-expert source was represented by a fictitious ordinary-person account. Social endorsement cues were represented by the number of shares and the number of likes.

Results showed that individuals are more likely to share (b = .17, p < .05), like (b = .29, p < .05), and add a supportive comment to (b = .15, p < .05), and less likely to add a refutative comment to (b = -.29, p < .05), the corrective message when it is pro-attitudinal rather than counter-attitudinal.
Regarding the influence of message cues, results showed that when the source of the corrective message is an expert, participants are more likely to share it when it is accompanied by high social endorsements relative to low social endorsements (MD = 5.34, SD = .22, p < .05); however, there is no significant difference in sharing intention between high- and low-social-endorsement messages when the source is a social peer.
Moreover, when the corrective message is accompanied by high social endorsements, there is no significant difference in participants' sharing intention between an expert source and a social peer source; however, when the corrective message is accompanied by low social endorsements, participants are significantly more likely to share the message (MD = .52, SD = .22, p < .05) and to add a supportive comment (MD = .46, SD = .22, p < .05) when it is from an expert than from a social peer.