| Literature DB >> 33747252 |
Candice Lanius, Ryan Weber, William I MacKenzie.
Abstract
The COVID-19 infodemic is driven partially by Twitter bots. Flagging bot accounts and the misinformation they share could provide one strategy for preventing the spread of false information online. This article reports on an experiment (N = 299) conducted with participants in the USA to see whether flagging tweets as coming from bot accounts and as containing misinformation can lower participants' self-reported engagement with and attitudes toward the tweets. The experiment also showed participants tweets that aligned with their previously held beliefs to determine how flags affect their overall opinions. Results showed that flagging tweets lowered participants' attitudes toward them, though this effect was less pronounced among participants who frequently used social media or consumed more news, especially from Facebook or Fox News. Some participants also changed their opinions after seeing the flagged tweets. The results suggest that social media companies can flag suspicious or inaccurate content as a way to fight misinformation. Flagging could be built into future automated fact-checking systems and other misinformation abatement strategies of the social network analysis and mining community.
Keywords: COVID-19; Fact-checking; Misinformation; Survey study; Twitter
Year: 2021 PMID: 33747252 PMCID: PMC7954364 DOI: 10.1007/s13278-021-00739-x
Source DB: PubMed Journal: Soc Netw Anal Min
Fig. 1: Tweets for overcounted coronavirus numbers condition
Fig. 2: Tweets for undercounted coronavirus numbers condition
Fig. 3: Differences in preventative behaviors based on belief in COVID-19 count
Fig. 4: Change in rating after tweets flagged
Fig. 5: Differences in news media consumption on opinions about COVID-19 death count
Fig. 6: Hours spent on social media correlated with higher tweet rating despite flags
Fig. 7: Higher news consumption in hours correlated with higher tweet rating despite flags