Danielle Caled, Mário J. Silva.
Abstract
This review discusses the dynamic mechanisms of misinformation creation and spreading used in social networks. It includes: (1) a conceptualization of misinformation and related terms, such as rumors and disinformation; (2) an analysis of the cognitive vulnerabilities that hinder the correction of the effects of an inaccurate narrative already assimilated; and (3) an interdisciplinary discussion on different strategies for coping with misinformation. The discussion encompasses journalistic, educational, governmental and computational viewpoints on the topic. The review also surveys how digital platforms handle misinformation and gives an outlook on opportunities to address it in light of the presented viewpoints.
Keywords: Digital media; Digital misinformation; Disinformation; Fact-checking; Fake news
Year: 2021 PMID: 34075349 PMCID: PMC8156576 DOI: 10.1007/s42001-021-00118-8
Source DB: PubMed Journal: J Comput Soc Sci ISSN: 2432-2725
Different shapes of misinformation
| Disinformation | False information intended to mislead. Those who amplify disinformation do not always do so intentionally; e.g., news organizations and social media are frequently manipulated by deceivers into disseminating inaccurate or misleading information. |
| Rumor | A piece of information whose “veracity status is yet to be verified at the time of posting”. |
| Clickbait | Provocative and catchy headlines used to attract a greater flow of readers to websites, appealing to users’ curiosity and luring them to click on links that do not deliver what was promised. |
| Satirical News | The use of sarcasm and irony to provoke laughter or mockery and entertain the reader; relies on unexpectedness, frequently combining incompatible entities and/or ideas. |
| Social Spam | Different kinds of attacks (e.g., phishing, spreading of advertising messages and viruses) promoted by malicious agents. |
Following the typology proposed by Saez-Trumper [19], and adding the definitions proposed by Fernquist et al. [27], the table below describes the most popular mechanisms used to spread misinformation online.
| Type | Technique | Description |
|---|---|---|
| Social | Astroturfing | A practice of disguising the sponsors of a message to give the impression that it originated spontaneously, representing the public interest and community concerns. |
| Social | Circular reporting | Information originated by a single source that appears to come from multiple independent sources and channels with minor modifications. |
| Social | Click farms | An operation in which a large group fraudulently interacts with a website to artificially boost internet traffic, deceiving online systems. |
| Social | Data voids | Manipulations that exploit the lack of natural content on a topic, inducing search engines to return low-quality and problematic content. |
| Social | Sock-puppets | The use of false or misleading identities on the Internet to interact with ordinary users on social media for purposes of deception. |
| Social | Web brigades | A set of users coordinated to undertake large-scale disinformation campaigns by exploiting the weaknesses of communities and systems. |
| Technical | Deepfakes | Manipulations created using deep learning techniques trained on large numbers of samples to automatically map facial expressions and achieve face swapping. |
| Technical | Spam bots | Bots designed to post in online comment sections, spread advertisements, or extract contact information for spam mailing lists. |
| Technical | Social bots | Bots designed to automatically spread messages and advocate ideas, thereby influencing public opinion on a given topic. They can also create fake accounts and simulate the popularity of social media profiles (e.g., through a massive network of followers). |
| Hybrid | Cyborgs | Hybrid accounts combining automated and human curation. In such accounts, a human periodically takes over the bot account to disguise it and increase its credibility. |
| Hybrid | Sybils | Impersonators who try to connect with a real user’s friends and take advantage of that user’s reputation. |
Fig. 1 Disinformation and rumor propagation. Propagation generally starts when a social media user posts an alleged story (1). The story may then be shared with additional evidence (2). Other social media users begin to challenge the credibility of the story (3). After some time, a consensus about the veracity of the story emerges (4). During the sharing phase, the story may be picked up by small outlets, which spread the narrative (a) and assign a credibility stamp to it (b). Larger news sites are then encouraged to replicate the story until it is no longer possible to find the original source (c)
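To make the stages of Fig. 1 concrete, here is a minimal Python sketch (not from the paper) that models the propagation as a state machine with the social track (1)-(4) and the press track (a)-(c); all names (`Stage`, `TRANSITIONS`, `advance`) are hypothetical.

```python
from enum import Enum, auto

class Stage(Enum):
    POSTED = auto()            # (1) a user posts an alleged story
    SHARED = auto()            # (2) story is shared, possibly with added evidence
    CHALLENGED = auto()        # (3) other users question its credibility
    CONSENSUS = auto()         # (4) a veracity consensus emerges
    SMALL_OUTLET = auto()      # (a) small outlets pick the story up
    CREDIBILITY_STAMP = auto() # (b) outlet coverage lends credibility
    MASS_REPLICATION = auto()  # (c) larger sites replicate; the source is lost

# Allowed transitions; sharing can branch into the press track.
TRANSITIONS = {
    Stage.POSTED: {Stage.SHARED},
    Stage.SHARED: {Stage.CHALLENGED, Stage.SMALL_OUTLET},
    Stage.CHALLENGED: {Stage.CONSENSUS},
    Stage.SMALL_OUTLET: {Stage.CREDIBILITY_STAMP},
    Stage.CREDIBILITY_STAMP: {Stage.MASS_REPLICATION},
    Stage.CONSENSUS: set(),
    Stage.MASS_REPLICATION: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move a story to its next stage, rejecting transitions Fig. 1 does not allow."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.name} -> {target.name}")
    return target

stage = Stage.POSTED
stage = advance(stage, Stage.SHARED)
stage = advance(stage, Stage.SMALL_OUTLET)  # story branches into the press track
```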
Cognitive vulnerabilities that can be exploited for spreading misinformation
| Confirmation bias | The act of searching for evidence that supports existing biases or expectations. |
| Motivated reasoning | The tendency to “scrutinize ideas more carefully” when we do not agree with them. |
| Biased assimilation | A process related to motivated reasoning in which people interpret new information in a biased way, in accordance with their own beliefs. |
| Hostile media effect | An effect in which individuals’ preexisting stance towards a news source shapes their perception of bias. Because of this effect, people with opposing views, when accessing the same reports, tend to perceive those reports as biased against their own opinions. |
| Repeated exposure | Repetition leads to familiarity, and people use familiarity as a proxy for credibility. Repetition increases processing fluency (the ease with which information is recalled), which, when perceived as discrepant from a comparison standard, may affect truth judgments. |
| Denial transparency | The ineffectiveness of denying a proposition. It is attributed to the way people process information cumulatively, always appending new pieces to their “store of knowledge” without deleting previous information. |
| Backfire effect | The increase in people’s acceptance of challenged beliefs when they are presented with contradictory evidence. |
| Group polarization | The predictable behavior of group members adopting a more extreme stance after group deliberation. |
| Causal inference making | The act of attributing unwarranted cause-effect relationships to contiguous events. After an event occurs, people tend to mistake their inferences for real memories of the event, yielding auto-suggestion errors. |
| Emotion | Previous research indicates that the accuracy of personal beliefs and the resulting attitudes can be shaped by a person’s emotional state and by the prevalent tone of media coverage. |
Characterization of decision-making processes
| Autonomous | Solutions supporting an autonomous decision-making process leave the news consumption decision entirely up to the reader. They aim to develop literacy so that consumers are empowered to judge the quality of the available content. |
| Mediated | In mediated consumption, the available information is partially curated by third-party agents, such as journalists or algorithms. These solutions are designed to facilitate evaluation tasks. Although mediation solutions provide inputs that can be useful in the assessment, the final verdict is always issued by the consumer. |
| Controlled | In a controlled decision-making process, solutions curate the available information without consulting the consumer. This includes pre-established news analysis or even the omission of news content deemed by third-party agents to be malicious, dangerous, or inappropriate. |
Policies adopted by leading companies to curb misinformation
| Facebook | Investments in partnerships with journalists, academics, and independent fact-checkers to reduce the spread of misinformation^a. |
| Google | Launch of the Google News Initiative^b to fight misinformation and support journalism, based on three pillars: (1) increasing the integrity of the information displayed, especially during breaking news or crisis situations; (2) collaborating with the industry to surface accurate information; and (3) helping individuals distinguish quality content online through media literacy. |
| Microsoft | Creation of advertising policies to prohibit “ads for election related content, political parties and candidates, and ballot measures globally”; application of these policies to Microsoft services, such as Bing and LinkedIn; partnership with NewsGuard. |
| Twitter | Ban on political advertising^c; interactions with the public to jointly build policies against media manipulation. |
^a https://www.facebook.com/communitystandards/
^b https://newsinitiative.withgoogle.com/
^c According to Twitter CEO Jack Dorsey, political advertising forces “highly optimized and targeted political messages on people”, which brings significant risks as “it can be used to influence votes to affect the lives of millions”. See: https://twitter.com/jack/status/1189634360472829952
The roles of computational solutions, mapped to the corresponding decision-making process (DMP) and target audiences, followed by examples of existing tools for each purpose
| Role | DMP | Target audience | Examples |
|---|---|---|---|
| Education | Autonomous | Consumers (general public) | Google Interland; Bad News |
| Collaboration | Autonomous | Journalists and fact-checkers; consumers (general public) | Newstrition; Checkdesk; Truly Media |
| Assistance | Mediated | Journalists and fact-checkers; consumers (general public) | Full Fact’s Live platform; Chequeabot; NewsScan |
| Communication | Controlled | Consumers (general public) | NewsGuard |
| Decision | Controlled | Consumers (general public) | News Coach; ClaimBuster |
Fig. 2 Architecture of a misinformation classification system. Automatic tasks: misinformation detection, misinformation tracking, evidence retrieval, stance classification, and veracity classification. The output of the automatic pipeline is manually evaluated by a human validator. The human verdict is stored in a repository, which delivers reliable material to fact-checking services and end-to-end assisting tools
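A minimal sketch of how the Fig. 2 pipeline could be wired together end to end, assuming hypothetical stage functions with trivial placeholder heuristics; none of these names or heuristics come from the paper, and each placeholder stands in for a real trained model.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    related_posts: list[str] = field(default_factory=list)  # filled by tracking
    evidence: list[str] = field(default_factory=list)       # filled by retrieval
    stances: list[str] = field(default_factory=list)        # per-evidence stance
    verdict: str | None = None                              # veracity label

def detect(posts: list[str]) -> list[Claim]:
    """Misinformation detection: flag check-worthy claims (placeholder heuristic)."""
    return [Claim(p) for p in posts if "breaking" in p.lower()]

def track(claim: Claim, stream: list[str]) -> None:
    """Misinformation tracking: collect reposts/variants of the claim (placeholder)."""
    claim.related_posts = [p for p in stream if claim.text[:20].lower() in p.lower()]

def retrieve_evidence(claim: Claim, corpus: list[str]) -> None:
    """Evidence retrieval: gather documents mentioning the claim (placeholder)."""
    claim.evidence = [d for d in corpus if any(w in d for w in claim.text.split()[:3])]

def classify_stance(claim: Claim) -> None:
    """Stance classification: label each evidence item deny/neutral (placeholder)."""
    claim.stances = ["deny" if "false" in e.lower() else "neutral" for e in claim.evidence]

def classify_veracity(claim: Claim) -> None:
    """Veracity classification: aggregate stances into a provisional verdict."""
    claim.verdict = ("likely-false"
                     if claim.stances.count("deny") > len(claim.stances) / 2
                     else "unverified")

def human_validate(claim: Claim, repository: list[Claim]) -> None:
    """A human validator reviews the automatic verdict before it enters the repository."""
    repository.append(claim)  # in a real system, an interactive review step

stream = ["BREAKING: miracle cure found!", "Nice weather today."]
corpus = ["Experts say the miracle cure claim is false."]
repository: list[Claim] = []
for claim in detect(stream):
    track(claim, stream)
    retrieve_evidence(claim, corpus)
    classify_stance(claim)
    classify_veracity(claim)
    human_validate(claim, repository)
```

The key design point reflected in Fig. 2 is that the automatic pipeline never publishes a verdict directly: the human validation step sits between the classifiers and the repository that feeds fact-checking services.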
Approaches and limitations of different perspectives to handle misinformation
| Approach | Plus (+) | Minus (−) |
|---|---|---|
| Journalism | Fact-checking; increasing the literacy of journalists to avoid giving voice to false narratives. | Corrections fail to reach a significant segment of the audience; denial effects caused by cognitive biases may reinforce belief in false stories; manual corrections are limited in scale. |
| Education | Promoting information and media literacy in each individual. | Focus on the long term; fails to consider structural problems in education; may be disregarded by authoritarian governments; is expensive; requires retraining. |
| Government solutions | Definition of a clear boundary between information and disinformation; punitive or regulatory measures can contain the production and dissemination of disinformation. | Content at the boundary is hard to regulate; may be used to restrict freedom of expression; may be perceived as censorship; lacks extraterritorial application; can inhibit dissenting voices; individuals will find ways to bypass regulations. |
| Digital platforms | Enforcing moderation of content and transparency of advertising; promotion of quality news; partnerships with fact-checking agencies. | Vulnerable to commercial interests of customers and partners; dependent on proprietary software; opaque moderation process may be perceived as biased. |
| Computer Science | Use of computational resources to support automatic fact-checking and misinformation detection; development of misinformation indicators for promoting media literacy. | Generally fails to address consumers’ needs; may lack transparency; data annotation for training models is not scalable; models quickly become outdated. |
Examples of measures for addressing cognitive vulnerabilities as a way to combat misinformation
| Vulnerability | Defense | Example |
|---|---|---|
| Confirmation bias & Repeated exposure | Digital platforms & Computer science | Social media and web search engines could adapt their algorithms to expose users to a greater diversity of narratives, reducing filter bubbles. We suggest using stance detection methods to identify divergent texts, and redesigning digital platforms’ interfaces to prioritize a balance of opinions. A redesigned presentation could, for example, show conflicting viewpoints side by side when displaying disputed stories (see the sketch following this table). |
| Motivated reasoning & Biased assimilation | Education & Journalism | Literacy approaches could be used to make readers aware of their cognitive biases, encouraging critical thinking and engagement with a broad range of content. Educational strategies should also focus on teaching readers how to differentiate factual texts from opinionated material and on raising awareness of bad journalistic practices, such as the use of clickbait, personal attacks, or fallacies. |
| Hostile media effect | Journalism & Education | News outlets could represent significant views fairly, proportionately, and, as far as possible, without editorial bias. |
| Denial transparency & Backfire effect | Computer science, Education & Journalism | Computer scientists should keep in mind that most individuals do not understand how machine learning models work. Thus, rather than presenting opaque verdicts on news veracity, computational solutions should offer explainable and interpretable misinformation indicators. |
| Group polarization | Digital platforms, Computer science & Education | Social networks can use machine learning algorithms to identify filter bubbles and monitor the visual or textual material shared in these groups. Once an intensification of polarization is detected, platforms should run educational campaigns aimed at promoting dialogue, while considering the specificities of each group. |
| Emotion | Digital platforms & Governmental solutions | Social media could regulate and curb hate speech by limiting the influence of polarizing content and restricting the exposure and reach of hateful material. Digital platforms can be even more proactive in combating this type of practice, alerting legal authorities to crimes of slander and defamation and providing legal evidence when necessary. To this end, we recommend the creation and/or expansion of compliance programs in private companies, observing local and extraterritorial legislation. |
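As referenced in the first row of the table, a minimal sketch of how divergent texts about the same story could be surfaced for side-by-side display. This is an illustration, not the authors’ method: TF-IDF cosine similarity stands in for a real stance detection model, and the function name `most_divergent_pair` is hypothetical.

```python
# Requires scikit-learn and numpy. Picks the pair of posts about the same
# story that are least similar to each other, as candidates for a
# side-by-side "conflicting viewpoints" display.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_divergent_pair(posts: list[str]) -> tuple[str, str]:
    """Return the two posts with the lowest pairwise TF-IDF cosine similarity."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, np.inf)  # ignore self-similarity
    i, j = np.unravel_index(np.argmin(sim), sim.shape)
    return posts[i], posts[j]

posts = [
    "The new vaccine has been proven completely safe in all trials.",
    "Officials confirm the vaccine passed its final safety trials.",
    "Leaked data shows the vaccine trials hid serious side effects.",
]
left, right = most_divergent_pair(posts)
print("Show side by side:", left, "|", right)
```

A production system would replace the TF-IDF similarity with a trained stance classifier, but the surfacing logic, selecting maximally divergent narratives for balanced presentation, would be similar.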