
Countering misinformation via WhatsApp: Preliminary evidence from the COVID-19 pandemic in Zimbabwe.

Jeremy Bowles, Horacio Larreguy, Shelley Liu.

Abstract

We examine how information from trusted social media sources can shape knowledge and behavior when misinformation and mistrust are widespread. In the context of the COVID-19 pandemic in Zimbabwe, we partnered with a trusted civil society organization to randomize the timing of messages targeting misinformation about the virus, disseminated to 27,000 WhatsApp newsletter subscribers. We examine how exposure to these messages affects individuals' beliefs about how to deal with the virus and their preventative behavior. In a survey of 864 respondents, we find a 0.26σ increase in knowledge about COVID-19, as measured by responses to factual questions. Through a list experiment embedded in the survey, we further find that potentially harmful behavior (not abiding by lockdown guidelines) decreased by 30 percentage points. These results show that social media messaging from trusted sources may have substantively large effects not only on individuals' knowledge but ultimately also on related behavior.


Year:  2020        PMID: 33052967      PMCID: PMC7556529          DOI: 10.1371/journal.pone.0240005

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Social media platforms have become a central source of information for individuals in the Global South [1]. For example, because traditional media reach in sub-Saharan Africa is low and the mobile data costs of accessing the internet are high, WhatsApp has become a low-cost "one-stop shop" [1, 2]. Unfortunately, social media platforms are also highly susceptible to misinformation due to the low cost of access, the virality of posts, individuals' trust in their social networks, and the high cost of fact-checking [3-6]. Amidst the COVID-19 pandemic, as had been the case with the 2014-2015 Ebola epidemic [7] and the 2015-2016 Zika epidemic [8], social media has exacerbated this misinformation problem and muddied public knowledge about the virus throughout the Global South [9-11].

We study whether trusted sources of information can also leverage the ubiquity of social media to combat misinformation and related potentially harmful behavior. Specifically, we examine the effectiveness of WhatsApp messages from a trusted civil society organization (CSO) in Zimbabwe aimed at countering misinformation in the context of the COVID-19 pandemic.

Zimbabweans rely heavily on WhatsApp to access and share information due to prohibitive data costs and the anonymity that WhatsApp affords. As a result, the platform accounts for close to half of all internet traffic in Zimbabwe, far more than competing platforms such as Facebook, which commands only 1% of internet traffic [12]. While the exact number of unique WhatsApp users is not known, a 2017 estimate suggests that there are at least 5.2 million WhatsApp users in the country [13], roughly 37% of the total population or 60% of the population over the age of fourteen. During the study period, the COVID-19 virus had reached Zimbabwe, and the government had just imposed a national lockdown to limit its spread.
Already, across various social media platforms, and particularly through WhatsApp, posts with misinformation about virus transmission and cures had gone viral. Further, given the low official infection rates, many questioned the necessity of preventative measures [14]. Misinformation about the virus and low trust in the government threatened lockdown compliance in the country.

To address this problem, we partnered with two organizations, Internews and Kubatana, over a two-week period to disseminate truthful information about COVID-19 in Zimbabwe. Each week, we leveraged Kubatana's large and wide-reaching WhatsApp subscriber base to randomize the timing of message dissemination: the treatment condition received these messages on Monday, while the control group received them on Saturday. We measured individuals' knowledge through a mid-week survey, and embedded a list experiment designed to measure compliance with social distancing while addressing concerns about demand effects and social desirability bias.

In contrast to the mixed results from the Global North on countering health-related misinformation [15-18], we find that social media messaging against misinformation from a trusted source can increase both knowledge about COVID-19 and preventative behavior. These results speak to the potential of trusted social media sources to combat misinformation and related potentially harmful behavior among their subscribers. However, more work is needed to extrapolate these findings to other sources and samples.

Materials and methods

We partnered with two organizations in Zimbabwe to carry out this study. First, we partnered with Internews, an international non-governmental organization (NGO) operating in Zimbabwe. Internews focuses on training and supporting independent media across the world to help provide people with trustworthy and high-quality information. Our second partner, which implemented the study, is Kubatana, a trusted online media civil society organization (CSO) formed in 2001. Kubatana primarily shares information with its subscribers on issues relating to civil and human rights in Zimbabwe through its email, Facebook, Twitter, and WhatsApp channels. The organization began using WhatsApp as a distribution method in 2013. Today, it has over 27,000 WhatsApp subscribers from across the country, divided across 133 WhatsApp broadcast lists. These lists were created based on the month and year of subscription and contain up to 256 subscribers each.

Research design

Each week, our two partner organizations jointly crafted a short WhatsApp message (S1 Appendix). In the first week, the message explained COVID-19's rates of transmission and emphasized the importance of social distancing to lower them. In the second week, the message debunked a viral piece of misinformation on fake cures for COVID-19. Kubatana disseminated the messages through its WhatsApp broadcast lists in English, Shona, and Ndebele, the three main languages in Zimbabwe. In addition, the organization maintained its usual publishing and activity schedule.

To evaluate the messages' effect, we randomized their timing at the WhatsApp broadcast list level. Subscribers in broadcast lists assigned to the treatment condition in a given week were sent the message on Monday, while subscribers in broadcast lists assigned to the control condition were sent the message on Saturday. Between these two days, Kubatana sent two additional messages to its subscribers through WhatsApp. First, between Tuesday and Wednesday, it sent its weekly newsletter. Second, on Thursday, it distributed a short survey designed to test treatment effects on 1) knowledge of the information disseminated in the messages, and 2) behavior relating to social distancing. Respondents could respond to the survey either directly through WhatsApp messages or through the survey platform Qualtrics. Notably, Kubatana disseminated both the messages and the survey without sharing broadcast list information with us, to avoid potential reputational costs in a context where anonymity is highly valued. We therefore had no access to individual identifiers for survey respondents. As we discuss later, this did not affect our results.

This research design has three advantages.
First, by randomizing the timing of each message rather than its dissemination, all WhatsApp subscribers eventually received the important information regardless of their treatment condition. Second, by scheduling Kubatana's weekly newsletter between the message to treated broadcast lists and the survey, we reduced the likelihood that survey respondents would scroll back to a previous message to look up the correct answers to the knowledge-testing questions. Third, by allowing respondents to respond through WhatsApp, we maximized the response rate. In line with our expectations, given mobile data costs in Zimbabwe, the survey response rate was four times higher through WhatsApp than through Qualtrics.

All research was carried out in compliance with local Zimbabwean research standards and was reviewed to be in accordance with standards set forth by the Committee on the Use of Human Subjects at Harvard University. By randomizing the timing of the messaging rather than whether recipients received messages at all, we did not withhold potentially important information from the sample. Further, because the researchers received no identifiable data on the participants and did not interact with any of Kubatana's subscribers directly, the research was granted exemption status.
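The list-level timing randomization can be sketched as follows. This is a minimal illustration with hypothetical list ids; the actual assignment, described under Data, was additionally blocked by list creation date and cross-randomized experimental-list length.

```python
# Minimal sketch of the timing randomization: WhatsApp broadcast lists
# (not individual subscribers) are randomly assigned to receive each
# week's message on Monday (treatment) or Saturday (control).
# List ids here are hypothetical.
import random

def assign_timing(broadcast_lists, seed=2020):
    """Randomly split broadcast lists between Monday and Saturday delivery."""
    rng = random.Random(seed)
    shuffled = broadcast_lists[:]          # copy so the input stays unmodified
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {lst: ("Monday" if i < half else "Saturday")
            for i, lst in enumerate(shuffled)}

lists = [f"list_{i:03d}" for i in range(133)]
timing = assign_timing(lists)
```

Randomizing at the broadcast-list level, rather than the subscriber level, matches how the messages could actually be delivered and motivates the clustered standard errors used in estimation.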

Data

In week 1, Kubatana disseminated the messaging to 13,921 individuals on Monday (treatment condition) and to 13,400 individuals on Saturday (control condition). In week 2, for which treatment assignment was re-randomized, messages were sent to 13,566 individuals on Monday (treatment condition) and 13,755 on Saturday (control condition). This yielded a survey sample comprising 868 respondents over the two weeks: 585 (a 2% response rate) from the first week and 283 (a 1% response rate) from the second week. These response rates are similar to those of other studies in which survey respondents are recruited through social media in sub-Saharan Africa [19]. Respondents are demographically representative of the country's social media users: 55% of our survey respondents are male and 76% live in urban localities, aligning with evidence from nationally representative surveys, which estimate that 59% of frequent social media users in Zimbabwe are male and 69% live in urban areas [20]. Descriptively, a substantial share of respondents report believing in fake cures that have spread prominently through social media: 30% of respondents believe that drinking hot water will cure the virus, and 25% believe that inhaling steam will. S1 Table provides descriptive statistics for the sample.

We evaluate outcomes relating to knowledge and behavior. We measured knowledge using a standardized index, or z-score, of responses to factual questions related to the message sent in a given week. Directly asking about preventative behavior likely induces demand effects or social desirability bias. Each week, we therefore measured behavior using a list experiment, a common technique for estimating the prevalence of sensitive behaviors [21]. Respondents were given a list of activities and asked how many they had performed in the past three days. One version of this list, the short experimental list, comprised four non-sensitive activities. The other version, the long experimental list, used the same four non-sensitive activities and added one sensitive activity: visiting a friend or family member outside of their home during the mandated nationwide COVID-19 lockdown. We randomly assigned respondents to a short or long experimental list at the WhatsApp broadcast list level. Comparing the reported number of activities across respondents assigned to short and long experimental lists within the same treatment condition (i.e., whether they received the message on Monday or Saturday of that week) provides an unbiased estimate of the prevalence of the sensitive activity among respondents in that treatment condition. Comparing this estimate across treatment conditions then provides an estimate of the effect of the intervention on behavior.

Each week, to assign each WhatsApp broadcast list to a treatment condition, we first blocked broadcast lists into groups of four, grouping together lists that had been created around the same time. Then, within each block, we randomly assigned one list to each of the four possible combinations of treatment condition and experimental-list length. Such blocking and within-block randomization is commonly done prior to random assignment to improve the precision of estimated treatment effects by subsequently including block fixed effects in the estimation [22]. In S2 Table, we show that survey response rates and respondent characteristics are balanced across treatment conditions, suggesting an absence of differential selection into survey participation based on treatment assignment.

We estimate treatment effects on knowledge by regressing the z-score index on a treatment indicator. We estimate treatment effects on behavior by regressing the number of activities reported in the list experiment on a treatment indicator, a long-experimental-list indicator, and the interaction between the two. We provide specifications with and without respondent covariates. We include week fixed effects and either randomization block fixed effects or, more demandingly, WhatsApp broadcast list fixed effects. Standard errors are clustered at the WhatsApp broadcast list-week level throughout. Further, we explore subgroup treatment effects by splitting our sample by gender, urban residence, and week of the intervention. We provide additional information on estimation in S4 Appendix. All statistical analyses were conducted in Stata 16; graphics were produced in R.
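In the saturated case, abstracting from the covariates, fixed effects, and clustered standard errors the paper uses, the coefficient on the treatment-by-long-list interaction reduces to a difference in differences of group means. A minimal sketch with hypothetical respondent data:

```python
# Sketch of the list-experiment estimator: within each treatment condition,
# the prevalence of the sensitive behavior is the mean item count on the
# long list minus the mean on the short list; the treatment effect is the
# difference in prevalence between treatment and control.
# Respondent data below are hypothetical.

def prevalence(rows, condition):
    """Estimated sensitive-behavior prevalence for one treatment condition."""
    long_ = [r["count"] for r in rows if r["cond"] == condition and r["list"] == "long"]
    short = [r["count"] for r in rows if r["cond"] == condition and r["list"] == "short"]
    return sum(long_) / len(long_) - sum(short) / len(short)

def treatment_effect(rows):
    """Message effect on the sensitive behavior, in prevalence points."""
    return prevalence(rows, "treatment") - prevalence(rows, "control")

rows = (
    [{"cond": "control",   "list": "short", "count": c} for c in [2, 2, 3, 1]]
  + [{"cond": "control",   "list": "long",  "count": c} for c in [3, 2, 3, 2]]
  + [{"cond": "treatment", "list": "short", "count": c} for c in [2, 3, 2, 1]]
  + [{"cond": "treatment", "list": "long",  "count": c} for c in [2, 2, 3, 1]]
)
effect = treatment_effect(rows)  # -0.5: prevalence is 0.5 in control, 0.0 in treatment
```

Because no individual respondent ever directly admits to the sensitive activity, this design sidesteps the social desirability concerns that direct questions would raise.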

Results

First, we examine the effects of treatment assignment on respondents' knowledge of the information delivered. Fig 1 plots the treatment effects using different permutations of our specifications. The results suggest substantively large effects of the WhatsApp messages on individual knowledge. In the baseline specification with randomization block fixed effects, respondents assigned to a treated WhatsApp broadcast list in a given week report factual knowledge 0.26σ greater than respondents assigned to a control list (p < 0.001). Treatment effects are slightly larger, at 0.45σ, in the specification with WhatsApp broadcast list fixed effects (p < 0.001). These effects correspond to an increase of roughly 7 percentage points, or 12%, in correct responses. Across specifications, results are unchanged by the addition of respondent covariates.
Fig 1

Treatment effects on knowledge.

Estimates of the treatment effect of WhatsApp messages on a standardized index of responses to factual questions that relate to the messages sent. 95% confidence intervals plotted. All specifications include week fixed effects. Standard errors clustered at the week-broadcast list level.
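The standardized knowledge index behind these estimates can be sketched as follows. This is a minimal illustration with hypothetical answer counts; it assumes the common convention of standardizing against the control group's mean and standard deviation, which may differ in detail from the paper's exact construction.

```python
# Sketch of a standardized (z-score) knowledge index: count each
# respondent's correct factual answers, then standardize against the
# control group's mean and standard deviation (an assumed convention).
import statistics

def knowledge_index(counts, control_counts):
    """z-score raw knowledge counts relative to the control group."""
    mu = statistics.mean(control_counts)
    sd = statistics.pstdev(control_counts)
    return [(c - mu) / sd for c in counts]

# Hypothetical numbers of correct answers per respondent.
control = [2, 3, 3, 4, 2, 3]
treated = [3, 4, 4, 5, 3, 4]
z_treated = knowledge_index(treated, control)
# Each treated z-score here is positive, i.e. above the control mean in σ units.
```

On an index like this, a treatment effect of 0.26σ means treated respondents answer, on average, about a quarter of a control-group standard deviation more questions correctly.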

Second, we examine treatment effects on respondents' preventative behavior. Fig 2 plots the treatment effects using different permutations of our specifications. In the baseline specification, among respondents assigned to the control condition, 37% (p < 0.001) did not comply with social distancing; among respondents assigned to the treatment condition, this falls to 7% (p = 0.47). The difference between these estimates is statistically significant (p < 0.05), implying that the WhatsApp messages changed related behavior. Estimated treatment effects are again slightly larger when using WhatsApp broadcast list fixed effects and are robust to the addition of respondent covariates. The magnitudes of these treatment effects are comparable to those of other studies seeking to facilitate healthy behavior in the Global South [23]. Importantly, because we use a list experiment, these treatment effects on behavior cannot be explained by demand effects, social desirability bias, or respondents scrolling back to a previous message to look up the correct answer. The consistency of the behavioral effects with the knowledge effects, which are potentially affected by such biases, bolsters confidence in the results overall.
Fig 2

Treatment effects on behavior.

Estimates of the treatment effect of WhatsApp messages on behavior measured through a list experiment between subscribers in treated and control broadcast lists. 95% confidence intervals plotted. All specifications include week fixed effects. Standard errors clustered at the week-broadcast list level.

Lastly, we examine subgroup treatment effects on the two outcomes in Figs 3 and 4, splitting by gender, rurality, and week of intervention. We find relatively uniform estimated effects across subgroups. Treatment effects on knowledge are larger among women than among men, though not significantly so (p = 0.25), while effects on behavior do not differ between women and men (p = 0.85). We also find similar treatment effects in weeks 1 and 2. S3, S4 and S5 Tables provide the estimated regression coefficients underlying the figures.
Fig 3

Subgroup treatment effects on knowledge.

Estimates of the treatment effect of WhatsApp messages on a standardized index of responses to factual questions that relate to the messages sent. 95% confidence intervals plotted. All specifications include randomization block fixed effects and (apart from by-week estimates) week fixed effects. Standard errors clustered at the week-broadcast list level.

Fig 4

Subgroup treatment effects on behavior.

Estimates of the treatment effect of WhatsApp messages on behavior measured through a list experiment between subscribers in treated and control broadcast lists. 95% confidence intervals plotted. All specifications include randomization block fixed effects and (apart from by-week estimates) week fixed effects. Standard errors clustered at the week-broadcast list level.


Discussion

In sum, our results indicate encouraging positive changes in knowledge and behavior among the WhatsApp subscribers of a trusted source. While WhatsApp has been identified as a platform through which misinformation spreads easily, we show that trusted CSOs can also leverage WhatsApp's reach to get individuals to reassess their misconceptions and correct related behavior. The effect is roughly similar across the urban-rural and gender divides, highlighting the power of WhatsApp messages from a trusted source to counter misinformation. These findings thus stress the potential of CSOs in sub-Saharan Africa to fight misinformation. They further highlight the similar role that other WhatsApp newspapers in the region might play (e.g., The Continent in South Africa and 263Chat in Zimbabwe).

The study's context and findings contribute to recent work on the effectiveness of messages to correct misinformation across issues ranging from health to politics [15, 18, 24]. These studies present mixed findings and are particularly negative with respect to vaccination campaigns [16, 17]. However, most of them provide evidence from lab and online experiments in the Global North; far fewer studies take place in the Global South. Similarly, there is a dearth of field-experimental evidence in this context, which is likely to be most informative for scaling up related policies [25, 26]. Our positive findings from a field experiment in Zimbabwe suggest that there are high returns to correcting misinformation, especially surrounding ongoing health crises where people are uncertain and seeking information [7, 27, 28]. Our results may deviate from those in prior scholarship in part due to the population that we study: individuals who have already self-selected into receiving information from a human rights NGO.
While the sample appears demographically similar to the broader population of social media users in Zimbabwe, the subscribers are likely already receptive to information delivered by Kubatana, so one should be cautious when extrapolating the treatment effects we find to other samples. However, our sample represents an important and growing population in the developing world: individuals who are exposed to misinformation through social media but also seek out independent, credible sources of information through well-established NGOs. As part of our ongoing surveying efforts in Zimbabwe, we asked respondents which sources of COVID-19 information they trust most. Descriptively, we find that citizens are most likely to trust an international organization first, followed closely by local NGOs or CSOs, and third by a message that mentions a news source (see Fig 5).
Fig 5

Trusted sources of information about COVID-19.

Respondents were asked to select up to three sources of information that they trust most on WhatsApp to deliver information about COVID-19.

In conjunction with the experimental results presented above, this evidence suggests that a trusted source of information can use the same social media channels to disseminate information that both combats misinformation and changes related behavior. Thus, while we caution against generalizing our results to the general public in Zimbabwe, they speak specifically to the important role that trusted sources play, particularly in confusing informational environments such as health crises [29] and in an authoritarian context where trust in information may be low [30]. Existing scholarship emphasizes the importance of how information is framed [31] and of the credibility of the information source for the recipient [32]. During the COVID-19 pandemic, identifying and disseminating correct information represents an important challenge. While fact-checking can contribute to a source's credibility [33], particularly during emergencies it can be outpaced by the spread of misinformation through social media [34, 35].

Future research should consider how best to integrate social media messaging aimed at countering misinformation into CSOs' ongoing programming while highlighting its relative importance. During the study, Kubatana's WhatsApp messaging increased threefold, from one WhatsApp message a week. Even after just two weeks, the organization reported four unsubscriptions, a number that, while low, is highly unusual for it. Moreover, in the second week, there was a 50% drop in survey responses relative to the first week. Additional work on how to maximize the benefits of such messaging without inducing disengagement will be important for devising a sustainable way to counter misinformation in the Global South.

Supporting information

Messages. (PDF)

Coding decisions. (PDF)

Survey questions used. (PDF)

Estimation. (PDF)

Summary statistics. (PDF)

Balance. (PDF)

Knowledge. (PDF)

Behavior. (PDF)

Outcomes by week. (PDF)
If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access. Kind regards, PLOS ONE Editorial Office Staff on behalf of Dr. Khin Thet Wai Academic Editor PLOS ONE
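Reviewer #2's questions concern the treatment/control comparison, and the behavioral result in the abstract comes from a list experiment. As a minimal illustration of how such a design identifies a sensitive behavior without any respondent revealing it — using simulated data and assumed parameters, not the study's actual data or code — the estimator reduces to a difference in mean item counts between arms:

```python
# Hypothetical sketch (not the authors' code): the difference-in-means
# estimator behind a list experiment. All numbers below are simulated
# assumptions, not figures from the paper.
import random
import statistics

random.seed(0)
N = 10_000          # simulated respondents per arm (assumption)
J = 3               # number of innocuous list items (assumption)
TRUE_RATE = 0.30    # simulated prevalence of the sensitive behavior

# Control respondents report how many of J innocuous statements apply to
# them; treatment respondents see the same list plus the sensitive item.
control = [random.randint(0, J) for _ in range(N)]
treatment = [random.randint(0, J) + (random.random() < TRUE_RATE)
             for _ in range(N)]

# Because only counts are reported, no individual admits the sensitive
# behavior; its prevalence is identified by the gap in mean counts.
prevalence = statistics.mean(treatment) - statistics.mean(control)
print(f"estimated prevalence: {prevalence:.3f}")  # close to TRUE_RATE
```

With large arms the estimate converges on the simulated rate; the cost of the indirect question is a noisier estimate than direct questioning, which is why the arm sizes the reviewer asks about matter.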
References (16 in total)

1.  See Something, Say Something: Correction of Global Health Misinformation on Social Media.

Authors:  Leticia Bode; Emily K Vraga
Journal:  Health Commun       Date:  2017-06-16

2.  In Congo, fighting a virus and a groundswell of fake news.

Authors:  Laura Spinney
Journal:  Science       Date:  2019-01-18       Impact factor: 47.728

3.  Institutional trust and misinformation in the response to the 2018-19 Ebola outbreak in North Kivu, DR Congo: a population-based survey.

Authors:  Patrick Vinck; Phuong N Pham; Kenedy K Bindu; Juliet Bedford; Eric J Nilles
Journal:  Lancet Infect Dis       Date:  2019-03-27       Impact factor: 25.071

4.  Comparing the behavioural impact of a nudge-based handwashing intervention to high-intensity hygiene education: a cluster-randomised trial in rural Bangladesh.

Authors:  Elise Grover; Mohammed Kamal Hossain; Saker Uddin; Mohini Venkatesh; Pavani K Ram; Robert Dreibelbis
Journal:  Trop Med Int Health       Date:  2017-12-01       Impact factor: 2.622

5.  Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation.

Authors:  Man-Pui Sally Chan; Christopher R Jones; Kathleen Hall Jamieson; Dolores Albarracín
Journal:  Psychol Sci       Date:  2017-09-12

6.  Effective messages in vaccine promotion: a randomized trial.

Authors:  Brendan Nyhan; Jason Reifler; Sean Richey; Gary L Freed
Journal:  Pediatrics       Date:  2014-03-03       Impact factor: 7.124

7.  Misinformation lingers in memory: Failure of three pro-vaccination strategies.

Authors:  Sara Pluviano; Caroline Watt; Sergio Della Sala
Journal:  PLoS One       Date:  2017-07-27       Impact factor: 3.240

8.  Correcting misinformation by health organizations during measles outbreaks: A controlled experiment.

Authors:  Anat Gesser-Edelsburg; Alon Diamant; Rana Hijazi; Gustavo S Mesch
Journal:  PLoS One       Date:  2018-12-19       Impact factor: 3.240

9.  Building trust while influencing online COVID-19 content in the social media world.

Authors:  Rupali Jayant Limaye; Molly Sauer; Joseph Ali; Justin Bernstein; Brian Wahl; Anne Barnhill; Alain Labrique
Journal:  Lancet Digit Health       Date:  2020-04-21

10.  How to fight an infodemic.

Authors:  John Zarocostas
Journal:  Lancet       Date:  2020-02-29       Impact factor: 79.321

Related articles (9 in total)

1.  Diversity in health care institutions reduces Israeli patients' prejudice toward Arabs.

Authors:  Chagai M Weiss
Journal:  Proc Natl Acad Sci U S A       Date:  2021-04-06       Impact factor: 11.205

Review 2.  COVID-19 Misinformation on Social Media: A Scoping Review.

Authors:  Andrew M Joseph; Virginia Fernandez; Sophia Kritzman; Isabel Eaddy; Olivia M Cook; Sarah Lambros; Cesar E Jara Silva; Daryl Arguelles; Christy Abraham; Noelle Dorgham; Zachary A Gilbert; Lindsey Chacko; Ram J Hirpara; Bindu S Mayi; Robin J Jacobs
Journal:  Cureus       Date:  2022-04-29

Review 3.  Mis-Dis Information in COVID-19 Health Crisis: A Narrative Review.

Authors:  Vicente Javier Clemente-Suárez; Eduardo Navarro-Jiménez; Juan Antonio Simón-Sanjurjo; Ana Isabel Beltran-Velasco; Carmen Cecilia Laborde-Cárdenas; Juan Camilo Benitez-Agudelo; Álvaro Bustamante-Sánchez; José Francisco Tornero-Aguilera
Journal:  Int J Environ Res Public Health       Date:  2022-04-27       Impact factor: 4.614

4.  Medical Mistrust and Stigma Associated with COVID-19 Among People Living with HIV in South Africa.

Authors:  Jana Jarolimova; Joyce Yan; Sabina Govere; Nompumelelo Ngobese; Zinhle M Shazi; Anele R Khumalo; Bridget A Bunda; Nafisa J Wara; Danielle Zionts; Hilary Thulare; Robert A Parker; Laura M Bogart; Ingrid V Bassett
Journal:  AIDS Behav       Date:  2021-05-17

5.  Use of bot and content flags to limit the spread of misinformation among social networks: a behavior and attitude survey.

Authors:  Candice Lanius; Ryan Weber; William I MacKenzie
Journal:  Soc Netw Anal Min       Date:  2021-03-12

6.  Debunking highly prevalent health misinformation using audio dramas delivered by WhatsApp: evidence from a randomised controlled trial in Sierra Leone.

Authors:  Maike Winters; Ben Oppenheim; Paul Sengeh; Mohammad B Jalloh; Nance Webber; Samuel Abu Pratt; Bailah Leigh; Helle Molsted-Alvesson; Zangin Zeebari; Carl Johan Sundberg; Mohamed F Jalloh; Helena Nordenstedt
Journal:  BMJ Glob Health       Date:  2021-11

7.  Analysing 3429 digital supervisory interactions between Community Health Workers in Uganda and Kenya: the development, testing and validation of an open access predictive machine learning web app.

Authors:  James O'Donovan; Ken Kahn; MacKenzie MacRae; Allan Saul Namanda; Rebecca Hamala; Ken Kabali; Anne Geniets; Alice Lakati; Simon M Mbae; Niall Winters
Journal:  Hum Resour Health       Date:  2022-03-16

Review 8.  A scoping review of COVID-19 online mis/disinformation in Black communities.

Authors:  Janet Kemei; Dominic A Alaazi; Mia Tulli; Megan Kennedy; Modupe Tunde-Byass; Paul Bailey; Ato Sekyi-Otu; Sharon Murdoch; Habiba Mohamud; Jeanne Lehman; Bukola Salami
Journal:  J Glob Health       Date:  2022-07-23       Impact factor: 7.664

9.  The importance of social media users' responses in tackling digital COVID-19 misinformation in Africa.

Authors:  Ruth Stewart; Andile Madonsela; Nkululeko Tshabalala; Linda Etale; Nicola Theunissen
Journal:  Digit Health       Date:  2022-03-18