Literature DB >> 33882102

A guilt-free strategy increases self-reported non-compliance with COVID-19 preventive measures: Experimental evidence from 12 countries.

Jean-François Daoust1, Éric Bélanger2, Ruth Dassonneville3, Erick Lachapelle3, Richard Nadeau3, Michael Becher4, Sylvain Brouard5, Martial Foucault5, Christoph Hönnige6, Daniel Stegmueller7.   

Abstract

Studies of citizens' compliance with COVID-19 preventive measures routinely rely on survey data. While such data are essential, public health restrictions provide clear signals of what is socially desirable in this context, creating a potential source of response bias in self-reported measures of compliance. In this research, we examine whether the results of a guilt-free strategy recently proposed to lessen this constraint are generalizable across twelve countries, and whether the treatment effect varies across subgroups. Our findings show that the guilt-free strategy is a useful tool in every country included, increasing respondents' proclivity to report non-compliance by 9 to 16 percentage points. This effect holds for different subgroups based on gender, age and education. We conclude that the inclusion of this strategy should be the new standard for survey research that aims to provide crucial data on the current pandemic.

Entities:  

Mesh:

Year:  2021        PMID: 33882102      PMCID: PMC8059824          DOI: 10.1371/journal.pone.0249914

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

In the fight against the spread of the COVID-19 virus, research that aims to explain compliance with public health preventive measures is of utmost importance. The severity of virus activity is, in no small part, a function of citizens’ behaviours [1, 2]. Therefore, much research has focused on understanding who complies, and what the socio-demographic and attitudinal correlates of (non)compliance are. Answering these questions is critical, helping governments and public health agencies to gather reliable data on compliance with preventive measures (e.g., social distancing, the use of face masks, etc.). A great deal of this data comes from survey research.

The need for high quality data on compliance with public health measures has led some researchers to investigate the reliability of survey data more closely. In particular, some studies have examined the possibility that public health restrictions in the pandemic context have created common social norms for behaviours that are valued (e.g., social distancing). In turn, these norms create an incentive for respondents to under-report behaviours that are proscribed (e.g., social gatherings). The resulting social desirability bias can considerably affect the quality of data used by policy-makers and public health officials in their decision-making, which is problematic.

Both Larsen et al. [3] and Munzert and Selb [4] considered the possibility that citizens’ self-reported compliance with public health measures could be influenced by social desirability bias. These two single-country studies relied on a list experiment approach (also known as the unmatched item count technique). Reassuringly, they failed to detect a social desirability bias in the self-reported behaviour of Danish [3] and German [4] citizens. Findings from these list experiments, however, stand in contrast to the results of a recent study by Daoust et al. 
[5], which tested different “face-saving” (or guilt-free) strategies designed to loosen the social norm of compliance with public health measures in the context of three surveys conducted in Canada. The goal was to reduce social desirability in respondents’ answers. When exposed to a short preamble (referring to the fact that some people have altered their behaviours since the beginning of the pandemic, while others have continued to pursue various activities) combined with guilt-free answer choices (including “only when necessary”), respondents in the Daoust et al. [5] study were substantially more likely than respondents from the control group to report non-compliant behaviour in the context of the COVID-19 pandemic. This suggests that there is a social desirability bias in citizens’ self-reported behaviour when no guilt-free option is provided. While promising, these results are based on a single country, i.e., Canada, and stand in contrast to the Danish and German studies cited above [3, 4]. As a result, we cannot be sure whether or not social desirability bias in reporting compliance with public health guidelines is particularly problematic in Canada, nor do we know how effective the guilt-free strategy is at reducing such bias beyond the Canadian case. In this research, we extend the most effective face-saving strategy identified by Daoust et al. [5] to twelve countries. In doing so, we examine whether the results are generalizable beyond Canada and across different contexts. In addition, we test whether the impact of the face-saving (guilt-free) answer option is homogeneous across different subsets of the population to assess potential differential effects conditional on individual characteristics [6, 7]. To preview our results, we show that the guilt-free strategy is a very useful tool in every country examined, increasing respondents’ proclivity to report non-compliance by 9 to 16 percentage points. This effect holds across gender, age and education subgroups. 
We conclude that this method should become the new standard for survey research of citizens’ compliance with COVID-19 preventive measures, ultimately providing higher-quality data to governments and public health agencies.

Measuring citizens’ compliance with COVID-19 preventive measures

Social desirability bias has been a problem in survey research since well before the COVID-19 pandemic, and researchers have addressed the issue in several ways. Some work suggests that survey mode is an important factor, as the presence of a live interviewer can create greater incentives for respondents to provide more socially desirable answers. A meta-analysis on the topic suggests that an online mode is the best way to allow people to report undesirable behaviour [8]. But even with self-administered online surveys (i.e., without a live interviewer), respondents may still feel some incentive to provide what they deem to be socially desirable answers to survey questions. Survey researchers are therefore interested in developing additional methodological tools to reduce social desirability bias and obtain more accurate estimates of undesirable behaviour [9].

Focusing on the current pandemic context, some of the first experimental studies of potential social desirability bias found limited evidence of this bias when employing the list experimental technique [3, 4]. Munzert and Selb [4] report a difference in the prevalence of not following social distancing between treated and untreated respondents of 6 percentage points that is "barely significant at conventional levels due to the vast measurement error of the list estimate" (p. 206). They also find that the prevalence of the undesirable behavior is higher when using a multi-valued response option compared to either a simple yes-no question or the list experimental estimate. This indicates that dichotomous direct questions should not be taken at face value. Discussing these findings, Munzert and Selb [4] suggest that face-saving strategies might be a "valuable alternative" (p. 207). In this study, we provide an experimental test of such a face-saving (or guilt-free) strategy across countries, and our results strengthen this conclusion. 
The objective of the face-saving strategy is to reduce social desirability in respondents’ answers by adding a (short) preamble and one or more guilt-free response options. We argue that such question and response wording can effectively loosen the norm around a desirable response and make it more acceptable for respondents to admit non-compliance with socially prescribed (and even mandated) behaviours. Such an approach has been applied to a wide range of topics. For example, political scientists have used it to study voter turnout [10, 11] where there is a clear norm that voting is the right thing to do. It has also proven to be useful to study illicit behavior like shoplifting [12], sexual/health behaviour such as using a condom during sexual intercourse [13, 14], or consumer choice in market research [15].

In the current context of the COVID-19 pandemic where it is crucial to understand citizens’ level of compliance with preventive measures, Daoust et al. [5] applied the face-saving strategy to analyze the potential for a social desirability bias to affect survey responses obtained in Canada. This study provided evidence that face-saving strategies can increase the proportion of citizens who self-report non-compliance with a range of public health preventive measures in Canada. They argue that this increase in self-reported non-compliance is a consequence of reduced social desirability. While there is no objective benchmark to compare these estimates, they substantiate this claim by showing that a similar increase in self-reported non-compliance is not observed when the same face-saving strategy is applied to a series of placebo behaviours that are not prohibited (e.g., grocery shopping). While promising, the results from Daoust et al. [5] suffer from a key limitation: Their focus on a single case, that is, Canada in April 2020. In this paper, we address this issue by implementing Daoust et al.’s [5] approach in twelve different countries. 
By doing so, we can ascertain whether results are specific to a single context and time period, or whether the effectiveness of a face-saving answer option to reduce social desirability in self-reported compliance with public health guidelines applies more generally. In the next section, we detail our data and how we implemented the face-saving strategy.

Data and indicators

We ran a face-saving experiment (labeled a ‘guilt-free strategy’ throughout the rest of the study) in twelve countries: Australia, Austria, Brazil, France, Germany, Italy, New Zealand, Poland, Spain, Sweden, the United Kingdom and the United States. After outlining the study’s details, we obtained respondents’ written consent to participate in the survey. The data were analyzed anonymously, and ethical approval for the project "Citizens’ Attitudes Under COVID-19 Pandemic" was received from TSE-IAST (Toulouse School of Economics). The online surveys were conducted by three different data collection firms: IPSOS (for all countries except Australia, the United States and Spain), CSA (for Australia and the United States) and Netquest (for Spain). While these different platforms do not entail major differences, having different firms involved in data collection reduces the risk of bias due to potential “house effects.” Data collection mainly occurred in mid-June 2020 within a period of a few days (maximum five), producing a nationally representative sample of about 1,000 respondents in most of the countries. In France, Germany and the USA, the sample size is about 2,000 respondents; in these cases, half of the sample answered the questions used in this research. At the time, the countries in the study experienced different levels of infection and death rates [16], ranging from a low in Australia and New Zealand (with less than 0.5 deaths per 100 000 inhabitants) to a high in the United Kingdom (with more than 59 deaths per 100 000 inhabitants). Although the countries in our sample were similarly influenced by public health guidelines established by global health authorities, the timing, stringency and details of the public health measures adopted to combat the pandemic varied strongly across our cases, from Sweden (the least stringent) to New Zealand (the most stringent). 
Moreover, the countries in our sample also reflect different levels of issue politicization, with relatively more politicization of the pandemic response in Brazil and the USA relative to the other countries in our sample. Table A.1 of the S1 File lists the exact dates of data collection in each country, as well as the corresponding number of observations. Moreover, Table A.2 in S1 File shows the population per country and the death rates (per 100 000 inhabitants) as of June 15th, while Table A.3 in S1 File provides an overview of the preventive measures that were recommended and compulsory in each country.

Here, we make use of the most effective face-saving strategy identified by Daoust et al. [5], which is also the strategy they recommend for future research (their Study 3). Extending this work to other countries, half of each national sample was randomly assigned to a direct question while the other half received the face-saving treatment. The direct question was: “Have you done any of the following activities in the last week?” followed by a set of four items and yes/no answer choices (skipping the question was possible but less than 0.5% did so). The face-saving question preamble was: “Some people have altered their behaviour since the beginning of the pandemic, while others have continued to pursue various activities. Some may also want to change their behaviour but cannot do so for different reasons. Have you done any of the following activities in the last week?” Respondents in the treatment group received the answer options yes/occasionally/only when necessary/no. The first three answer choices indicated (and were coded as) non-compliance with the items. Of these three options, ‘occasionally’ and ‘only when necessary’ were the guilt-free answer choices. 
The four items, displayed in random order, were:

- Go shopping or take public transportation without a face mask or taking it off during it
- Meet friends, family or colleagues greeting them by shaking hands, hugging or kissing
- Have a group of friends or family over at your place
- Participate in social activities (work, sport, religious ceremony…) without respecting physical distancing

These items refer to behaviours that are crucial to minimizing the spread of the disease among the population, that is, wearing a face mask and practicing various forms of social distancing [17, 18]. Moreover, greeting people by shaking hands, hugging or kissing was clearly not recommended, while hosting a gathering at one’s place was allowed though not without some level of risk (e.g., Centers for Disease Control and Prevention, see the “Hosting gatherings or cook-outs” section [19]). In a second step, we explore whether the effects of the treatments are heterogeneous. We consider respondents’ gender (female or male), age (treated as linear, from 18 to 91), and their level of education. Education was measured using different categories in each country given their different educational systems. We use a binary “university graduate” variable to model the effect of having obtained a university degree in each country. We also provide individual analyses for each country. Descriptive statistics summarizing the extent to which preventive measures were followed in each of the twelve countries can be found in Fig B.1 in S1 File, while descriptives for gender, age and education are reported in Appendix B of the S1 File. Moreover, Fig B.2 in the appendix of S1 File distinguishes between the two guilt-free answer choices (“Occasionally” and “Only when necessary”), showing that there are no substantial differences between them, with the only two exceptions being Brazil and Spain, where non-compliers tend to prefer the “only when necessary” answer choice.
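The coding scheme described above, where any of the first three answer options counts as non-compliance, can be sketched as follows. This is an illustrative reconstruction, not code from the study's replication materials; the function and variable names are our own.

```python
# Hypothetical sketch of coding treatment-group answers into a binary
# non-compliance indicator: 'yes', 'occasionally' and 'only when necessary'
# all indicate non-compliance; only 'no' indicates compliance.
NON_COMPLIANT = {"yes", "occasionally", "only when necessary"}

def code_non_compliance(answer: str) -> int:
    """Return 1 if the answer indicates non-compliance, 0 otherwise."""
    return 1 if answer.strip().lower() in NON_COMPLIANT else 0

responses = ["Yes", "No", "Only when necessary", "Occasionally"]
coded = [code_non_compliance(r) for r in responses]
# coded == [1, 0, 1, 1]
```

Collapsing the two guilt-free options together with "yes" makes the treatment-group outcome directly comparable to the control group's yes/no item.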

Results

Our main goal is to ascertain whether providing a short preamble and guilt-free answer options increases citizens’ likelihood of reporting non-compliance with important public health measures like mask-wearing and social distancing. To shed light on this question, we analyze the data separately for the four items and the twelve countries. We thus provide a complete picture of the experimental effects and avoid pooling to ensure that the results are not driven by certain items or countries. Fig 1 displays the proportion of non-compliers with the preventive measures for the control (direct question) and treatment (face-saving) groups. The treatment group is depicted in light grey, while the dark grey bars indicate levels of self-reported non-compliance in the control condition. With four items in twelve countries, Fig 1 plots a total of 48 effects of interest.
Fig 1

Non-compliance in control and treatment (face-saving) groups.

Note: Means of non-compliance are shown with 95% confidence intervals included. ATE = Average treatment effect.

As is clear from visual inspection of the graphs, the means of non-compliance are systematically higher in the face-saving group (the light grey bars). More precisely, the impact of the treatment is positive in 47 of 48 cases (the single exception being Austria for the face mask item). Substantively speaking, the average treatment effect (averaged across all countries) of receiving the treatment ranges from 9 (face mask) to 16 percentage points (hosting at home). Greeting people with non-recommended behaviours and not respecting physical distancing show an ATE of 12 and 14 percentage points, respectively. While we prefer to focus on the substantive effect, we note that in most cases (45 out of 47 positive effects), the differences are significant at p < .05 (based on two-sided t-tests). Given the number of tests that we performed, we implemented the Romano-Wolf correction [20, 21]. Tables D.2-D.5 in S1 File show that the differences between the treatment and control groups are statistically significant (in 45 cases) even when using Romano-Wolf p-values.

Does the impact of the face-saving treatment differ across various subgroups of the population? In an exploratory fashion, we examine potential effect heterogeneity due to gender, age and education, which are known to be linked to compliance [5, 6, 22]. For this analysis, we rely on a pooled dataset that includes data from all countries. Fig 2 plots the coefficients from logistic regressions across different subgroups. The full regression outputs can be found in Tables C.1-C.3 of Appendix C in S1 File. The results are quite clear: Based on the evidence, we cannot reject the null hypothesis of no moderation effect. The interaction coefficients never reach statistical significance at p < .05. 
The direction of the effects is split for gender; three out of four coefficients are negative for age, and three of four are positive for education.
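The core estimand here, the ATE as a difference in non-compliance proportions between the face-saving and direct-question groups, can be illustrated with a small simulation. The data below are simulated, not the study's data, and the large-sample two-proportion z-test is used as a stand-in for the paper's two-sided t-test (the two are asymptotically equivalent for binary outcomes).

```python
# Simulated illustration of the ATE computation and significance test.
# True rates (10% vs 22%) are arbitrary choices in the ballpark of the
# effects reported in the paper, not estimates from its data.
import math
import random

random.seed(1)
control = [1 if random.random() < 0.10 else 0 for _ in range(500)]  # direct question
treated = [1 if random.random() < 0.22 else 0 for _ in range(500)]  # face-saving version

p_c = sum(control) / len(control)
p_t = sum(treated) / len(treated)
ate = p_t - p_c  # difference in non-compliance proportions

# Two-proportion z-test with a pooled variance estimate
p_pool = (sum(control) + sum(treated)) / (len(control) + len(treated))
se = math.sqrt(p_pool * (1 - p_pool) * (1 / len(control) + 1 / len(treated)))
z = ate / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

print(f"ATE = {ate * 100:.1f} percentage points, z = {z:.2f}, p = {p_value:.4f}")
```

In the paper this comparison is run separately for each of the 48 country-item pairs, with Romano-Wolf corrected p-values to account for multiple testing.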
Fig 2

Interaction coefficients for the guilt-free treatment, by gender, age and education.

Note: The values of the interaction coefficients estimated in Tables C.1-C.3 of the S1 File are shown, with 95% confidence intervals included.

All in all, we find that the face-saving strategy is effective. In total, 47 out of 48 effects are positive, and the effects are substantive, ranging from 9 to 16 percentage points. Moreover, the impact of the treatment is fairly homogeneous. As shown in Fig 2, it does not seem to be conditioned by individuals’ gender, age or level of education.

We made sure that our conclusions were robust in several ways. First, although we should be cautious about randomization checks in an experimental context [23], we verified that both control and treatment groups were similar in terms of age, gender and education. The average age across treated and untreated respondents was identical (at 47 years of age); 52% of respondents in the control group were women compared to 51% in the treatment group, and means for the education variable were the same at .60. Second, using weights for age, gender, education and region did not alter our conclusions. More specifically, we replicated Figs 1 and 2 for the weighted dataset (see Figs D.1 and D.2 in the S1 File). One interaction out of twelve reached p < .05, that is, the interaction between age and treatment for the ‘no mask’ behaviour. Focusing on the coefficients rather than the p-values, our findings are very similar. Third, we made sure that results from our tests of heterogeneous treatment effects were not driven by unobserved characteristics of particular countries by estimating the effects with a model that included country fixed effects. Figs D.3 and D.4 in S1 File replicate the results of this test and lead to essentially the same conclusion. Fourth, we tackle the possibility that our results are in fact a “false positive” [24]. 
In a nutshell, our experimental design is based on the assumption that the differences in the proportion of self-reported non-compliance between the direct and face-saving questions are related to a reduced incentive to report socially desirable behaviours. While we think this is a very plausible assumption, we also examine the possibility that another mechanism might explain our results (for example, the mere change in the number of response options) using a Canadian survey that was in the field during the same period as the twelve surveys examined here. In this Canadian survey, we used the same face-saving strategy but with a broader battery of items, 8 instead of 4. This larger battery included behaviours that were not officially prohibited, i.e., where there should be much less social desirability. In panel A of Fig D.4 in S1 File, we show that there is a strong effect of the treatment (about 10 percentage points) for behaviours that the government prescribed, such as wearing a face mask in public, and that this effect is much weaker in panel B (average of about 3 percentage points) for behaviours that perhaps entail a risk, but that the government did not proscribe, such as taking public transportation or shopping for non-essential products. While we do not have these ‘placebo’ items for the twelve countries included in the analyses, this result for Canada is reassuring and increases our confidence that the greater proportion of self-reported non-compliance in the face-saving group is not a false positive. Fifth, we also check whether or not a potential experimenter demand effect [25] might drive the results using data from a survey experiment in France. The design of the experiment is presented in the S1 File. Beyond both question formats analyzed earlier, we rely on a list experiment (and a question used by Munzert & Selb [4]) to estimate the levels of compliance with preventive behaviours from alternative measurement strategies. 
Results are shown in Fig D.5 of the S1 File. The list experiment approach was designed to reduce social desirability bias and is usually employed for this purpose [26]. Most importantly, the risk that there would be an experimenter demand effect is very limited. Overall, even if this experiment only includes one country, using estimates from the list experiment as a baseline yields no evidence of an experimenter demand effect associated with the guilt-free question format that would systematically bias our estimates.
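The list-experiment baseline used above rests on a simple difference-in-means estimator: the control group counts how many of J innocuous items apply to them, the treatment group counts over the same list plus the sensitive item, and the difference in mean counts estimates the prevalence of the sensitive behaviour. A minimal sketch with made-up counts (J = 3 innocuous items; the data are illustrative only):

```python
# Difference-in-means estimator for the list (unmatched item count) experiment.
# Each respondent reports only a total count, never which items apply,
# which is what shields the sensitive answer from social desirability.
from statistics import mean

control_counts = [1, 2, 0, 2, 1, 3, 1, 2]  # counts over 3 innocuous items
treated_counts = [2, 3, 1, 2, 2, 3, 2, 3]  # same 3 items + the sensitive item

prevalence = mean(treated_counts) - mean(control_counts)
print(f"Estimated prevalence of the sensitive behaviour: {prevalence:.2f}")
# -> 0.75 with these illustrative counts
```

The estimator's large variance, noted by Munzert and Selb [4], is the price of this indirection, which is why the paper treats the list estimate as a noisy benchmark rather than a primary measure.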

Discussion and implications

West et al. [27: 451] argued that “there is an urgent need to develop and evaluate interventions to promote effective enactment of these behaviours and provide a preliminary analysis to help guide this.” We agree with the authors and address the issue of social desirability and its impact on self-reported compliance with COVID-19 public health guidelines. While the work of Daoust et al. [5] on this topic showed that a face-saving strategy is a promising approach to attenuate social desirability, evidence was lacking on whether this strategy was effective beyond Canada. In this research, we tested the face-saving strategy using a survey experiment in twelve countries to examine the benefits of this approach. We replicated the findings of Daoust et al. [5] and, most importantly, did so in a diverse set of contexts, with different countries and political systems, in different stages of the pandemic (deconfinement in most countries), with different levels of infections, etc. Based on four public health preventive measures related to the wearing of face masks, greetings, hosting gatherings at one’s home, and social distancing, we found that the face-saving strategy increased the proportion of citizens who readily answer that they did not comply in 47 out of 48 cases. Most importantly, the effects were substantial. They ranged from 9 percentage points (the mask item) to 16 percentage points (hosting at home) and are robust to several additional tests. Given that there are no readily available, objective benchmarks for the four preventive measures analyzed in our study, we acknowledge that there is no way to compare our estimates with external measures. However, our experimental design and extensive robustness checks bolster the conclusion that the differences we find can be at least partly attributed to a social desirability bias. 
There have already been major advances in the development of observational measures of citizens’ behaviour (i.e., not from survey data). Among others, France has recently used cameras in its subway stations to quantify the proportion of people who wear a mask when using public transportation, and several countries are developing applications to track the inter-regional movement of their residents [28]. While useful, we cannot rely solely on behavioral data in the fight against the pandemic. First, observational measures are not available for several important preventive public health measures, as many behaviours cannot be observed in public, such as respecting social distancing when hosting a gathering at one’s private home. Second, even behavioral data like that obtained from cameras or tracing applications have some major drawbacks. Most importantly, due to technical and privacy constraints, this approach does not easily provide researchers with important auxiliary information about who complies and what makes people comply or not. For these reasons, we believe that survey research remains a crucial complement to other data sources. In summary, policymakers and public health experts require survey data, and as a result, we should aim for data that is of the best possible quality. Our research confirms that using a guilt-free strategy is an effective approach and is relevant to anyone who aims to provide data on citizens’ compliance with COVID-19 preventive measures. This type of data is crucial for governments and public health agencies to make enlightened decisions. Moreover, as the strategy simply implies the addition of a very short preamble and guilt-free answer choices, there are very limited additional costs involved in implementing this method compared to a direct question. 
While replications would be welcome to strengthen the validity of the approach, we believe that our comparative research provides a firm ground for what should become the standard when measuring citizens’ compliance with public health preventive measures.
When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript: A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols We look forward to receiving your revised manuscript. Kind regards, Federica Maria Origo Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2.Please provide additional details regarding participant consent. 
In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information. If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information Once you have amended this/these statement(s) in the Methods section of the manuscript, please add the same text to the “Ethics Statement” field of the submission form (via “Edit Submission”). For additional information about PLOS ONE ethical requirements for human subjects research, please refer to http://journals.plos.org/plosone/s/submission-guidelines#loc-human-subjects-research. 3.In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. 
Upon re-submitting your revised manuscript, please upload your study's minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes
Reviewer #3: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper is written in a clear way and addresses an important question. Here are some comments:

- The authors write (page 12), "among several important subgroups of the population including gender, age and education". This implies that subgroups other than gender, age and education have been checked, but there is no evidence of that in the paper. Please clarify.

- The authors write (page 11), "The results for potential heterogeneous effects across subgroups based on gender, age and education are quite clear: overall there is no substantial moderation effect. The interaction coefficients never reach statistical significance at p<.05." Then, at page 12, they write, referring to the weighted dataset, "One interaction reaches p<.05 for age but the single significant interaction for education now fails to pass that threshold". The two sentences appear to be in contradiction with each other.

- In the discussion, the authors should go back to the results by Munzert and Selb (2020) on Germany and discuss why the findings differ.

- My main concern about the methodology adopted by the authors is that they compare a treatment with 2 options (yes/no) to a face-saving treatment with 4 options (yes/occasionally/only when necessary/no), where 3 of those options (yes/occasionally/only when necessary) correspond to the single "yes" option in the direct question. The issue is that, if a subset of the sample answers randomly, then this will inflate the positive answers in the face-saving treatment, as there is a 75% probability that a random answer is "yes/occasionally/only when necessary" vs a 50% probability that a random answer is "yes" in the direct question. This would generate spurious evidence of social desirability bias. The authors should address this concern.

Reviewer #2: This paper tackles the important issue of correcting for social desirability when eliciting self-reported preventive measures to prevent the spread of COVID-19. The paper is clearly written, concise, and well executed. The data collection effort across 12 countries is impressive and well harmonized, providing interesting comparisons in both average reported behaviours and the response to the "treatment". The topic is certainly relevant for public policy and very current. I have one major concern (the "face-saving" treatment might correct for social desirability but could introduce experimenter demand effects), a minor concern related to inference, and a few minor comments and suggestions, which I outline below in bullet points.
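Reviewer #1's random-responder concern can be illustrated with a short simulation (a minimal sketch added for context, not part of the review; the share of random responders and the true rate of the behaviour are assumed values): if some respondents pick any option uniformly at random, the 4-option question mechanically records more yes-type answers than the binary one, even when true behaviour is identical in both arms.

```python
import random

random.seed(1)

def simulate(n=100_000, p_random=0.2, p_true_yes=0.6):
    """Share of yes-type answers in each arm when some respondents answer at random.

    Attentive respondents report their true behaviour identically in both arms;
    random respondents pick one response option uniformly at random.
    p_random and p_true_yes are illustrative assumptions only.
    """
    direct = facesaving = 0
    for _ in range(n):
        if random.random() < p_random:
            direct += random.random() < 1 / 2      # 1 of 2 options is "yes"
            facesaving += random.random() < 3 / 4  # 3 of 4 options are yes-type
        else:
            truthful = random.random() < p_true_yes
            direct += truthful
            facesaving += truthful
    return direct / n, facesaving / n

d, f = simulate()
# Analytically: 0.8*0.6 + 0.2*0.50 = 0.58 (direct) vs
#               0.8*0.6 + 0.2*0.75 = 0.63 (face-saving),
# a spurious 5-point gap with no social desirability bias at all.
print(round(d, 2), round(f, 2))
```

Under these assumed parameters the gap between arms reflects random responding alone, which is exactly the spurious signal the reviewer warns about.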
*** Major Concern ***

(1) Experimenter demand effect. The "face-saving" treatment frames the question in such a way that people feel OK reporting a lower level of preventive behaviour. This framing might have two effects: (1) counteracting the upward bias from social desirability; and (2) introducing a downward bias from an experimenter demand effect. The intent of the authors is to tackle the first bias, but I am afraid that they are opening the door to the second. Since there is no objective external report of these behaviours, there is no benchmark for understanding how much this "downward treatment effect" is actually correcting for the first, upward bias, or going too far and introducing the second, downward bias.

Would it be possible to have a (more) objective or external measure of the actual level of (some of) these preventive behaviours? If not for the individuals in the survey, at least on average for these countries? For example, some mobility data collected by Google (https://www.google.com/covid19/mobility/) or Apple (https://covid19.apple.com/mobility). Alternatively, recent papers have tried to tackle the issue of experimenter demand effects (see for example de Quidt et al. 2018 and the citations therein): would it be possible to run a small pilot study in one of these countries to show that experimenter demand effect is not an issue? If none of these alternatives is feasible, I would at least suggest discussing this issue in the paper. Regardless of whether this additional analysis is performed, I strongly urge the authors to mention in the paper that there is no way of knowing the "real" level of preventive behaviours, and that all of the statistics shown in the paper come from self-reports which cannot be validated.

*** Minor Concern ***

(2) Confidence intervals. I believe that reporting 84% confidence intervals is misleading.
I understand that MacGregor-Fors & Payton (2013) suggest them for "visual inspections" of overlapping confidence intervals, but one should NOT treat overlapping CIs as a test of the difference between plotted coefficients. I would strongly suggest always reporting only 99% and/or 95% confidence intervals. Also, given the number of countries and behaviours reported, it would be nice to have a test correcting for multiple hypothesis testing, in the vein of Romano and Wolf (2005, 2017); for example, a Stata command that implements it is available at https://ideas.repec.org/c/boc/bocode/s458276.html and an R command at https://rdrr.io/github/grayclhn/oosanalysis-R-library/man/stepm.html

*** Questions / Comments / Suggestions ***

• I find the terminology "face-saving strategies" a bit confusing in the setting of the COVID-19 pandemic: when I first read the title, I thought that the authors were referring to wearing a mask (I confused it with "face-covering") or some other form of preventive behaviour (which could be "saving" lives). I would suggest something like "plausible deniability" or "guilt-free". However, this is a purely personal comment, and it need not be taken into consideration if the authors do not agree.

• For the figures, I would suggest ordering the countries from highest to lowest reported behaviour, and maybe plotting them vertically. See for example Cohn et al. (2019).
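As background on the 84% figure (an illustrative aside added here, not part of the review): for two independent estimates with similar standard errors, non-overlap of roughly 84% intervals approximates a two-sided 5% test of their difference, because 2·z(0.92) is close to √2·z(0.975). That near-equality, and the fact that it is only approximate, is the crux of the reviewer's objection. A quick numerical check:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf

# Half-width multiplier of an 84% CI: z evaluated at (1 + 0.84) / 2 = 0.92.
z84 = z(0.92)    # about 1.405
z95 = z(0.975)   # about 1.960

# Two independent estimates with equal standard error s differ at the 5%
# level when |b1 - b2| > 1.96 * s * sqrt(2), i.e. about 2.77 * s. Their 84%
# CIs stop overlapping when |b1 - b2| > 2 * z84 * s, i.e. about 2.81 * s.
# Close, but not a formal test of the difference between coefficients.
print(round(2 * z84, 2), round(2**0.5 * z95, 2))  # prints: 2.81 2.77
```

The heuristic degrades when standard errors are unequal or estimates are correlated, which is why the reviewer recommends reporting conventional 95%/99% intervals instead.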
• The graphs and tables have only minimal notes; it would be useful for the cursory reader to have more information available right at the bottom of the graph.

• It could be helpful to have a table or a list of the behaviours that are allowed/suggested/prohibited in the different countries at the time of the data collection, or some measure of the stringency of the measures used by each country, similar to https://www.bsg.ox.ac.uk/research/research-projects/coronavirus-government-response-tracker#data or https://ourworldindata.org/grapher/covid-stringency-index

• The precise definition of the treatment comes only at the bottom of page 7. It might be useful for the reader to have it earlier, maybe even in the introduction.

References:

Clarke, D., Romano, J. P., & Wolf, M. (2020). The Romano-Wolf Multiple Hypothesis Correction in Stata. IZA Working Paper. https://www.iza.org/publications/dp/12845/the-romano-wolf-multiple-hypothesis-correction-in-stata

Romano, J. P., & Wolf, M. (2017). Resurrecting weighted least squares. Journal of Econometrics, 197(1), 1–19. https://doi.org/10.1016/j.jeconom.2016.10.003

Romano, J. P., & Wolf, M. (2005). Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing. Journal of the American Statistical Association, 100(469), 94–108. https://doi.org/10.1198/016214504000000539

Cohn, A., Marechal, M. A., Tannenbaum, D., & Zünd, C. L. (2019). Civic honesty around the globe. Science, eaau8712. https://doi.org/10.1126/science.aau8712

de Quidt, J., Haushofer, J., & Roth, C. P. (2018). Measuring and Bounding Experimenter Demand. American Economic Review, 108(11), 3266–3302. https://doi.org/10.1257/aer.20171330

Reviewer #3: A very nice paper, well written, very transparent. My only concern is that the claim of using a "new strategy" is a bit overstretched. Below I give more details on that criticism.
Still, I am convinced that the paper is a valuable contribution to the sensitive-question literature and, with its focus on surveying adherence to COVID-19 preventive measures, is highly relevant.

• What are the main claims of the paper and how significant are they for the discipline?

• The authors replicate a previous study about the effect of face-saving strategies on self-reports of non-compliance when surveying compliance with COVID-19 measures (e.g., social distancing). They ran the experiment in twelve countries and find consistent evidence of a positive effect of the strategy under investigation. The topic is highly relevant because it tackles a survey methodology issue that is of utmost importance under the actual circumstances (the pandemic).

• Are the claims properly placed in the context of the previous literature? Have the authors treated the literature fairly?

• In general, yes. My only criticism is that what they present as a "new face-saving strategy" is basically just choosing Likert-scale response categories instead of dichotomous yes/no response options. In addition, providing only yes or no response options is, for most of the surveyed behaviours (such as keeping distance or wearing masks), obviously not very suitable. I would guess that not many researchers have used "primitive" yes/no response options for surveying adherence to such measures.

• Still, I find it valuable to have a close look at this topic and to study how response options influence answering behaviour. But the authors should explain in a bit more detail in what regard their strategy is really new (or not so much).

• I also wonder whether the authors could provide information about the effect of the response options and the question preamble separately. In their design these two aspects are combined.

• Another issue is that their strategy is only applicable to sensitive behaviours whose sensitivity is somehow "gradual" in nature. E.g., it works for shaking hands or wearing face masks.
But it wouldn't work for having been arrested, committing tax fraud, or voting for a right-wing party. I wish the authors would comment on that.

• Do the data and analyses fully support the claims? If not, what other evidence is required?

• Yes. Design and analysis are reported very carefully and transparently. The authors provide various robustness checks, analyses by country, additional details, and consider, for instance, the threat of "false positives" that might invalidate their findings.

• Are original data deposited in appropriate repositories and accession/version numbers provided for genes, proteins, mutants, diseases, etc.?

• Yes

• Are details of the methodology sufficient to allow the experiments to be reproduced?

• Yes

• Is the manuscript well organized and written clearly enough to be accessible to non-specialists?

• Yes, very clearly written and organized. Nice to read! Presentation of results is exemplary, with very nice figures and corresponding tables in the appendix.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: Yes: Marc Höglinger

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

11 Feb 2021

Our responses include tables and figures, so we uploaded a PDF document where we detail our answers. See the "Response to Reviewers" document. Submitted filename: Response to Reviewers.pdf

29 Mar 2021

PONE-D-20-32568R1
A Guilt-Free Strategy to Increase Self-Reported Non-Compliance with COVID-19 Preventive Measures: Experimental Evidence from 12 Countries

Dear Dr. Daoust,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact.
If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Federica Maria Origo
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed
Reviewer #3: All comments have been addressed

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file).
The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: I really liked the revised version of the paper. Well done. It is a relevant contribution on an important topic.

Reviewer #2: Thank you for your thorough revisions. I very much appreciated the new findings from the list experiment in France. Although the results for "friends" are a bit puzzling, I agree that the other results do not suggest any strong presence of an experimenter demand effect.

Reviewer #3: The authors have carefully considered all issues raised by the reviewers and have implemented the corresponding changes.

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: Yes: Pietro Biroli
Reviewer #3: Yes: Marc Höglinger

8 Apr 2021

PONE-D-20-32568R1
A Guilt-Free Strategy Increases Self-Reported Non-Compliance with COVID-19 Preventive Measures: Experimental Evidence from 12 Countries

Dear Dr. Daoust:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Federica Maria Origo
Academic Editor
PLOS ONE
  8 in total

Review 1.  Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.

Authors:  Timo Gnambs; Kai Kaspar
Journal:  Behav Res Methods       Date:  2015-12

2.  Gender differences in COVID-19 attitudes and behavior: Panel evidence from eight countries.

Authors:  Vincenzo Galasso; Vincent Pons; Paola Profeta; Michael Becher; Sylvain Brouard; Martial Foucault
Journal:  Proc Natl Acad Sci U S A       Date:  2020-10-15       Impact factor: 11.205

3.  Which interventions work best in a pandemic?

Authors:  Johannes Haushofer; C Jessica E Metcalf
Journal:  Science       Date:  2020-05-21       Impact factor: 47.728

Review 4.  Applying principles of behaviour change to reduce SARS-CoV-2 transmission.

Authors:  Robert West; Susan Michie; G James Rubin; Richard Amlôt
Journal:  Nat Hum Behav       Date:  2020-05-06

5.  Why people failed to adhere to COVID-19 preventive behaviors? Perspectives from an integrated behavior change model.

Authors:  Derwin K C Chan; Chun-Qing Zhang; Karin Weman-Josefsson
Journal:  Infect Control Hosp Epidemiol       Date:  2020-05-15       Impact factor: 3.254

6.  Physical distancing, face masks, and eye protection to prevent person-to-person transmission of SARS-CoV-2 and COVID-19: a systematic review and meta-analysis.

Authors:  Derek K Chu; Elie A Akl; Stephanie Duda; Karla Solo; Sally Yaacoub; Holger J Schünemann
Journal:  Lancet       Date:  2020-06-01       Impact factor: 79.321

7.  Mobile phone data for informing public health actions across the COVID-19 pandemic life cycle.

Authors:  Nuria Oliver; Bruno Lepri; Harald Sterly; Renaud Lambiotte; Sébastien Deletaille; Marco De Nadai; Emmanuel Letouzé; Albert Ali Salah; Richard Benjamins; Ciro Cattuto; Vittoria Colizza; Nicolas de Cordes; Samuel P Fraiberger; Till Koebe; Sune Lehmann; Juan Murillo; Alex Pentland; Phuong N Pham; Frédéric Pivetta; Jari Saramäki; Samuel V Scarpino; Michele Tizzoni; Stefaan Verhulst; Patrick Vinck
Journal:  Sci Adv       Date:  2020-06-05       Impact factor: 14.136

8.  More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model.

Authors:  Marc Höglinger; Ben Jann
Journal:  PLoS One       Date:  2018-08-14       Impact factor: 3.240

  15 in total

1.  Predicting willingness to be vaccinated for Covid-19: Evidence from New Zealand.

Authors:  Geoff Kaine; Vic Wright; Suzie Greenhalgh
Journal:  PLoS One       Date:  2022-04-07       Impact factor: 3.240

2.  Which vaccine attributes foster vaccine uptake? A cross-country conjoint experiment.

Authors:  Sabrina Stöckli; Anna Katharina Spälti; Joseph Phillips; Florian Stoeckel; Matthew Barnfield; Jack Thompson; Benjamin Lyons; Vittorio Mérola; Paula Szewach; Jason Reifler
Journal:  PLoS One       Date:  2022-05-04       Impact factor: 3.752

3.  Citizens and the state during crisis: Public authority, private behaviour and the Covid-19 pandemic in France.

Authors:  Christopher J Anderson
Journal:  Eur J Polit Res       Date:  2022-03-26

4.  Trust in scientists in times of pandemic: Panel evidence from 12 countries.

Authors:  Yann Algan; Daniel Cohen; Eva Davoine; Martial Foucault; Stefanie Stantcheva
Journal:  Proc Natl Acad Sci U S A       Date:  2021-10-05       Impact factor: 11.205

5.  "Until I Know It's Safe for Me": The Role of Timing in COVID-19 Vaccine Decision-Making and Vaccine Hesitancy.

Authors:  Eric B Kennedy; Jean-François Daoust; Jenna Vikse; Vivian Nelson
Journal:  Vaccines (Basel)       Date:  2021-11-30

6.  Compliance with Covid-19 measures: Evidence from New Zealand.

Authors:  Geoff Kaine; Suzie Greenhalgh; Vic Wright
Journal:  PLoS One       Date:  2022-02-09       Impact factor: 3.240

7.  Concerns and coping mechanisms during the first national COVID-19 lockdown: an online prospective study in Portugal.

Authors:  Susana Silva; Helena Machado; Cláudia de Freitas; Raquel Lucas
Journal:  Public Health       Date:  2022-04-01       Impact factor: 4.984

8.  'Citizens' Attitudes Under Covid19', a cross-country panel survey of public opinion in 11 advanced democracies.

Authors:  Sylvain Brouard; Martial Foucault; Elie Michel; Michael Becher; Pavlos Vasilopoulos; Pierre-Henri Bono; Nicolas Sormani
Journal:  Sci Data       Date:  2022-03-28       Impact factor: 6.444

9.  What containment strategy leads us through the pandemic crisis? An empirical analysis of the measures against the COVID-19 pandemic.

Authors:  Daniel Kaimann; Ilka Tanneberg
Journal:  PLoS One       Date:  2021-06-21       Impact factor: 3.240

10.  Differences in the Protection Motivation Theory Constructs between People with Various Latent Classes of Motivation for Vaccination and Preventive Behaviors against COVID-19 in Taiwan.

Authors:  Yi-Lung Chen; Yen-Ju Lin; Yu-Ping Chang; Wen-Jiun Chou; Cheng-Fang Yen
Journal:  Int J Environ Res Public Health       Date:  2021-07-01       Impact factor: 3.390

