Influences of study design on the effectiveness of consensus messaging: The case of medicinal cannabis.

Asheley R Landrum, Brady Davis, Joanna Huxster, Heather Carrasco.

Abstract

This study examines the extent to which study design decisions influence the perceived efficacy of consensus messaging, using medicinal cannabis as the context. We find that researchers' decisions about study design matter. A modified Solomon group design was used in which participants were assigned either to a group that received a pretest (within-subjects design) or to a posttest-only group (between-subjects design). Furthermore, participants were exposed to one of three messages (one of two consensus messages or a control message) attributed to the National Academies of Sciences, Engineering, and Medicine. A consensus message describing the percentage (97%) of agreeing scientists was more effective at shifting public attitudes than a consensus message citing substantial evidence, but this was true only in the between-subjects comparisons. Participants tested before and after exposure to a message demonstrated pretest sensitization effects that undermined the goals of the messages. Our results identify these nuances in the effectiveness of scientific consensus messaging while reinforcing the importance of study design.

Year:  2021        PMID: 34843557      PMCID: PMC8629267          DOI: 10.1371/journal.pone.0260342

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

The Gateway Belief Model (GBM [1]) argues that communicating about scientific consensus to the general public indirectly influences change in people's support for policies by first increasing their perceptions of scientific consensus and then aligning their attitudes with those of the scientists [1]. The vast majority of studies using the GBM examine consensus messaging in the context of climate change [2-4]. A handful of studies have also examined the GBM in the context of genetically modified organisms [5, 6], and at least one study so far has looked at the effects of consensus messaging on the issue of vaccination [7]. This study examines consensus messaging about the efficacy of cannabis for treating chronic pain.

According to the GBM, messages about scientific consensus on climate change correct faulty assumptions about the robustness of such consensus (measured as participants' estimates of the percent of scientists who agree with a proposition). These corrected beliefs then influence individuals' views and attitudes about the risks posed by climate change, which in turn influence support for relevant policies [4, 7]. The model has been tested primarily in the context of climate change, using the "97% of climate scientists agree…" message with a pie graph highlighting the 97% figure.

Concerns exist among some, however, regarding the applicability of these results outside of climate change. First, not all consensus messages can be accurately summarized as a proportion of scientists who agree (and arguably, consensus about climate change should not be interpreted that way either [8]). To this end, this study uses and compares two consensus messaging strategies. The first highlights the same numerical percentage used by the climate change GBM studies, 97%, here attributed to medical, as opposed to climate, scientists. Notably, studies that have used percentages lower than 97% to 98% have found weaker support [9, 10].
The proportion of medical scientists who believe that cannabis is an effective treatment for chronic pain has not been established in the way that proportions of agreeing scientists on other issues have [11], or in the way that consensus estimates have been established on climate change [12], but we include this condition for the purpose of comparison. We call this strategy the descriptive norm/authority appeal, as it takes the social norms approach to changing people's behavior by describing what others "think and do"; but instead of describing lay publics' social group members [13], the message describes the views of scientists, who are epistemic authorities. The second messaging strategy is accurate to the case of consensus surrounding medical cannabis use for chronic pain: a message we developed from a report written by a consensus panel formed by the National Academies of Sciences, Engineering, and Medicine (NASEM). One of the findings of this report is that there is substantial evidence that cannabis is an effective treatment for chronic pain in adult patients [14]. We call this the "evidence message," as it puts more emphasis on the weight of the evidence evaluated by a panel of experts as opposed to naming a proportion of agreeing scientists. Although this "evidence message" design may be a closer description of what philosophers of science would label scientific consensus, and it may be more in line with how consensus is established [8], there is evidence that it may be a less effective communication strategy than the descriptive norm/authority appeal. For example, Myers et al. [3] found that when agreement among scientists was described but a numerical estimate was not used (e.g., "an overwhelming majority of scientists have concluded…" vs. "97% of scientists have concluded"), participants' estimates of scientific consensus and other variables of interest did not significantly differ from the control condition. Similarly, Landrum et al. [6], which used a message highlighting a NASEM consensus panel on genetically modified organisms, also found no significant difference between exposure to the consensus message and participants' estimates of agreement among scientists.

Landrum and Slater [8] propose that messages may be more or less successful depending on whether the question about estimating consensus is aligned with the message design. That is, if the message describes a proportion of agreeing scientists, the question asked of participants ought to be "what percent of scientists agree." On the other hand, if the message describes the process of consensus or a body of evidence, participants ought to be asked to what extent they agree that consensus exists or that most of the evidence is supportive. To examine this, we randomly assigned participants in the current study to receive either the numerical ("what percent of scientists…") or the agreement ("to what extent do you agree that…") version of the consensus estimate question.

Another concern related to the applicability of the GBM outside of the climate studies relates to the choice of mediating variables used in the model. The GBM includes three mediating variables, two of which are specific to climate change: belief that climate change is real and belief that climate change is caused by humans. These first two mediating variables are expected to influence the third mediating variable, worry about climate change. Although the first two items cannot, the worry item can be modified for other contexts to represent perceptions of risk about the issue at hand. In the case of cannabis, we asked participants how much risk they believe both medicinal and recreational cannabis pose to human health, safety, and/or prosperity.
In addition to this risk perception question, we also asked participants how safe they feel using cannabis is and to what extent they personally believe that cannabis is effective for the treatment of chronic pain. See Table 1.
Table 1

Survey items.

Variable | Question text | Scale
Believable ^1 | This message is _______________. | 0 Not Believable to 100 Believable
Credible ^1 | The source of this message, the National Academies of Sciences, Engineering, and Medicine, is _____________. | 0 Not Credible to 100 Very Credible
Deceptive ^1 | The message is _______________. | 0 Not Deceptive to 100 Very Deceptive
Perceptions of Consensus
dns ^2 | What percent of medical scientists do you believe agree that there is substantial evidence that marijuana/cannabis is effective for the treatment of chronic pain? | 0% to 100%
dnp ^2 | What percent of the U.S. public do you believe agree that there is substantial evidence that marijuana/cannabis is effective for the treatment of chronic pain? | 0% to 100%
cns ^3 | To what extent do you agree or disagree that there is consensus among the medical scientific community that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Strongly Disagree to 100 Strongly Agree
cnp ^3 | To what extent do you agree or disagree that there is consensus among the U.S. public that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Strongly Disagree to 100 Strongly Agree
Attitudes
eff | To what extent do you, personally, believe that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Not Effective to 100 Very Effective
safe | How safe do you, personally, believe using marijuana/cannabis is? | 0 Not at all safe to 100 Very safe
rmed | How much risk do you believe medical marijuana/cannabis poses to human health, safety, and/or prosperity? | 0 No risk at all to 100 Very high risk
rrec | How much risk do you believe recreational marijuana/cannabis poses to human health, safety, and/or prosperity? | 0 No risk at all to 100 Very high risk
Policy Support
ma21 | Medical marijuana/cannabis should be made legal for adults ages 21 and older. | 0 Strongly disagree to 100 Strongly agree
mall | Medical marijuana/cannabis should be made legal for people of all ages, including those under 18. | 0 Strongly disagree to 100 Strongly agree
ra21 | Recreational marijuana/cannabis should be made legal for adults ages 21 and older. | 0 Strongly disagree to 100 Strongly agree
rall | Recreational marijuana/cannabis should be made legal for people of all ages, including those under 18. | 0 Strongly disagree to 100 Strongly agree

^1 Items asked only at time 2 (after being presented with the message).

^2 Half of the sample were asked to estimate percentage of agreement at time 2.

^3 Half of the sample were asked to what extent they agree or disagree that consensus exists at time 2.

Furthermore, in many of the attempts to implement the GBM to test for potential indirect (and direct) effects of scientific consensus messaging, researchers have used only between-subjects manipulations [6, 15]. However, in the original and subsequent GBM papers by the original authors [1, 4], pre- and post-message-exposure data are collected and the difference scores are used in the mediation model. Although it may not be immediately clear from visualizations of the GBM (e.g., Fig 1 in [1]), condition (consensus message vs. control) is expected to predict change in perceived scientific agreement between time 1 and time 2, which is expected to predict change in beliefs (climate change is real, climate change is human caused) between time 1 and time 2, and so on. To be consistent with the original intention of the GBM and to test for differences between these two designs, we conducted a modified Solomon group design in which we collected both pretest/posttest data and posttest-only data.
Fig 1

Mean difference scores by condition and question for the pretest/posttest sample.

Error bars represent 95% confidence intervals. There was approximately one week between pretest and posttest. ***p < .001, **p < .01, *p < .05 for two-tailed, single sample t tests.


Current study

This study aims to contribute to our understanding of the efficacy of consensus messaging by examining how researchers' decisions about study design (e.g., whether the data collected are cross-sectional or pretest/posttest, how variables are operationalized, and how consensus is approached and described [8]) influence study results. As stated earlier, we examine these questions using cannabis as the context. We chose medicinal cannabis as the context for a few reasons. First, scientific consensus has been established for this issue: a consensus panel convened by the National Academies of Sciences, Engineering, and Medicine (NASEM) determined that there is substantial evidence that cannabis is an effective treatment for chronic pain in adults [14]. Second, as with other issues for which consensus messaging has been studied (e.g., climate change, genetically modified organisms, vaccines), public policy arguably does not align with the available scientific evidence; despite its promising effects, medical cannabis remains illegal in many states (17 at the time of data collection), and about one-third of the U.S. public opposes legalizing cannabis [16]. In fact, according to the consensus report, regulatory barriers, such as the classification of cannabis as a Schedule I substance, hinder the advancement of research on cannabis [14]. Using cannabis as an example, the current study tests and challenges aspects of the Gateway Belief Model, which provides an explanation for how scientific consensus messaging may improve public support for policies related to publicly controversial science.

Methods

The study was approved by the Institutional Review Board at Texas Tech University as exempt research involving human subjects (IRB2020-302). Data were collected from a national sample of 1,558 U.S. adults recruited using Amazon's Cloud Research Services tool at the end of June 2020 and the beginning of July 2020. Prior to answering any study questions, participants read a digital consent form that explained the study and the participants' rights and provided contact information for the IRB office and the principal investigator. Participants were then asked whether they consented to participate in the study. Participants who selected yes continued on, and participants who said no were redirected to the end of the survey.

Participants ranged in age from 18 to 82 (M = 41.11, Median = 39, SD = 13.28). For self-identified race and ethnicity, 9.3% of the sample reported identifying as Black or African American, 6.6% as Hispanic/Latino, and 8.9% as Asian; 52.3% of the sample identified as female. The highest level of education earned for 8% of the sample was high school, around 31% of the sample had completed at least some college coursework, and around 60% had at least a college education. Furthermore, 48.16% of the sample indicated that they were somewhat to very liberal, 22.56% were moderate, and 29.28% were somewhat to very conservative.

We conducted our survey experiment using a modified Solomon group design. This design is used to test for pretest sensitization, which occurs when participants' posttest ratings are influenced by exposure to pretest questions, and also to test for any differential condition effects based on whether participants completed the pretest [17]. We included a one-week gap between the pretest (time 1) and the experiment with posttest (time 2).
During the time 1 pretest, 956 participants (the pretest/posttest sample) were introduced to the topic of medical cannabis with the following prompt, which was adapted from content on the Mayo Clinic's website [18]. (While we exclusively use the term cannabis in this manuscript, we intentionally deviate slightly in the experimental instrument. The term marijuana is commonly used within the context of legislation and regulatory guidelines (Romi et al., 2021). Therefore, in an effort toward ecological validity, we use the terms cannabis and marijuana interchangeably within the experimental instrument, and any use of either term appears as it was presented to the participants.)

"Medical marijuana—also called medical cannabis—is a term for derivatives of the cannabis sativa plant that are thought to relieve serious and chronic symptoms. Some states allow marijuana use for medical purposes. Federal law regulating marijuana supersedes state laws. Because of this, people may still be arrested and charged with possession in states where marijuana for medical use is legal. In this study we will ask you about your views towards the use of both recreational and medical marijuana."

Following the prompt, participants were asked a series of questions about their perceptions of consensus surrounding the use of medical and recreational cannabis, both among medical scientists and among the U.S. public; their own attitudes towards recreational and medical cannabis; and their support (or lack thereof) for legalization policies. Participants who completed the time 1 pretest survey were marked as eligible to sign up for the time 2 posttest survey a week later, and 935 returning participants completed this posttest survey. During the same period, 610 new participants (the posttest-only sample) completed the posttest-only survey at time 2, which was identical to the posttest survey.
To ensure we would not have duplicate participants, those who took the pretest/posttest surveys were marked ineligible to sign up for the posttest-only survey, and vice versa. At time 2, participants (from both the pretest/posttest and posttest-only samples) were randomly assigned to one of three message conditions: (1) a consensus message stating that there is substantial evidence that cannabis is effective for the treatment of chronic pain in adults (the evidence message), (2) a descriptive norm/authority consensus message stating that 97% of medical scientists agree that cannabis is effective for the treatment of chronic pain in adults (the 97% message), and (3) a control message stating that researchers are investigating the potential uses of cannabidiol (CBD), a compound in cannabis that does not have psychoactive effects (the control message). All three messages were attributed to the National Academies (NASEM). These messages are available on our project site on osf.io (https://osf.io/w8u6k/).

Following exposure to the message, as a manipulation check, we asked participants which of the following statements best describes the main point of the message they just saw: (a) There is scientific consensus that cannabis is effective for the treatment of chronic pain, (b) Research on the effectiveness of cannabis is still ongoing, (c) I don't know, or (d) I prefer not to answer. Overall, participants were accurate at identifying the main point of the message. Approximately 87% of the control participants chose option b (i.e., research is still ongoing), and 93% of the descriptive norm condition participants and 89% of the evidence message condition participants correctly chose option a (i.e., scientific consensus exists). Furthermore, at time 2, we randomly assigned participants to answer one of two question formats about their perceptions of consensus.
Half of the sample was asked to estimate what percent of medical scientists and what percent of the U.S. public agree that cannabis is effective for the treatment of chronic pain (on scales from 0 to 100%). The other half was asked to what extent they agree or disagree that there is consensus among medical scientists and among the U.S. public that cannabis is effective for the treatment of chronic pain (on scales from 0, strongly disagree, to 100, strongly agree). Recall that during the pretest at time 1, participants answered both questions. See our project page on the Open Science Framework (https://osf.io/w8u6k/) for data files, R script, stimuli, and survey questions.

Results

Test for pretest sensitization effects on the condition manipulation

Following the recommendations of Braver and Braver [19] for analyzing Solomon group designs, we began by determining whether an interaction effect exists between our condition manipulation and the sample (pretest/posttest sample vs. posttest-only sample). The presence of a significant interaction would indicate both that pretest sensitization exists and that it likely moderates any effects of the condition manipulation. Because we had several outcome variables, we first conducted a multivariate analysis of variance (MANOVA). (Because we split the sample for the perceptions of consensus items, we left those four variables out of the MANOVA.) We found a significant main effect of our condition manipulation (Pillai's Trace = 0.053, approximate F = 3.62, p < .001). There was also a significant main effect of sample (Pillai's Trace = 0.021, approximate F = 2.86, p = .001), suggesting that pretest sensitization exists. Importantly, there was a significant interaction between sample and condition (Pillai's Trace = 0.024, approximate F = 1.64, p = .030), meaning that the effects of our experimental manipulation are likely conditional on whether participants answered pretest questions. Because we found evidence that pretest sensitization exists and likely affects the condition manipulation, we followed up on the significant main effect of condition using simple effects tests on the pretest/posttest sample (within-subject effects), looking for differences between time 1 and time 2, as well as simple effects of the condition manipulation on the posttest-only sample [19]. Descriptive statistics for the outcome variables for each sample are reported in S1 Table. Full analysis results, including ANOVA tables, are available in the supplementary materials.
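The MANOVA tests above rest on Pillai's trace. As a rough illustration (not the authors' analysis, which was run with the R scripts available on OSF), Pillai's trace for a one-way design can be computed from the between-groups (hypothesis) and within-groups (error) SSCP matrices; the scores below are hypothetical.

```python
import numpy as np

def pillai_trace(groups):
    """Pillai's trace for a one-way MANOVA: V = tr(H @ inv(H + E)),
    where H is the between-groups SSCP and E the within-groups SSCP."""
    data = np.vstack(groups)
    grand_mean = data.mean(axis=0)
    p = data.shape[1]
    H = np.zeros((p, p))  # hypothesis (between-groups) SSCP
    E = np.zeros((p, p))  # error (within-groups) SSCP
    for g in groups:
        diff = g.mean(axis=0) - grand_mean
        H += len(g) * np.outer(diff, diff)
        centered = g - g.mean(axis=0)
        E += centered.T @ centered
    return float(np.trace(H @ np.linalg.inv(H + E)))

# Hypothetical scores on two outcome variables for two well-separated conditions
control = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
message = np.array([[10.0, 10.0], [11.0, 11.0], [10.0, 11.0], [11.0, 10.0]])
V = pillai_trace([control, message])  # near 1: strong multivariate separation
```

In the full design one would compute this statistic for the condition, sample, and condition-by-sample interaction terms and convert each to the approximate F values reported above.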

Test for within-subject effects of condition manipulation—a difference score analysis

Our study design allowed us to examine whether the differences between individuals' pre- and post-message-exposure ratings (i.e., their "difference scores") varied significantly from 0; in other words, whether there were significant increases or decreases in the outcome variables between time 1 and time 2. We calculated difference scores by subtracting participants' pretest ratings from their posttest ratings (posttest minus pretest = difference score) and then conducted single-sample t-tests. (We conducted single-sample t-tests on the difference scores as opposed to paired-samples t-tests because the visual depiction in Fig 1 is simpler, i.e., there are fewer bars to keep track of. The analysis using paired-samples t-tests is in the supplementary materials on OSF; the results do not differ based on which version of the t-test we use.) Note that these analyses include only the pretest/posttest sample. Fig 1 shows the mean differences by question and message condition.
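The difference-score procedure can be sketched as follows; the ratings are hypothetical, and the final lines illustrate the footnoted point that a single-sample test on difference scores is equivalent to a paired-samples test.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest (time 1) and posttest (time 2) ratings on one 0-100 item
pretest = np.array([60.0, 55.0, 70.0, 40.0, 65.0, 50.0, 58.0, 62.0])
posttest = np.array([65.0, 50.0, 72.0, 45.0, 60.0, 55.0, 63.0, 66.0])

diff = posttest - pretest  # posttest minus pretest = difference score
t_single, p_single = stats.ttest_1samp(diff, 0.0)  # H0: mean difference score is 0

# Equivalent paired-samples test: yields identical t and p values
t_paired, p_paired = stats.ttest_rel(posttest, pretest)
```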
Fig 2

Mean rating for each item by condition and question for the posttest only sample.

Error bars represent standard error. Significant differences between message conditions (determined by Tukey tests) are shown. ***p < .001, **p < .01, *p < .05.

Participants often moved in the unexpected direction between the pretest and posttest surveys. For instance, in all three conditions, participants agreed less that cannabis is effective for the treatment of chronic pain after they were exposed to the messages, even though two of the messages stated that there is consensus that cannabis is an effective treatment. Similarly, in all three conditions, participants estimated a lower level of consensus among the U.S. public on the effectiveness of cannabis for treating chronic pain after message exposure than before. Notably, the messages did not mention public views, only scientific ones. Though some may speculate that this movement in the less desirable direction could reflect boomerang or reactance effects [20, 21], an alternative explanation is that it is the result of pretest sensitization, especially given that this shift was seen in the control condition as well. In this case, discussing the issue may incidentally have made medicinal cannabis seem like a more controversial issue among the public than study participants originally thought. We did find two significant expected effects for the condition in which participants were exposed to the 97% message: participants in this condition rated recreational cannabis as less risky than they did at time 1 (although they did not shift on medical cannabis, which is what the message was about), and they increased the percent of scientists presumed to agree from time 1 to time 2. See Fig 1 and S2 Table.

Test for between-subject effects of condition manipulation

Because we initially found a significant interaction between our condition manipulation and the sample in the MANOVA, which indicates that pretest sensitization exists and likely moderates any effects of the condition manipulation, we analyzed our pretest/posttest sample and posttest-only sample separately [19]. To analyze our posttest-only sample, we followed up on the MANOVA with one-way ANOVAs on each of the dependent variables (see Table 1). In this case, we were specifically looking for differences between the consensus message conditions (the 97% message, the evidence message) and the control message. See Fig 2 and S3 Table. Unlike the results from the pretest/posttest sample, the results from the posttest-only sample are more supportive of the hypothesis that consensus messaging has effects, at least when it comes to messages that describe a descriptive norm amongst authorities (i.e., 97% of medical scientists agree). Indeed, there were significant differences between the 97% message and the control message for four key variables: the percent of scientists perceived to agree, belief that cannabis is an effective treatment for chronic pain, and support for legalizing medical cannabis and recreational cannabis for those aged 21 years and older. Notably, these differences between the control condition and consensus message condition were not significant when the consensus message described substantial evidence (as opposed to a proportion of agreeing scientists). And in many cases, participants' item ratings in the 97% message condition differed significantly from those in the evidence message condition. This raises questions about whether participants are actually influenced by the consensus aspect of the message or by some other characteristic that differs between the two types of consensus messages examined in this study (e.g., the presence of a number, a high percentage).

Aligning the message strategy with the measurement design

Sometimes the relationship between exposure to a consensus message and participants' estimates of scientific consensus is not statistically significant when non-numeric messages are used [3, 8, 22]. Landrum and Slater [8] hypothesize that this may be due to a lack of alignment between the type of message used and the way participants' perceptions of scientific consensus are measured. For example, Bolsen and Druckman [23] found a significant relationship between exposure to a process-based consensus message (describing how consensus was formed by a National Academies of Sciences panel) and their measure of perception of scientific consensus (i.e., whether most scientists agree). We designed this study to address this question by randomly assigning participants at posttest to one of two forms of measurement: one that asked them to estimate the percent of scientists who agree, and one that asked them how much they agreed or disagreed that there is scientific consensus (see Table 1). We found mixed evidence regarding this hypothesis. See Table 2.
Table 2

Is there a significant relationship between condition manipulation and participants’ perception of consensus based on consensus message strategy used and measurement?

Significant relationship between consensus message used and perception of consensus?

Measure | 97% vs. Control | Evidence vs. Control
Pretest/posttest sample:
Δ Estimated percent of scientists who agree | Yes | No
Δ Agreement that consensus exists | Yes | Yes
Posttest-only sample:
Estimated percent of scientists who agree | Yes | No
Agreement that consensus exists | No | No

Change in perceptions of scientific consensus between time 1 and time 2

First, we looked at change in perceptions of scientific consensus among the pretest/posttest sample. A 3 (Message Condition) by 2 (Measure) ANOVA suggests that there is a main effect of condition, F(2, 882) = 6.88, p = .001, ηp2 = 0.02, but no effect of measure, F(1, 882) = 1.54, p = .215, ηp2 = 0.002, and no interaction between condition and measure, F(2, 882) = 1.87, p = .155, ηp2 = 0.004. Follow-up simple GLM analyses show that the relationships between condition (consensus vs. control) and participants' perceptions of consensus are as follows. The 97% message, compared to the control, significantly predicts both participants' estimates of the percentage of scientists who agree (b = 5.76, p = .005) and participants' agreement that consensus exists (b = 6.08, p = .022). However, the evidence message, compared to the control, predicts only participants' agreement that consensus exists (b = 4.88, p = .028) and not participants' estimates of the percentage of scientists who agree (b = -0.70, p = .732). This lends some support to the hypothesis that the measurement must be aligned with the message, but this seems to be true only for the evidence message.
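A 3 by 2 factorial ANOVA of this kind can be sketched with statsmodels (the authors' analyses were run in R; the condition labels, effect sizes, and data below are hypothetical):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
conditions = ["control", "97pct", "evidence"]
measures = ["percent", "agreement"]

# Hypothetical posttest consensus ratings: 40 participants per cell
rows = []
for cond in conditions:
    for meas in measures:
        bump = 10.0 if cond == "97pct" else 0.0  # assume only the 97% message shifts ratings
        rows += [{"condition": cond, "measure": meas,
                  "rating": 60.0 + bump + rng.normal(0, 10)} for _ in range(40)]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects plus the condition x measure interaction
model = ols("rating ~ C(condition) * C(measure)", data=df).fit()
table = anova_lm(model, typ=2)
```

Follow-up simple contrasts (e.g., 97% vs. control within each measure) would then be run on subsets of such a data frame.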

Perceptions of scientific consensus between conditions

Next, we looked at participants' perceptions of scientific consensus at time 2 among the posttest-only sample. Unlike for the within-subjects data, a 3 (Message Condition) by 2 (Measure) ANOVA shows a significant interaction between message condition and measure, F(2, 604) = 6.65, p = .001, ηp2 = 0.02, in addition to the main effect of condition, F(2, 604) = 12.64, p < .001, ηp2 = 0.04. There was no significant main effect of measure, F(1, 604) = 1.27, p = .260, ηp2 = 0.002. Follow-up simple GLM analyses show that the relationships between condition (consensus vs. control) and participants' perceptions of consensus are as follows. The 97% message, compared to the control, significantly predicts participants' estimates of the percentage of scientists who agree (b = 11.91, p < .001) but not participants' agreement that consensus exists (b = 4.04, p = .169). The evidence message, compared to the control, significantly predicts neither participants' agreement that consensus exists (b = 1.77, p = .540) nor participants' estimates of the percentage of scientists who agree (b = -5.65, p = .074). In this case, the alignment appears to matter for the 97% message but not for the evidence message.

Conceptually replicating the Gateway Belief Model

To test the GBM's hypothesis that consensus messaging indirectly influences policy support by correcting people's estimates of scientific consensus and shifting their attitudes, we conducted a mediation analysis using PROCESS (model 6 [24]). As stated earlier, the mediators in the GBM must be altered for different topics. The GBM for climate change includes three mediators after the estimate of the proportion of agreeing scientists (belief in climate change, belief in human causation, and worry about climate change), two of which are not applicable to other issues like genetically modified organisms or cannabis. We used participants' perceptions of risk for medical cannabis. (An alternative model using our data could use "effectiveness" in place of the risk perception; however, as the change in effectiveness was negative in each of the conditions, this did not make sense to test in the current study.) See Fig 3. Furthermore, consistent with van der Linden et al. [1], each of the mediators and the outcome variable are difference scores (time 2 minus time 1). For the model shown, the only effect we were able to replicate was the effect of message condition on the change in the estimated percent of agreeing scientists. No other paths (direct or indirect) were significant. For the full results, see the supplementary materials.
Fig 3

Modified version of the GBM for medical cannabis using pretest/posttest data (change scores).

All shown paths were tested but the only significant path was from the message manipulation to the change in estimated percent of agreeing scientists. Note that condition only reflects the descriptive norm/authority message versus the control message and does not include the evidence message.

We also ran the model using the posttest-only data. Importantly, this model describes the relationship between the variables measured at time 2 and not the relationship between change scores the way that the original GBM specified. In the case of the posttest-only data, the model worked as predicted. In terms of direct effects, compared to the control condition, the descriptive norm condition is related to a greater proportion of medical scientists assumed to agree, which is related to lower risk perceptions for medical cannabis, which is then related to greater support for legalization policies. See Fig 4. Furthermore, the indirect effect of condition on support for legalization of medical cannabis through perceptions of the percent of medical scientists who agree and risk perceptions was significant (b = 1.44, 95% CI [0.37, 3.06]).
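
The percentile-bootstrap logic behind such indirect-effect confidence intervals can be sketched in plain Python. This is a simplified single-mediator illustration of what PROCESS computes, not the authors' serial two-mediator model, and any data passed to it here would be hypothetical:

```python
import random
from statistics import mean

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    mx, my = mean(x), mean(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def residuals(x, y):
    """Residuals of y after removing its linear dependence on x."""
    b, mx, my = slope(x, y), mean(x), mean(y)
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def indirect_effect(x, m, y):
    """a*b indirect effect for a single-mediator model.

    a: slope of the mediator m on x; b: slope of the outcome y on m
    controlling for x, obtained by residualizing both on x (Frisch-Waugh)."""
    a = slope(x, m)
    b = slope(residuals(x, m), residuals(x, y))
    return a * b

def percentile_bootstrap_ci(x, m, y, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the indirect effect: resample
    participants with replacement, recompute a*b, take percentiles."""
    rng, n, draws = random.Random(seed), len(x), []
    while len(draws) < reps:
        idx = [rng.randrange(n) for _ in range(n)]
        try:
            draws.append(indirect_effect([x[i] for i in idx],
                                         [m[i] for i in idx],
                                         [y[i] for i in idx]))
        except ZeroDivisionError:  # degenerate resample (no variance): skip
            continue
    draws.sort()
    return draws[int(reps * alpha / 2)], draws[int(reps * (1 - alpha / 2)) - 1]
```

If the resulting interval excludes zero, as the [0.37, 3.06] interval reported above does, the indirect effect is taken to be significant.
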
Fig 4

Modified version of the GBM for medical cannabis using the posttest-only data.

Note that condition only reflects the descriptive norm/authority message versus the control message and does not include the evidence message.


Discussion

This study aimed to contribute to our understanding of the efficacy of consensus messaging by examining how researchers’ decisions about study design might influence study results, using medicinal cannabis as the context. We began by first testing for direct experimental effects on each of the outcome variables, including participants’ beliefs about and estimations of consensus, beliefs about cannabis’s efficacy for treating chronic pain, perceptions of risk associated with using medicinal and recreational cannabis, and support for policies legalizing their use. Then, we examined how researchers’ decisions about study design (e.g., whether the data collected are cross-sectional or pretest/posttest, how variables are operationalized, how consensus is approached and described [8]) influenced study results. Finally, we tested two models aiming to conceptually replicate the GBM to determine whether the predicted indirect path from consensus messaging to policy support is present.

Experimental effects of consensus messaging

First, we aimed to test for direct experimental effects on each of the outcome variables. Because of our study design, we were able to test this in two ways: whether participants changed their ratings of the outcome variables after being exposed to the messages (i.e., difference score analysis) and whether participants who were exposed to the consensus messages (as opposed to the control messages) had different ratings of the outcome variables (i.e., between conditions analysis). Notably, we found evidence of pretest sensitization and evidence suggesting that pretest sensitization influenced the effect of the condition manipulation. Therefore, we needed to analyze the two samples separately [19].
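
The two approaches correspond to two familiar test statistics, sketched below with hypothetical data: a paired t on difference scores for the within-subjects (pretest/posttest) analysis, and an independent-groups (Welch) t for the between-conditions analysis:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: is the mean difference score nonzero?
    This corresponds to the within-subjects difference-score analysis."""
    d = [b - a for a, b in zip(pre, post)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

def welch_t(group1, group2):
    """Welch's independent-samples t statistic, corresponding to the
    between-conditions analysis on the posttest-only sample."""
    v1, v2 = stdev(group1) ** 2, stdev(group2) ** 2
    n1, n2 = len(group1), len(group2)
    return (mean(group2) - mean(group1)) / sqrt(v1 / n1 + v2 / n2)
```

The key design point is that the paired test conditions on each participant's pretest answer, which is exactly where pretest sensitization can enter.
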

Differences in ratings before and after exposure to the consensus message

These specific consensus messages about the effectiveness of medical cannabis to treat chronic pain should have influenced participants to increase their perceptions of scientific consensus, potentially decrease their perceptions of risk associated with medical cannabis, increase their beliefs that medical cannabis is effective for the treatment of chronic pain, and potentially increase support for the legalization policies. However, we found that in many cases, participants moved in the non-hypothesized direction. For example, in all three conditions, participants shifted their ratings to agree less that medical cannabis is an effective treatment for chronic pain. Although pretest sensitization often assumes that the pretest will increase participants’ awareness and/or responsiveness to the condition manipulation [19], here, participants often shifted in the opposite direction. One possibility for this, as we discussed earlier, is that participants may have begun to wonder if cannabis is a more controversial issue than they originally thought because we asked them these questions on more than one occasion.

Differences in ratings between conditions for posttest-only sample

Amongst the posttest-only data, we generally found that the 97% message appears to influence participants in the expected ways relative to the control. That is, compared to the control condition, participants in the 97% message condition estimated a larger proportion of agreeing scientists on average, were more likely to agree that cannabis is an effective treatment for chronic pain, and were more likely to support legalization for both medical and recreational cannabis for adults (ages 21 and older). Although this was the case for the 97% message compared to the control, this was not true for the “evidence” message compared to the control. We discuss the implications of this difference further below.

Effects of study design decisions

The second aim of this research was to contribute to theory by examining how researchers’ decisions about study design in the consensus messaging literature influence study results. We already discussed the differences associated with pretest/posttest data compared to cross-sectional data. We now discuss some of the other design decisions that appeared to influence the results.

The descriptive norms/authority approach (97% message) appeared to be more influential than the description of the weight of evidence

In the introduction, we discussed two approaches to describing scientific consensus that we would test in this study: the descriptive norms/authority approach (i.e., the 97% message) and the evidence-based approach (i.e., the evidence message). We mentioned that prior work suggests that the descriptive norms/authority approach may be a more effective strategy for persuasion even if it is a less accurate representation of what scientific consensus is, and these previous findings were supported by our results. Even when the descriptive norm/authority message—the 97% message—didn’t vary significantly from the control condition, it often varied significantly from the evidence message condition.

Consensus messages influenced numerical estimates of consensus but not agreement that consensus exists

One hypothesis from Landrum and Slater [8] was that the reason non-numeric messages (e.g., messages that stress the evidence or describe agreement without specific numbers) may not predict perceptions of consensus is that the measure is often not aligned with the message, and it needs to be for the treatment to be effective. That is, if a non-numeric message is used, then the participants need to be asked to express a non-numeric form of the perception of scientific consensus (e.g., to what extent people agree that scientific consensus exists) rather than to estimate a numeric proportion of scientists in agreement. However, we found mixed evidence regarding this hypothesis (see Table 2). Future research should continue to investigate this.

Finally, we aimed to test a conceptual replication of the GBM to determine whether the predicted indirect path from consensus messaging to policy support is present. Interestingly, we failed to replicate the GBM when we constructed the model using difference scores, consistent with the original GBM studies [1, 4]. However, we did find the expected relationships between variables when we used the cross-sectional (posttest-only) data. The cross-sectional model predicts relationships among the ratings at time 2, as opposed to predicting the change in participants’ ratings between time 1 and time 2 (i.e., difference scores). One reason that the first model (using difference scores) did not show the hypothesized relationships may be the presence of pretest sensitization effects and the conditional effects of our experimental manipulation based on that sensitization.

Limitations

Several limitations to this study need to be taken into consideration when interpreting the results. First, we collected the data via Amazon’s Cloud Research platform. Although we attempted to obtain a diverse sample by requesting groups of participants based on age and religiosity, the sample is not nationally representative. This platform is known for skewing more politically liberal (especially amongst older participants [25]), and this was true of our sample. However, although there are some differences between the political ideology groups in support of legalization policies, the strongest reported demographic predictor of support for cannabis legalization is generation [16]. Differences in support for cannabis legalization based on gender, race/ethnicity, or education have not been found by prior nationally representative surveys [16]. A second limitation is that the data for this study were collected during the summer of 2020, when many individuals were quarantining due to the COVID-19 pandemic. Some mainstream news coverage during the time of data collection suggested that cannabis may provide treatment for some of the side effects of COVID-19 [26], which could have further increased support for cannabis legalization. Indeed, cannabis sales escalated across the U.S. and Canada during this period [27, 28]; and in some states, cannabis dispensaries were considered “essential businesses” [29]. However, other news coverage suggested that bills for cannabis legalization were being sidelined while local and state governments were focused on the pandemic [30]. A third limitation is that our control condition may not have functioned as we had intended. We chose to make the control condition about ongoing research related to CBD because we wanted a control message that was tangentially related to cannabis but was not a consensus message and was not about medical uses of cannabis.
We expected that there would be no change between pretest and posttest for this control condition (i.e., the CBD message). According to the FDA, marijuana is different from CBD [31]. CBD is one compound in the cannabis plant, is not psychoactive (cf. tetrahydrocannabinol, or THC), and is marketed in an array of health and wellness products in places where cannabis remains illegal [31]. Most participants (87%) who were randomly assigned the control condition message about CBD understood that this message was NOT stating that scientific consensus exists, and they chose the option that indicated that research on the effectiveness of cannabis is still ongoing. Although the control condition message mentioned that “researchers are still investigating…”, the topic of the investigation was CBD, and the participants may not have made a distinction between the two. In retrospect, we should have included a response option that specifically mentioned CBD and not cannabis. Importantly, though, we would still expect the message to work in a similar way as if it were clearly not about cannabis. Since participants generally seemed to understand that the message meant no consensus exists (because research is still ongoing), we would not expect change between pretest and posttest on our outcome variables. We would have only expected negative change for the control condition if the message had stated that there is scientific consensus that cannabis is NOT effective or if there was pretest sensitization. We have no reason to believe our results were due to the former, as the manipulation check item suggests participants understood the purpose of the message. Thus, we do not believe our results were negatively affected by this potential issue. Lastly, it is also worthwhile to consider that many of the consensus messaging studies have focused on issues for which public support is much lower than it is for cannabis. According to the Pew Research Center, in 2019, only approximately 8% of U.S.
adults believed that cannabis should be kept illegal in all circumstances (medical and recreational); in contrast, 59% said it should be legal in all circumstances and 32% said it should be legal only for medical use [16]. Overall, in our data, support for medical cannabis was very high. At pretest, before seeing any consensus message, participants assumed an average of 72% of scientists (SD = 18.19, median = 75%) agreed that cannabis is effective for the treatment of chronic pain. They strongly agreed, themselves, that cannabis is an effective treatment (M = 77.25 out of 100, SD = 21.17, median = 81). They agreed that cannabis is safe (M = 68.23 out of 100, SD = 28.3, median = 75). And they perceived the risk of medical cannabis to be low (M = 31.61 out of 100, SD = 29.16, median = 20). Furthermore, support for legalization of medical cannabis for adults (21 and older) was high (M = 80.73 out of 100, SD = 26.51, median = 91). Thus, there may be no need to use consensus campaigns to increase public support on this issue. We did see less support for legalizing medical cannabis for people of all ages, including children (M = 53.48 out of 100, SD = 36.38, median = 60). So, future research could consider testing messages specifically discussing the effectiveness of medical cannabis among younger populations.

Conclusions

This study provides more evidence that study design decisions influence the extent to which exposure to a consensus message influences public perceptions and indirectly influences policy support (as posed by the Gateway Belief Model). One such decision is the way in which consensus messages are described; our study adds to the literature suggesting that the descriptive norm/authority appeal strategy is more persuasive than describing the existence of substantial evidence. However, as Landrum and Slater [8] discussed, there are philosophical issues with treating the descriptive norm/authority appeal strategy as a “consensus” message as well as practical issues (e.g., there is not always an accurate measurement of the proportion of agreeing scientists). One obvious question might now be: what does the public know or think about scientific consensus? This is less of an issue when a specific percentage of scientists is cited, as in the “97%” treatments for climate change, but messages simply stating that there is a scientific consensus on an issue rely on the public understanding what constitutes such a consensus. In fact, the concept of scientific consensus appears not to be broadly understood. It may be that the oversimplification of consensus messaging overlooks the complexities which differentiate consensus from mere agreement. If this is true, a connection may exist between this vagueness in definition and the public belief that consensus is manufactured and a product of groupthink [32, 33]. Further, sufficient understanding of the epistemic significance of the term “consensus” might not be attainable simply through learning a definition of the term [32]. In a large interview study, Slater et al. [33] find that few members of the general public are aware of the concept of scientific consensus at all, and that those who are familiar have a limited and unsophisticated grasp of it.
This brings us to another apparent dilemma for consensus-framed science communication, and particularly for using the GBM to communicate about science for which a percentage-based consensus message is not available: the public’s limited understanding of what consensus means is likely to make such messaging ineffective.

Descriptive statistics for each of the outcome variables for the pretest/posttest sample and the posttest-only sample.

(DOCX)

Descriptives, p values, and Cohen’s d for single-sample t-tests (pretest/posttest sample).

(DOCX)

Between-conditions effects for the posttest-only sample.

(DOCX) Click here for additional data file. 12 Oct 2021 PONE-D-21-22069Influences of Study Design on the Effectiveness of Consensus Messaging: The Case of Medicinal CannabisPLOS ONE Dear Dr. Landrum, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 26 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. 
For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Lucy J Troup, Ph.D Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. We note that Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright. We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission: 1. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license. 
We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text: “I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.” Please upload the completed Content Permission Form or other proof of granted permissions as an "Other" file with your submission. In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].” 2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. 
If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Additional Editor Comments (if provided): Thank you so much for your patience. I would like to concur with the reviewers comments that this is a well written manuscript. It could however benefit from some minor revisions. Please look careful at the reviewers suggestions and I look forward to receiving your revised manuscript. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes ********** 4. 
Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: This was a well written and organized manuscript that explored the effectiveness of consensus and evidence based messaging in view points on medical cannabis. Overall the authors found that a message reporting a high percentage of scientists agreeing is a more convincing than reporting existence of evidence. The researchers also found exposure effects related to research method. This is an important article as it can help media and researchers effectively get their message out and for researchers to choose the best method of testing method effectiveness. It was not clear in the introduction as to why medical cannabis was the topic of choice. The authors noted that "we are less concerned with increasing public support for medical cannabis than we are curious about the persuasiveness of different messaging strategies." which is fine, but this manuscript will attract readers that are concerned with this, therefore I think a bit more needs to be discussed on the topic. My major concern with the manuscript was the control condition. In Figure 1 the two experimental conditions state the "cannabis is effective" while the control condition mentions CBD and the fact that is has no psychoactive effects. 
I am not sure if that is a good control condition to compare against the experimental conditions, since it is only mentioning CBD when the questions asked were about cannabis. What I would like to see is a discussion on how this issue may have affected the results. Minor things The resolution for the figure was to poor to read them comfortably. In the conclusion, please add a statement on study design your findings. Reviewer #2: Development of concept using different messaging strategies is very interesting, well explained and referenced. Excellent range of testing used, interesting and statistically relevant outcome reported. Some very interesting findings in regards to a shift in perception post consensus messaging, and this would benefit from further exploration in other research papers. This paper has implications beyond the cannabis field and is an excellent contribution to the topic. One point that needs work on: Overarching research question at the start of the paper is whether consensus message or evidence messaging influenced the perception of medical cannabis. Throughout the paper it is clear there are other aspects being studies, however, in the discussion it states the overarching question for the study was whether consensus messaging influences public support for legalization of cannabis. This is not made clear at the start of the paper, and indeed there are several bits throughout the paper that set out how the study is being used to test different aspect of cannabis, and to see what impact the study design has consensus. This needs to be tidies up a bit and the focus of the paper clarified at the start/discussion. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. 
Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: Yes: Anna Ross [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 21 Oct 2021 **NOTE: I have also uploaded this information as a Response to Reviewers document** Authors’ Responses to Reviewers Thank you for giving us the opportunity to revise our manuscript titled “Influences of Study Design on the Effectiveness of Consensus Messaging: The Case of Medical Cannabis” and resubmit for possible publication at PLOS One. Below, we include point-by-point responses to the comments provided by the reviewers. In addition to the changes that we describe below, we have also removed Figure 1. Although we created the images/messages and used open access stock photos, we were not able to secure the rights to publish the figure with the logo from the National Academies of Sciences, Engineering, and Medicine. Therefore, we cut the figure from the paper and direct readers to our project page on OSF.io to view the stimuli used in the study. 
Review Comments to the Author **Reviewer #1: This was a well written and organized manuscript that explored the effectiveness of consensus and evidence-based messaging in viewpoints on medical cannabis. Overall, the authors found that a message reporting a high percentage of scientists agreeing is a more convincing than reporting existence of evidence. The researchers also found exposure effects related to research method. This is an important article as it can help media and researchers effectively get their message out and for researchers to choose the best method of testing method effectiveness.** Thank you! **REV: It was not clear in the introduction as to why medical cannabis was the topic of choice. The authors noted that "we are less concerned with increasing public support for medical cannabis than we are curious about the persuasiveness of different messaging strategies." which is fine, but this manuscript will attract readers that are concerned with this, therefore I think a bit more needs to be discussed on the topic.** Our reason for emphasizing this is that we wanted to be clear that this is a manuscript that contributes to theory by examining the effectiveness of consensus messaging as a strategy for increasing public support for controversial policies (such as the legalization of cannabis). To address the reviewer’s concern, we have updated the “current study” paragraph by providing the reasons for which cannabis was chosen as the context for the paper. First, scientific consensus has been established for this issue, and second, the current policies in place do not align with the available scientific evidence. We also reworded the last line in this paragraph so that it does not downplay the context. 
It now states: “Using cannabis as an example, this research tests and challenges aspects of the Gateway Belief Model, which provides an explanation for how scientific consensus messaging may improve public support for policies related to publicly controversial science.” **REV: My major concern with the manuscript was the control condition. In Figure 1 the two experimental conditions state the "cannabis is effective" while the control condition mentions CBD and the fact that is has no psychoactive effects. I am not sure if that is a good control condition to compare against the experimental conditions, since it is only mentioning CBD when the questions asked were about cannabis. What I would like to see is a discussion on how this issue may have affected the results.** We added this to the limitations section of the paper. See this paragraph below: A third limitation is that our control condition may not have functioned as we had intended. We chose to make the control condition about ongoing research related to CBD because we wanted a control message that was tangentially related to cannabis but was not a consensus message and was not about medical uses of cannabis. We expected that there would be no change between pretest and posttest for this control condition (e.g., CBD). According to the FDA, marijuana is different from CBD (FDA, 2020). CBD is one compound in the cannabis plant, is not psychoactive (c.f., tetrahydrocannabinol, or THC), and is marketed in an array of health and wellness products in places where cannabis remains illegal (FDA, 2020). Most participants (87%) who were randomly assigned the control condition message about CBD understood that this message was NOT stating that scientific consensus exists, and they chose the option that indicated that research on the effectiveness of cannabis is still ongoing. 
Although the control-condition message mentioned that "researchers are still investigating…", the topic of the investigation was CBD, and participants may not have made a distinction between the two. In retrospect, we should have included a response option that specifically mentioned CBD and not cannabis. Importantly, though, we would still expect the message to work in a similar way as if it were clearly not about cannabis. Since participants generally seemed to understand that the message meant no consensus exists (because research is still ongoing), we would not expect change between pretest and posttest on our outcome variables. We would only have expected negative change for the control condition if the message had stated that there is scientific consensus that cannabis is NOT effective, or if there was pretest sensitization. We have no reason to believe our results were due to the former, as the manipulation-check item suggests participants understood the purpose of the message. Thus, we do not believe our results were negatively affected by this potential issue.

**REV: Minor things**

**REV: The resolution of the figures was too poor to read them comfortably.**

We converted our original images from PDF to TIFF (300 dpi), as requested by PLOS ONE's image submission guidelines. We agree that the images as they appear in the submission PDF are blurry. However, if you click "Click here to access/download; Figure…", the downloaded versions of the figures are much clearer.

**REV: In the conclusion, please add a statement on study design and your findings.**

We changed the first paragraph of the conclusions section to the following: This study provides more evidence that study design decisions influence the extent to which exposure to a consensus message influences public perceptions and indirectly influences policy support (as posed by the Gateway Belief Model).
One such decision is the way in which consensus messages are described; our study adds to the literature suggesting that the descriptive norm/authority appeal strategy is more persuasive than describing the existence of substantial evidence. However, as Landrum and Slater (2020) discussed, there are philosophical issues with treating the descriptive norm/authority appeal strategy as a "consensus" message, as well as practical issues (e.g., there is not always an accurate measurement of the proportion of agreeing scientists).

**Reviewer #2: Development of the concept using different messaging strategies is very interesting, well explained, and well referenced. Excellent range of testing used; interesting and statistically relevant outcomes reported. There are some very interesting findings regarding a shift in perception after consensus messaging, and this would benefit from further exploration in other research papers. This paper has implications beyond the cannabis field and is an excellent contribution to the topic.**

Thank you!

**REV: One point that needs work:**

**REV: The overarching research question at the start of the paper is whether consensus messaging or evidence messaging influenced the perception of medical cannabis. Throughout the paper it is clear there are other aspects being studied; however, the discussion states that the overarching question for the study was whether consensus messaging influences public support for the legalization of cannabis. This is not made clear at the start of the paper, and indeed there are several passages throughout the paper that set out how the study is being used to test different aspects of cannabis messaging, and to see what impact the study design has on consensus messaging. This needs to be tidied up a bit, and the focus of the paper clarified at the start and in the discussion.**

Thank you for this comment.
First, we changed the first line of the discussion to state: "This study aimed to contribute to our understanding of the efficacy of consensus messaging by examining how researchers' decisions about study design might influence study results, using medicinal cannabis as the context." Furthermore, we realized that we were using "acceptance" and/or "support" generically to refer to each of the different types of outcome variables. We have reworded the "current study" section of the paper to state more specifically that the goal of scientific messaging strategies has been to increase public support for policies (like legalization). We believe this connects better to the beginning of the introduction, which specifies that the Gateway Belief Model aims to explain how communicating about scientific consensus may indirectly influence change in people's support for policies (line 43).

Submitted filename: RESPONSE TO REVIEWERS.docx

9 Nov 2021

Influences of Study Design on the Effectiveness of Consensus Messaging: The Case of Medicinal Cannabis

PONE-D-21-22069R1

Dear Dr. Landrum,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Lucy J Troup, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional): Thank you so much for working with the reviewers' comments to improve the manuscript. This is an important topic in light of the many policy changes occurring around the use of cannabis as medicine.

15 Nov 2021

PONE-D-21-22069R1

Influences of Study Design on the Effectiveness of Consensus Messaging: The Case of Medicinal Cannabis

Dear Dr. Landrum:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Lucy J Troup
Academic Editor
PLOS ONE