| Literature DB >> 30699155 |
Jessica Paynter, Sarah Luskin-Saxby, Deb Keen, Kathryn Fordyce, Grace Frost, Christine Imms, Scott Miller, David Trembath, Madonna Tucker, Ullrich Ecker.
Abstract
Misinformation poses significant challenges to evidence-based practice. In the public health domain specifically, treatment misinformation can lead to opportunity costs or direct harm. However, attempts to debunk misinformation have often proven sub-optimal and have even been shown to "backfire" by increasing misperceptions. Thus, optimized debunking strategies have been developed to combat misinformation more effectively. The aim of this study was to test these strategies in a real-world setting, targeting misinformation about autism interventions. In the context of professional development training, we randomly assigned participants to an "optimized-debunking" or a "treatment-as-usual" training condition and compared support for non-empirically-supported treatments before, after, and six weeks following completion of online training. Results demonstrated greater benefits of optimized debunking immediately after training; thus, the implemented strategies can serve as a general and flexible debunking template. However, the effect was not sustained at follow-up, highlighting the need for further research into strategies for sustained change.
Year: 2019 PMID: 30699155 PMCID: PMC6353548 DOI: 10.1371/journal.pone.0210746
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Manipulation check.
| Item | Control M | Debunking M | t | df | p | d | 95% CI |
|---|---|---|---|---|---|---|---|
| 1. Included charts | 1.69 | 4.48 | 9.13 | 40.69 | < .001 | 2.07 | [-3.41, -2.17] |
| 2. Gave alternative options | 2.38 | 4.35 | 6.30 | 37.84 | < .001 | 1.72 | [-2.60, -1.34] |
| 3. Professional organizations advise against | 3.04 | 4.39 | 5.20 | 55 | < .001 | 1.36 | [-1.87, -.83] |
Note. Where Levene’s test indicated unequal variances (p < .05), equal variances were not assumed and Welch’s t-test is reported; 95% CI = 95% confidence interval of the difference.
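The Welch statistics reported above follow the standard unequal-variance formulas. A minimal sketch in pure Python, using invented sample data (the study's raw scores are not part of this record):

```python
# Welch's unequal-variance t-test, as used when Levene's test rejects
# homogeneity of variance. Sample data below are illustrative only.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Return (t, df) for Welch's t-test on two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)    # sample variances (n - 1 denominator)
    se2a, se2b = va / na, vb / nb
    t = (mean(a) - mean(b)) / sqrt(se2a + se2b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2a + se2b) ** 2 / (se2a ** 2 / (na - 1) + se2b ** 2 / (nb - 1))
    return t, df

control   = [1, 2, 1, 3, 2, 1]   # invented support ratings, control group
debunking = [4, 5, 4, 5, 3, 5]   # invented support ratings, debunking group
t, df = welch_t(control, debunking)
```

When the two sample variances happen to be equal, the Welch df reduces to the usual n_a + n_b - 2, which is why integer df can appear alongside fractional df in the same table.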
Fig 1. Violin plot showing mean support for non-empirically supported treatments across control and debunking conditions at time points 1 (pre-intervention) and 2 (post-intervention).
Error bars show 95% Cousineau-Morey confidence intervals (calculated following Baguley, 2012) [42]; density of score distribution is displayed using shaded areas with wider sections indicating more frequent scores.
Fig 2. Violin plot showing mean support for empirically supported treatments across control and debunking conditions at time points 1 (pre-intervention) and 2 (post-intervention).
Error bars show 95% Cousineau-Morey confidence intervals (calculated following Baguley, 2012) [42]; density of score distribution is displayed using shaded areas with wider sections indicating more frequent scores.
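The Cousineau-Morey intervals used for both figures' error bars can be sketched as follows, assuming the standard two-step recipe (participant-centring, then Morey's sqrt(C/(C-1)) rescaling). The data and the critical value t ≈ 2.0 are illustrative, not taken from the study:

```python
# Sketch of 95% within-subject (Cousineau-Morey) interval half-widths.
# Rows are participants; columns are the C repeated-measures conditions
# (here C = 2: pre- and post-intervention).
from statistics import mean, stdev
from math import sqrt

def cousineau_morey_halfwidths(data, t_crit=2.0):
    """Half-widths of within-subject CIs for each condition (column).

    Scores are first centred on each participant's own mean and shifted
    back by the grand mean (Cousineau normalisation), then the interval
    is rescaled by sqrt(C / (C - 1)) (Morey's correction). t_crit = 2.0
    is a rough stand-in for the 95% critical t value.
    """
    n = len(data)
    c = len(data[0])
    grand = mean(v for row in data for v in row)
    norm = [[v - mean(row) + grand for v in row] for row in data]
    correction = sqrt(c / (c - 1))
    halfwidths = []
    for j in range(c):
        col = [row[j] for row in norm]
        se = stdev(col) / sqrt(n)
        halfwidths.append(t_crit * correction * se)
    return halfwidths

scores = [[2, 4], [1, 5], [3, 4], [2, 5]]  # invented pre/post ratings
hw = cousineau_morey_halfwidths(scores)
```

Because the normalisation removes between-participant variability, these bars reflect only the within-subject variance that is relevant to the repeated-measures comparison.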
Correlations between attitude measures and change in support for non-ESTs and ESTs.
| | Deference to Scientific Authority (Control) | Deference to Scientific Authority (Debunking) | EBPAS Divergence (Control) | EBPAS Divergence (Debunking) | EBPAS Openness (Control) | EBPAS Openness (Debunking) |
|---|---|---|---|---|---|---|
| Non-EST | | | | | | |
| Δ T1/T2 | .08 | -.17 | .10 | -.17 | .36 | -.20 |
| Δ T1/T3 | .20 | -.27 | -.14 | -.17 | .22 | -.52 |
| EST | | | | | | |
| Δ T1/T2 | .12 | .004 | .07 | -.38 | -.005 | .001 |
| Δ T1/T3 | -.12 | -.004 | -.07 | .38 | .005 | -.001 |
Note. Δ T1/T2 and Δ T1/T3 refer to change in support from time 1 to time 2 and from time 1 to time 3, respectively; EBPAS = Evidence-Based Practice Attitude Scale.
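The entries above are correlations between attitude scores and change-in-support scores (Δ = later support minus baseline support). A minimal Pearson-correlation sketch with invented data:

```python
# Pearson correlation between an attitude measure and a change score,
# as tabulated above. All values below are invented for illustration.
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

deference  = [3.0, 4.5, 2.5, 5.0, 4.0]  # invented attitude scores
support_t1 = [4, 5, 3, 5, 4]            # invented baseline support
support_t2 = [2, 2, 3, 1, 3]            # invented post-training support
delta = [b - a for a, b in zip(support_t1, support_t2)]
r = pearson_r(deference, delta)
```

A negative r here would mean that participants scoring higher on the attitude measure reduced their support more, which is the pattern of interest for the debunking condition.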