
No reason to expect large and consistent effects of nudge interventions.

Barnabas Szaszi1, Anthony Higney2, Aaron Charlton3, Andrew Gelman4, Ignazio Ziano5, Balazs Aczel1, Daniel G Goldstein6, David S Yeager7, Elizabeth Tipton8.   

Abstract


Year:  2022        PMID: 35858388      PMCID: PMC9351519          DOI: 10.1073/pnas.2200732119

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   12.779


As policy makers are increasingly interested in implementing nudge-type interventions, it is essential that we understand under what conditions these interventions can improve policy-relevant outcomes, so as to make the best possible use of public resources. For that reason, the recently published meta-analysis by Mertens et al. (1) of the choice architecture literature is laudable. Our reading of the data and analyses, however, is quite different from Mertens et al.’s (1): nudge interventions may work under certain conditions, but their effectiveness can vary to a great degree, and the conditions under which they work are barely identified in the literature (2).

For example, the authors assume that the nudge literature is affected by publication bias; that is, larger positive and statistically significant comparisons are more likely to be reported. After adjusting for a hypothesized severe to moderate degree of publication bias, their adjusted estimated average effect of nudges is between d = 0.08 (severe) and d = 0.31 (moderate). Our additional analysis of the same database, applying three different bias-correcting methods, likewise yielded much smaller effect sizes than the unadjusted estimate (Andrews–Kasy, d = −0.01, SE = 0.02; weighted average of the adequately powered [WAAP], d = 0.07, SE = 0.03; trim-and-fill, d = 0.08, SE = 0.03) (see also ref. 3).

Furthermore, the authors estimate that, even after adjusting for publication bias, the effects of nudge interventions vary considerably across studies. For example, assuming a severe degree of publication bias, 95% of these studies’ effects would lie within ±1.00 of the average d = 0.08 effect, showing large variability, with much of this variability possibly arising from variability in publication bias itself.

Nevertheless, Mertens et al. (1) focus their message on the average effect size estimated without adjusting for publication bias, concluding that “our results show that choice architecture interventions overall promote behavior change with a small to medium effect size of Cohen’s d = 0.43” (p. 1). We argue that this effect size is implausibly large, which could be misleading and could further strengthen researchers’ and practitioners’ overoptimistic expectations (3, 4) about the impact of nudges. Furthermore, the authors focus their conclusions on this average value and on subgroups, leaving aside the large degree of unexplained heterogeneity (5) in apparent effects across published studies. For example, despite the analyses above being consistent with a large proportion of studies having near-zero underlying effects, the authors conclude that nudges work “across a wide range of behavioral domains, population segments, and geographical locations” (p. 7).

Thankfully, it is because Mertens et al. (1) conducted these analyses and shared their data that we were able to notice these contradictions between findings and conclusions. We argue that, as a scientific field, instead of focusing on average effects, we need to understand when and where some nudges have huge positive effects and why others are not able to repeat those successes (2, 4, 5). Until then, with a few exceptions [e.g., defaults (6)], we see no reason to expect large and consistent effects when designing nudge experiments or running interventions.
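The ±1.00 spread around d = 0.08 quoted above corresponds to an approximate 95% prediction interval for the true effects in a random-effects meta-analysis. A minimal sketch of that arithmetic (our own illustration, not the authors' code; it back-solves the implied between-study standard deviation τ from the reported half-width and ignores estimation error in the mean):

```python
def prediction_interval(mean_d, tau, z=1.96):
    """Approximate 95% prediction interval for true effects in a
    random-effects meta-analysis: mean_d +/- z * tau.
    Ignores uncertainty in the estimate of mean_d itself."""
    half_width = z * tau
    return (mean_d - half_width, mean_d + half_width)

# A +/-1.00 spread around the mean implies tau of roughly 1.00 / 1.96.
tau = 1.00 / 1.96
low, high = prediction_interval(0.08, tau)
print(round(low, 2), round(high, 2))  # -0.92 1.08
```

On this reading, even with an average effect near zero, individual studies' underlying effects can plausibly range from strongly negative to strongly positive, which is the heterogeneity point the letter emphasizes.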
  4 in total

1.  Megastudies improve the impact of applied behavioural science.

Authors:  Katherine L Milkman; Dena Gromet; Hung Ho; Joseph S Kay; Timothy W Lee; Pepi Pandiloski; Yeji Park; Aneesh Rai; Max Bazerman; John Beshears; Lauri Bonacorsi; Colin Camerer; Edward Chang; Gretchen Chapman; Robert Cialdini; Hengchen Dai; Lauren Eskreis-Winkler; Ayelet Fishbach; James J Gross; Samantha Horn; Alexa Hubbard; Steven J Jones; Dean Karlan; Tim Kautz; Erika Kirgios; Joowon Klusowski; Ariella Kristal; Rahul Ladhania; George Loewenstein; Jens Ludwig; Barbara Mellers; Sendhil Mullainathan; Silvia Saccardo; Jann Spiess; Gaurav Suri; Joachim H Talloen; Jamie Taxer; Yaacov Trope; Lyle Ungar; Kevin G Volpp; Ashley Whillans; Jonathan Zinman; Angela L Duckworth
Journal:  Nature       Date:  2021-12-08       Impact factor: 49.962

2.  Behavioural science is unlikely to change the world without a heterogeneity revolution. [Review]

Authors:  Christopher J Bryan; Elizabeth Tipton; David S Yeager
Journal:  Nat Hum Behav       Date:  2021-07-22

3.  The effectiveness of nudging: A meta-analysis of choice architecture interventions across behavioral domains.

Authors:  Stephanie Mertens; Mario Herberz; Ulf J J Hahnel; Tobias Brosch
Journal:  Proc Natl Acad Sci U S A       Date:  2022-01-04       Impact factor: 12.779

4.  No reason to expect large and consistent effects of nudge interventions.

Authors:  Barnabas Szaszi; Anthony Higney; Aaron Charlton; Andrew Gelman; Ignazio Ziano; Balazs Aczel; Daniel G Goldstein; David S Yeager; Elizabeth Tipton
Journal:  Proc Natl Acad Sci U S A       Date:  2022-07-19       Impact factor: 12.779

  3 in total

1.  Left-truncated effects and overestimated meta-analytic means.

Authors:  Jonathan Z Bakdash; Laura R Marusich
Journal:  Proc Natl Acad Sci U S A       Date:  2022-07-19       Impact factor: 12.779

2.  No reason to expect large and consistent effects of nudge interventions.

Authors:  Barnabas Szaszi; Anthony Higney; Aaron Charlton; Andrew Gelman; Ignazio Ziano; Balazs Aczel; Daniel G Goldstein; David S Yeager; Elizabeth Tipton
Journal:  Proc Natl Acad Sci U S A       Date:  2022-07-19       Impact factor: 12.779

3.  Reply to Maier et al., Szaszi et al., and Bakdash and Marusich: The present and future of choice architecture research.

Authors:  Stephanie Mertens; Mario Herberz; Ulf J J Hahnel; Tobias Brosch
Journal:  Proc Natl Acad Sci U S A       Date:  2022-07-19       Impact factor: 12.779

