Molly Lewis, Maya B Mathur, Tyler J VanderWeele, Michael C Frank.
Abstract
What is the best way to estimate the size of important effects? Should we aggregate across disparate findings using statistical meta-analysis, or instead run large, multi-laboratory replications (MLR)? A recent paper by Kvarven, Strømland and Johannesson (Kvarven et al. 2020 Nat. Hum. Behav. 4, 423-434. (doi:10.1038/s41562-019-0787-z)) compared effect size estimates derived from these two different methods for 15 different psychological phenomena. The authors reported that, for the same phenomenon, the meta-analytic estimate tended to be about three times larger than the MLR estimate. These results are a specific example of a broader question: What is the relationship between meta-analysis and MLR estimates? Kvarven et al. suggested that their results undermine the value of meta-analysis. By contrast, we argue that both meta-analysis and MLR are informative, and that the discrepancy between the two estimates that they observed is in fact still largely unexplained. Informed by re-analyses of Kvarven et al.'s data and by other empirical evidence, we discuss possible sources of this discrepancy and argue that understanding the relationship between estimates obtained from these two methods is an important puzzle for future meta-scientific research.
Keywords: meta-analysis; meta-science; multi-laboratory replication
Year: 2022 PMID: 35223059 PMCID: PMC8864345 DOI: 10.1098/rsos.211499
Source DB: PubMed Journal: R Soc Open Sci ISSN: 2054-5703 Impact factor: 3.653
Figure 1. Correlation between effect size estimates from multiple-laboratory replications and random-effects meta-analytic estimates (Pearson's r(13) = 0.72 [0.32, 0.9], p = 0.003). Each point corresponds to a phenomenon (N = 15), and ranges indicate 95% confidence intervals. The best-fitting linear model is shown with a band corresponding to the standard error. The dashed reference line has a slope of 1.
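As an illustrative check (not code from the paper), the 95% confidence interval reported for Pearson's r in figure 1 (r = 0.72, n = 15 phenomena) can be approximated with the standard Fisher z-transform; the helper function names here are hypothetical, and the paper's interval may have been computed by a slightly different method.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transform."""
    zr = math.atanh(r)           # map r to an approximately normal scale
    se = 1.0 / math.sqrt(n - 3)  # standard error on the transformed scale
    return math.tanh(zr - z * se), math.tanh(zr + z * se)

# Using the values reported in the figure 1 caption (r = 0.72, n = 15):
lo, hi = fisher_ci(0.72, 15)
# lo and hi come out near 0.33 and 0.90, close to the reported [0.32, 0.9]
```

The small difference at the lower bound versus the reported interval is expected, since the Fisher approximation is only one of several ways to form a CI for a correlation at this sample size.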
Figure 2. The text values on the right represent estimated percentages, with corresponding 95% CIs, of true population effects in the naive meta-analysis that are as small as, or smaller than, the MLR estimate. CIs are omitted when they were not estimable via bias-corrected and accelerated bootstrapping [19]. The left side of the figure shows estimates from sensitivity analyses representing worst-case publication bias (vertical tick marks) versus naive meta-analysis estimates (triangles) and multi-laboratory replication estimates (MLR; circles). For orange-coloured meta-analyses, the worst-case estimate exceeds the MLR estimate, indicating that no amount of publication bias could entirely explain the discrepancy between the naive estimate and the MLR estimate.