
99% impossible: A valid, or falsifiable, internal meta-analysis.

Joachim Vosgerau, Uri Simonsohn, Leif D Nelson, Joseph P Simmons

Abstract

Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (a) all conducted studies were included (i.e., an empty file drawer), and (b) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of 1 study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (a) an internal meta-analysis would need to contain exclusively studies that were properly preregistered, (b) those preregistrations would have to be followed in all essential aspects, and (c) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
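The abstract's core claim — that a hack mild enough to nudge one study's false-positive rate to roughly 8% can push a 10-study internal meta-analysis above 80% — can be illustrated with a Monte Carlo sketch. The simulation below is not the authors' exact model; the specific hack (reporting the friendlier of two correlated dependent variables), the sample sizes, the correlation, and the use of a Stouffer-style combined test are all illustrative assumptions. It shows the qualitative mechanism: a small directional bias per study compounds when studies are aggregated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_sims, k_studies, n_per_group = 4000, 10, 25
rho = 0.5  # assumed correlation between the two candidate DVs

# Two-group null data for two correlated DVs:
# shape (sims, studies, groups=2, subjects); the true effect is zero.
base = rng.standard_normal((n_sims, k_studies, 2, n_per_group))
extra = rng.standard_normal((n_sims, k_studies, 2, n_per_group))
dv1 = base
dv2 = rho * base + np.sqrt(1 - rho**2) * extra

def t_stats(x):
    # Welch-style t for treatment (group 0) minus control (group 1).
    m = x.mean(axis=-1)
    v = x.var(axis=-1, ddof=1)
    return (m[..., 0] - m[..., 1]) / np.sqrt(
        v[..., 0] / n_per_group + v[..., 1] / n_per_group
    )

t1, t2 = t_stats(dv1), t_stats(dv2)

# "Minimal" p-hack: report whichever DV looks better in the
# predicted (positive) direction.
t_reported = np.maximum(t1, t2)

# Single-study false positives: two-sided p < .05 in the predicted direction.
df = 2 * n_per_group - 2
single_fp = (2 * stats.t.sf(t_reported, df) < 0.05).mean()

# Internal meta-analysis of each set of 10 studies via Stouffer's
# combined z (treating each reported t as an approximate z-score).
z_meta = t_reported.sum(axis=1) / np.sqrt(k_studies)
meta_fp = (z_meta > stats.norm.ppf(0.975)).mean()

print(f"single-study false-positive rate: {single_fp:.3f}")
print(f"10-study meta-analytic false-positive rate: {meta_fp:.3f}")
```

Under these assumptions the per-study inflation is modest (a few points above the nominal directional 2.5%), but the meta-analytic false-positive rate is several times larger, because the small same-direction bias in every reported study accumulates rather than averages out. Stronger hacks per study push the aggregate rate correspondingly higher, which is the paper's point.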


Year:  2019        PMID: 31464485     DOI: 10.1037/xge0000663

Source DB:  PubMed          Journal:  J Exp Psychol Gen        ISSN: 0096-3445


Cited by:  6 in total

1.  Comparing meta-analyses and preregistered multiple-laboratory replication projects.

Authors:  Amanda Kvarven; Eirik Strømland; Magnus Johannesson
Journal:  Nat Hum Behav       Date:  2019-12-23

2.  Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of.

Authors:  Jonathan Z Bakdash; Laura R Marusich; Jared B Kenworthy; Elyssa Twedt; Erin G Zaroukian
Journal:  Front Psychol       Date:  2020-12-22

3.  [Review] Empirical audit and review and an assessment of evidentiary value in research on the psychological consequences of scarcity.

Authors:  Michael O'Donnell; Amelia S Dev; Stephen Antonoplis; Stephen M Baum; Arianna H Benedetti; N Derek Brown; Belinda Carrillo; Andrew L Choi; Paul Connor; Kristin Donnelly; Monica E Ellwood-Lowe; Ruthe Foushee; Rachel Jansen; Shoshana N Jarvis; Ryan Lundell-Creagh; Joseph M Ocampo; Gold N Okafor; Zahra Rahmani Azad; Michael Rosenblum; Derek Schatz; Daniel H Stein; Yilu Wang; Don A Moore; Leif D Nelson
Journal:  Proc Natl Acad Sci U S A       Date:  2021-11-02       Impact factor: 11.205

4.  Personal relative deprivation and pro-environmental intentions.

Authors:  William J Skylark; Mitchell J Callan
Journal:  PLoS One       Date:  2021-11-18       Impact factor: 3.240

5.  Accuracy prompts are a replicable and generalizable approach for reducing the spread of misinformation.

Authors:  Gordon Pennycook; David G Rand
Journal:  Nat Commun       Date:  2022-04-28       Impact factor: 17.694

6.  Putting the Self in Self-Correction: Findings From the Loss-of-Confidence Project.

Authors:  Julia M Rohrer; Warren Tierney; Eric L Uhlmann; Lisa M DeBruine; Tom Heyman; Benedict Jones; Stefan C Schmukle; Raphael Silberzahn; Rebecca M Willén; Rickard Carlsson; Richard E Lucas; Julia Strand; Simine Vazire; Jessica K Witt; Thomas R Zentall; Christopher F Chabris; Tal Yarkoni
Journal:  Perspect Psychol Sci       Date:  2021-03-01
