Tiago V Pereira, John P A Ioannidis. Department of Hygiene and Epidemiology, Clinical Trials and Evidence-Based Medicine Unit, University of Ioannina School of Medicine, Ioannina 45110, Greece.
Abstract
OBJECTIVE: To assess whether nominally statistically significant effects in meta-analyses of clinical trials are true and whether their magnitude is inflated.

STUDY DESIGN AND SETTING: Data from the Cochrane Database of Systematic Reviews 2005 (issue 4) and 2010 (issue 1) were used. We considered meta-analyses with binary outcomes and four or more trials in 2005 with P<0.05 for the random-effects odds ratio (OR). We examined whether any of these meta-analyses had updated counterparts in 2010. We estimated the credibility (true-positive probability) under different prior assumptions and the inflation in OR estimates in 2005.

RESULTS: Four hundred sixty-one meta-analyses in 2005 were eligible, and 80 had additional trials included by 2010. The effect sizes (ORs) were smaller in the updating data (2005-2010) than in the respective meta-analyses in 2005 (median 0.85-fold, interquartile range [IQR]: 0.66-1.06), even more prominently for meta-analyses with fewer than 300 events in 2005 (median 0.67-fold, IQR: 0.54-0.96). Mean credibility of the 461 meta-analyses in 2005 was 63-84%, depending on the assumptions made. Credibility estimates changed by >20% in 19-31 (24-39%) of the 80 updated meta-analyses.

CONCLUSIONS: Most meta-analyses with nominally significant results pertain to truly nonnull effects, but exceptions are not uncommon. The magnitude of observed effects, especially in meta-analyses with limited evidence, is often inflated.
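The "credibility" the abstract refers to is the post-study probability that a nominally significant result reflects a true nonnull effect. As an illustrative sketch only (the paper's own estimation uses its stated prior assumptions on the Cochrane data, not this exact formula), the standard positive-predictive-value relation PPV = (1−β)R / ((1−β)R + α) shows how credibility depends on the prior odds R of a true effect, the power (1−β), and the significance threshold α:

```python
# Illustrative sketch, NOT the paper's exact method: credibility of a
# nominally significant finding as its post-study true-positive
# probability, via PPV = (1 - beta) * R / ((1 - beta) * R + alpha).
# R = prior odds of a true effect, (1 - beta) = power, alpha = threshold.

def credibility(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Probability that a significant result reflects a true effect."""
    true_pos = power * prior_odds   # rate of true-positive significant results
    false_pos = alpha               # rate of false-positive significant results
    return true_pos / (true_pos + false_pos)

# With even prior odds and 80% power, credibility is high; with long
# prior odds and low power it drops sharply - consistent with the
# abstract's point that exceptions to true effects are not uncommon.
print(round(credibility(prior_odds=1.0, power=0.8), 3))   # 0.941
print(round(credibility(prior_odds=0.1, power=0.2), 3))   # 0.286
```

The parameter names here are hypothetical conveniences; the paper's range of 63-84% mean credibility corresponds to varying such prior assumptions across the 461 meta-analyses.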