
Why small low-powered studies are worse than large high-powered studies and how to protect against "trivial" findings in research: comment on Friston (2012).

Michael Ingre

Abstract

It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, provided researchers make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to "confirm" true effects, and a bias toward inflated reported effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), which larger studies (n=100) do with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments needed for researchers to refute the claim that small low-powered studies have a higher degree of scientific evidence than large high-powered studies.
Copyright © 2013 Elsevier Inc. All rights reserved.
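The abstract's numeric claim can be checked with the standard t transform of a sample correlation, t = r·sqrt(n−2)/sqrt(1−r²) with n−2 degrees of freedom. A minimal sketch (the helper `t_from_r` is illustrative, not from the paper):

```python
import math

def t_from_r(r, n):
    # t statistic for testing H0: rho = 0, given sample correlation r and size n.
    # Degrees of freedom are n - 2.
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# n = 16: t ~ 2.16 with df = 14; the two-tailed critical t at alpha = .05 is
# ~2.145, so even r = .50 is only barely distinguishable from random noise.
t_small = t_from_r(0.50, 16)

# n = 100: t ~ 5.72 with df = 98, corresponding to p ~ 1.2e-7, which matches
# the abstract's p = .00000012 for r = .50.
t_large = t_from_r(0.50, 100)
```

This makes the asymmetry concrete: at n=16 any correlation below .50 fails to reach significance, while at n=100 the same r=.50 clears the threshold by a wide margin.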

Keywords:  False positive findings; Inflated effect sizes; Statistical power


Year:  2013        PMID: 23583358     DOI: 10.1016/j.neuroimage.2013.03.030

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


  23 in total

1.  Confidence and precision increase with high statistical power.

Authors:  Katherine S Button; John P A Ioannidis; Claire Mokrysz; Brian A Nosek; Jonathan Flint; Emma S J Robinson; Marcus R Munafò
Journal:  Nat Rev Neurosci       Date:  2013-07-03       Impact factor: 34.870

2.  Cross-validation and hypothesis testing in neuroimaging: An irenic comment on the exchange between Friston and Lindquist et al.

Authors:  Philip T Reiss
Journal:  Neuroimage       Date:  2015-04-25       Impact factor: 6.556

3.  Discovery reliability.

Authors:  Anders Hånell
Journal:  J Cereb Blood Flow Metab       Date:  2019-03-13       Impact factor: 6.200

4.  The reliability paradox: Why robust cognitive tasks do not produce reliable individual differences.

Authors:  Craig Hedge; Georgina Powell; Petroc Sumner
Journal:  Behav Res Methods       Date:  2018-06

5.  Searching for behavior relating to grey matter volume in a-priori defined right dorsal premotor regions: Lessons learned.

Authors:  Sarah Genon; Tobias Wensing; Andrew Reid; Felix Hoffstaedter; Svenja Caspers; Christian Grefkes; Thomas Nickl-Jockschat; Simon B Eickhoff
Journal:  Neuroimage       Date:  2017-05-25       Impact factor: 6.556

6.  [Review] Attempted and successful compensation in preclinical and early manifest neurodegeneration - a review of task fMRI studies.

Authors:  Elisa Scheller; Lora Minkova; Mathias Leitner; Stefan Klöppel
Journal:  Front Psychiatry       Date:  2014-09-29       Impact factor: 4.157

7.  Statistical inferences under the Null hypothesis: common mistakes and pitfalls in neuroimaging studies.

Authors:  Jean-Michel Hupé
Journal:  Front Neurosci       Date:  2015-02-19       Impact factor: 4.677

8.  Potential reporting bias in fMRI studies of the brain.

Authors:  Sean P David; Jennifer J Ware; Isabella M Chu; Pooja D Loftus; Paolo Fusar-Poli; Joaquim Radua; Marcus R Munafò; John P A Ioannidis
Journal:  PLoS One       Date:  2013-07-25       Impact factor: 3.240

9.  Two distinct dynamic modes subtend the detection of unexpected sounds.

Authors:  Jean-Rémi King; Alexandre Gramfort; Aaron Schurger; Lionel Naccache; Stanislas Dehaene
Journal:  PLoS One       Date:  2014-01-27       Impact factor: 3.240

10.  Attention shifts the language network reflecting paradigm presentation.

Authors:  Kathrin Kollndorfer; Julia Furtner; Jacqueline Krajnik; Daniela Prayer; Veronika Schöpf
Journal:  Front Hum Neurosci       Date:  2013-11-25       Impact factor: 3.169

