| Literature DB >> 23583358 |
Abstract
It is sometimes argued that small studies provide better evidence for reported effects because they are less likely to report findings with small and trivial effect sizes (Friston, 2012). But larger studies are actually better at protecting against inferences from trivial effect sizes, provided researchers make use of effect sizes and confidence intervals. Poor statistical power also comes at the cost of an inflated proportion of false positive findings, less power to "confirm" true effects, and bias toward inflated reported effect sizes. Small studies (n=16) lack the precision to reliably distinguish small and medium-to-large effect sizes (r<.50) from random noise (α=.05), whereas larger studies (n=100) can do so with a high level of confidence (r=.50, p=.00000012). The present paper presents the arguments needed for researchers to refute the claim that small, low-powered studies have a higher degree of scientific evidence than large, high-powered studies.
Keywords: False positive findings; Inflated effect sizes; Statistical power
Mesh:
Year: 2013 PMID: 23583358 DOI: 10.1016/j.neuroimage.2013.03.030
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556
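The sample-size contrast in the abstract (r=.50 barely reaching α=.05 at n=16, but yielding p on the order of 10^-7 at n=100) can be reproduced approximately with a short sketch. This is an illustration only, not the paper's own computation: it uses the Fisher z-transformation as a normal approximation to the significance test for a Pearson correlation, so the p-values differ slightly from the exact t-test values quoted in the abstract.

```python
import math
from statistics import NormalDist

def fisher_z_p(r, n):
    """Approximate two-tailed p-value for a Pearson correlation r
    under H0: rho = 0, via the Fisher z-transformation."""
    z = math.atanh(r)              # Fisher z of the observed correlation
    se = 1 / math.sqrt(n - 3)      # standard error of z under H0
    z_stat = z / se
    return 2 * NormalDist().cdf(-abs(z_stat))

# r = .50 only barely clears alpha = .05 with n = 16, but is
# overwhelmingly significant with n = 100:
print(fisher_z_p(0.50, 16))   # roughly 0.048
print(fisher_z_p(0.50, 100))  # roughly 6e-8
```

The two calls show why the abstract calls n=16 imprecise: the same observed effect sits at the edge of the significance threshold in the small sample but is many orders of magnitude below it in the larger one.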