
A new method for choosing sample size for confidence interval-based inferences.

Michael R Jiroutek, Keith E Muller, Lawrence L Kupper, Paul W Stewart.

Abstract

Scientists often need to test hypotheses and construct corresponding confidence intervals. In designing a study to test a particular null hypothesis, traditional methods lead to a sample size large enough to provide sufficient statistical power. In contrast, traditional methods based on constructing a confidence interval lead to a sample size likely to control the width of the interval. With either approach, a sample size so large as to waste resources or introduce ethical concerns is undesirable. This work was motivated by the concern that existing sample size methods often make it difficult for scientists to achieve their actual goals. We focus on situations which involve a fixed, unknown scalar parameter representing the true state of nature. The width of the confidence interval is defined as the difference between the (random) upper and lower bounds. An event width is said to occur if the observed confidence interval width is less than a fixed constant chosen a priori. An event validity is said to occur if the parameter of interest is contained between the observed upper and lower confidence interval bounds. An event rejection is said to occur if the confidence interval excludes the null value of the parameter. In our opinion, scientists often implicitly seek to have all three occur: width, validity, and rejection. New results illustrate that neglecting rejection or width (and less so validity) often provides a sample size with a low probability of the simultaneous occurrence of all three events. We recommend considering all three events simultaneously when choosing a criterion for determining a sample size. We provide new theoretical results for any scalar (mean) parameter in a general linear model with Gaussian errors and fixed predictors. Convenient computational forms are included, as well as numerical examples to illustrate our methods.
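The joint criterion described in the abstract can be illustrated with a small Monte Carlo simulation. The sketch below estimates the probability that the events width, validity, and rejection occur simultaneously for a one-sample mean. It is only an illustrative example, not the paper's method: it uses a normal-quantile approximation to the t-based confidence interval, and all parameter values (true mean, null value, target width) are hypothetical.

```python
import random
import statistics
from statistics import NormalDist

def simulate_joint_probability(n, mu, sigma, mu0=0.0, max_width=1.0,
                               alpha=0.05, n_sims=20000, seed=1):
    """Estimate P(width AND validity AND rejection) by simulation.

    width:     observed interval width is less than max_width
    validity:  the true mean mu lies inside the interval
    rejection: the null value mu0 lies outside the interval

    Uses a normal-quantile approximation to the t interval
    (reasonable for moderate-to-large n); illustrative only.
    """
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    count = 0
    for _ in range(n_sims):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = statistics.fmean(sample)
        s = statistics.stdev(sample)          # estimated SD -> random width
        half = z * s / n ** 0.5
        lower, upper = xbar - half, xbar + half
        width_ok = (upper - lower) < max_width
        validity = lower <= mu <= upper
        rejection = mu0 < lower or mu0 > upper
        if width_ok and validity and rejection:
            count += 1
    return count / n_sims
```

Running this over a grid of sample sizes shows the abstract's point: a sample size chosen to control width alone, or power alone, can leave the joint probability of all three events well below the level a scientist implicitly wants.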

Year:  2003        PMID: 14601759     DOI: 10.1111/1541-0420.00068

Source DB:  PubMed          Journal:  Biometrics        ISSN: 0006-341X            Impact factor:   2.571


  5 in total

1.  Power(ful) myths: misconceptions regarding sample size in quality of life research.

Authors:  Samantha F Anderson
Journal:  Qual Life Res       Date:  2021-10-29       Impact factor: 3.440

2.  A sample size planning approach that considers both statistical significance and clinical significance.

Authors:  Bin Jia; Henry S Lynn
Journal:  Trials       Date:  2015-05-12       Impact factor: 2.279

3.  Conditional equivalence testing: An alternative remedy for publication bias.

Authors:  Harlan Campbell; Paul Gustafson
Journal:  PLoS One       Date:  2018-04-13       Impact factor: 3.240

4.  Bayesian sample size determination for diagnostic accuracy studies.

Authors:  Kevin J Wilson; S Faye Williamson; A Joy Allen; Cameron J Williams; Thomas P Hellyer; B Clare Lendrem
Journal:  Stat Med       Date:  2022-04-10       Impact factor: 2.497

5.  Predicting sample size required for classification performance.

Authors:  Rosa L Figueroa; Qing Zeng-Treitler; Sasikiran Kandula; Long H Ngo
Journal:  BMC Med Inform Decis Mak       Date:  2012-02-15       Impact factor: 2.796


Coyote Bioscience (Beijing) Co., Ltd. © 2022-2023.