Melanie L Bell1, Armando Teixeira-Pinto2, Joanne E McKenzie3, Jake Olivier4. 1. Psycho-Oncology Co-Operative Research Group, School of Psychology, University of Sydney, Australia; Mel and Enid Zuckerman College of Public Health, University of Arizona, 295 N Martin Ave, Tucson, AZ 85724, USA. 2. School of Public Health, Edward Ford Building (A27), University of Sydney, Sydney NSW 2006, Australia. Electronic address: armando.teixeira-pinto@sydney.edu.au. 3. School of Public Health and Preventive Medicine, Alfred Centre, Monash University, Melbourne VIC 3004, Australia. 4. School of Mathematics and Statistics, The Red Centre, The University of New South Wales, Sydney 2052, Australia.
Abstract
OBJECTIVES: Several methods exist to calculate sample size for the difference of proportions (risk difference). Researchers are often unaware that different formulae exist, that they rest on different underlying assumptions, and that the choice of formula affects the calculated sample size. The aim of this study was to discuss and compare different sample size formulae for the risk difference. STUDY DESIGN AND SETTING: Four sample size formulae were used to calculate sample size for nine scenarios. Software documentation for SAS, Stata, G*Power, PASS, StatXact, and several R libraries was searched for default assumptions. Each package was used to calculate sample size for two scenarios. RESULTS: We demonstrate that, for a given set of parameters, sample size can vary by as much as 60% depending on the formula used. Varying software and assumptions yielded discrepancies of 78% and 7% between the smallest and largest calculated sizes for the two scenarios, respectively. Discrepancies were most pronounced when powering for large risk differences. Default assumptions varied considerably between software packages, and defaults were not clearly documented. CONCLUSION: Researchers should be aware of the assumptions underlying power calculations made by different statistical software packages. Assumptions should be explicitly stated in grant proposals and manuscripts and should match the proposed analyses.
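The abstract does not state which four formulae were compared, but the sensitivity it describes can be illustrated with two well-known textbook variants of the normal-approximation sample size formula for two proportions: one using unpooled variances and one pooling the proportions under the null. The sketch below (an assumption for illustration, not the authors' actual method) shows how the two formulae give different per-group sample sizes for the same inputs.

```python
from math import ceil, sqrt
from statistics import NormalDist


def n_per_group_unpooled(p1, p2, alpha=0.05, power=0.80):
    """Per-group n using unpooled variances (one common textbook formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = NormalDist().inv_cdf(power)           # quantile for the desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)


def n_per_group_pooled(p1, p2, alpha=0.05, power=0.80):
    """Per-group n pooling the proportions under H0 (another common formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2                        # pooled proportion under H0
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)


# Same parameters, different formulae, different answers:
print(n_per_group_unpooled(0.6, 0.4))  # 95 per group
print(n_per_group_pooled(0.6, 0.4))    # 97 per group
```

Even in this simple case the pooled formula asks for a larger n; as the abstract reports, discrepancies across formulae and software defaults can be far larger, especially for large risk differences.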