Lilian Golzarri-Arroyo1, Stephanie L Dickinson2, Yasaman Jamshidi-Naeini2, Roger S Zoh2, Andrew W Brown3, Arthur H Owora2, Peng Li4, J Michael Oakes5, David B Allison2. 1. Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, USA. Electronic address: lgolzarr@indiana.edu. 2. Department of Epidemiology and Biostatistics, Indiana University School of Public Health-Bloomington, USA. 3. Department of Applied Health Science, Indiana University School of Public Health-Bloomington, USA. 4. School of Nursing, University of Alabama at Birmingham, USA. 5. School of Public Health, University of Minnesota, USA.
Abstract
BACKGROUND: Cluster randomized controlled trials (cRCTs) are increasingly used but must be analyzed carefully. We conducted a simulation study to evaluate the validity of a parametric bootstrap (PB) approach with respect to the empirical type I error rate for a cRCT with binary outcomes and a small number of clusters. METHODS: We simulated a case study with a binary (0/1) outcome, four clusters, and 100 subjects per cluster. To compare the validity of the test with respect to error rate, we simulated the same experiment with K=10, 20, and 30 clusters, generating 2,000 simulated datasets for each value of K. To test the null hypothesis, we fit a generalized linear mixed model including a random intercept for clusters and obtained p-values from likelihood ratio tests (LRTs) using the parametric bootstrap method as implemented in the R package "pbkrtest". RESULTS: The PB test produced error rates of 9.1%, 5.5%, 4.9%, and 5.0% on average across all ICC values for K=4, K=10, K=20, and K=30, respectively. Error rates were higher, ranging from 9.1% to 36.5% for K=4, in models with singular fits (i.e., the cluster variance, and hence the ICC, was estimated to be zero, so clustering was effectively ignored). CONCLUSION: Using the parametric bootstrap for cRCTs with a small number of clusters results in inflated type I error rates and is not valid.
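The data-generating step described in METHODS (binary outcomes from a random-intercept logistic model, with the ICC controlling the cluster-level variance) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the study itself was run in R with "pbkrtest", and all names here (`simulate_crct`, `icc`, `p_marginal`) are hypothetical. On the latent logistic scale, the ICC relates to the random-intercept variance via ICC = sigma^2 / (sigma^2 + pi^2/3).

```python
# Hedged sketch of one null-hypothesis cRCT dataset, assuming a
# random-intercept logistic data-generating model as described in METHODS.
import math
import random

def simulate_crct(k_clusters=4, n_per_cluster=100, icc=0.05,
                  p_marginal=0.5, seed=0):
    """Simulate one cRCT dataset under the null (no treatment effect).

    Latent-scale ICC for a logistic random-intercept model:
        ICC = sigma^2 / (sigma^2 + pi^2/3)
    so  sigma^2 = ICC * (pi^2/3) / (1 - ICC).
    Returns a list of (cluster, arm, outcome) tuples.
    """
    rng = random.Random(seed)
    sigma2 = icc * (math.pi ** 2 / 3) / (1 - icc)
    beta0 = math.log(p_marginal / (1 - p_marginal))  # marginal logit (approx.)
    data = []
    for k in range(k_clusters):
        b_k = rng.gauss(0, math.sqrt(sigma2))  # cluster random intercept
        arm = k % 2                            # clusters alternate between arms
        p = 1 / (1 + math.exp(-(beta0 + b_k)))  # outcome prob., no arm effect
        for _ in range(n_per_cluster):
            data.append((k, arm, 1 if rng.random() < p else 0))
    return data

rows = simulate_crct()  # the K=4, n=100 case from the abstract
```

In the study, each such dataset would then be analyzed with a GLMM random-intercept fit and a parametric-bootstrap LRT; repeating over 2,000 datasets and counting rejections at alpha = 0.05 yields the empirical type I error rates reported in RESULTS.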