OBJECTIVE: In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. We also explore the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design, and we assess whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates.
RESEARCH DESIGN: We conduct a validation study of the CITS design based on the federal Reading First program as implemented in one state, using results from a regression discontinuity design as a causal benchmark.
RESULTS: Our results contribute to the growing base of evidence on the validity of nonexperimental designs. We demonstrate that the CITS design can, in our example, produce internally valid estimates of program impacts when multiple years of preintervention outcome data (test scores in the present case) are available and when a reasonable set of criteria is used to select comparison organizations (schools in the present case).
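The CITS estimator discussed in the abstract can be illustrated with a minimal, hypothetical sketch on simulated data (not data from the study): each school contributes pre- and postintervention scores, group-specific trends are allowed, and the program effect is read off the treated-by-post interaction. Variable names, the simulated effect size, and the regression specification are illustrative assumptions.

```python
# Hypothetical CITS sketch on simulated school test scores (not study data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
years = np.arange(-4, 4)  # 4 preintervention years, 4 postintervention years
rows = []
for school in range(40):
    treated = int(school < 20)     # first 20 schools receive the program
    base = rng.normal(50, 2)       # school-specific baseline level
    for t in years:
        post = int(t >= 0)
        # Assumed true program effect: +3 points for treated schools post-break.
        score = base + 0.5 * t + 3 * post * treated + rng.normal(0, 1)
        rows.append({"school": school, "year": t, "treated": treated,
                     "post": post, "score": score})
df = pd.DataFrame(rows)

# CITS model: group-specific levels and trends; the impact estimate is the
# deviation of the treated group from its preintervention trend, relative to
# the same deviation in the comparison group (the treated:post interaction).
model = smf.ols("score ~ treated * (year + post)", data=df).fit()
effect = model.params["treated:post"]
print(round(effect, 2))
```

With multiple preintervention years, the `treated:year` term lets the design check whether treated and comparison schools were on similar trends before the program, which is the key identifying assumption the abstract's validation exercise probes.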
Authors: Kristin H Gigli; Billie S Davis; Jonathan G Yabes; Chung-Chou H Chang; Derek C Angus; Tina Batra Hershey; Jennifer R Marin; Grant R Martsolf; Jeremy M Kahn Journal: Pediatrics Date: 2020-07 Impact factor: 7.124
Authors: Jeremy M Kahn; Billie S Davis; Jonathan G Yabes; Chung-Chou H Chang; David H Chong; Tina Batra Hershey; Grant R Martsolf; Derek C Angus Journal: JAMA Date: 2019-07-16 Impact factor: 56.272
Authors: Donald S Bourne; Billie S Davis; Kristin H Gigli; Chung-Chou H Chang; Jonathan G Yabes; Grant R Martsolf; Jeremy M Kahn Journal: Crit Care Med Date: 2020-10 Impact factor: 9.296
Authors: Trevor Hill; Carol Coupland; Denise Kendrick; Matthew Jones; Ashley Akbari; Sarah Rodgers; Michael Craig Watson; Edward Tyrrell; Sheila Merrill; Elizabeth Orton Journal: J Epidemiol Community Health Date: 2021-06-22 Impact factor: 3.710