Hugh Waddington1, Ariel M Aloe2, Betsy Jane Becker3, Eric W Djimeu4, Jorge Garcia Hombrados5, Peter Tugwell6, George Wells6, Barney Reeves7. 1. International Initiative for Impact Evaluation, New Delhi, India. Electronic address: hwaddington@3ieimpact.org. 2. University of Iowa, Iowa City, IA, USA. 3. Florida State University, Tallahassee, FL, USA. 4. International Initiative for Impact Evaluation, New Delhi, India. 5. University of Sussex, Brighton, UK. 6. Department of Medicine, University of Ottawa, Ottawa, Canada. 7. University of Bristol, Bristol, UK.
Abstract
OBJECTIVES: Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies that use design-based approaches to control for unobservable sources of confounding, such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. STUDY DESIGN AND SETTING: We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. RESULTS: The review suggests that existing risk of bias tools provide, to differing degrees, incomplete and insufficiently transparent criteria for assessing the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. CONCLUSION: We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables.