Ellicott C Matthay, Erin Hagan, Laura M Gottlieb, May Lynn Tan, David Vlahov, Nancy E Adler, M Maria Glymour.
Abstract
Population health researchers from different fields often address similar substantive questions but rely on different study designs, reflecting their home disciplines. This is especially true in studies involving causal inference, for which semantic and substantive differences inhibit interdisciplinary dialogue and collaboration. In this paper, we group nonrandomized study designs into two categories: those that use confounder-control (such as regression adjustment or propensity score matching) and those that rely on an instrument (such as instrumental variables, regression discontinuity, or difference-in-differences approaches). Using the Shadish, Cook, and Campbell framework for evaluating threats to validity, we contrast the assumptions, strengths, and limitations of these two approaches and illustrate differences with examples from the literature on education and health. Across disciplines, all methods to test a hypothesized causal relationship involve unverifiable assumptions, and rarely is there clear justification for exclusive reliance on one method. Each method entails trade-offs between statistical power, internal validity, measurement quality, and generalizability. The choice between confounder-control and instrument-based methods should be guided by these trade-offs and by consideration of the most important limitations of previous work in the area. Our goals are to foster common understanding of the methods available for causal inference in population health research and the trade-offs between them; to encourage researchers to objectively evaluate what can be learned from methods outside their home discipline; and to facilitate the selection of methods that best answer the investigator's scientific questions.
Keywords: Causal inference; Econometrics; Epidemiologic methods; Instrumental variable; Quasi-experiment; Threats to validity
Year: 2019 PMID: 31890846 PMCID: PMC6926350 DOI: 10.1016/j.ssmph.2019.100526
Source DB: PubMed Journal: SSM Popul Health ISSN: 2352-8273
Comparison of common approaches to nonexperimental causal inference for population health scientists studying the effects of treatments.
| Feature | Confounder-control | Instrument-based |
|---|---|---|
| Main strategies for estimating causal effects | Identify, measure, and control for a sufficient set of confounders through matching, regression adjustment, propensity score methods, or related methods. | Identify and leverage a random or conditionally random source of variation in the chances of treatment that is otherwise unrelated to the outcome, using instrumental variables, regression discontinuity, difference-in-differences, or related approaches. |
| Key assumptions | Conditional exchangeability between treated and untreated individuals, including no uncontrolled common causes of treatment and outcome. | Variation in the instrumental variable alters chances of treatment, is unrelated to potential outcomes, and influences the outcome via no other mechanism except the treatment at hand. The instrument's variation cannot have opposite effects on probability of treatment for different people in the study. |
| Assessment of assumptions | Assumptions cannot be proven and are primarily evaluated based on background knowledge, negative controls, or testable implications of the hypothesized causal mechanisms. Measured covariates are often assumed to proxy for unmeasured covariates and inform sensitivity analyses. | The “relevance” assumption can be verified empirically. Other assumptions cannot be proven and are primarily evaluated using background knowledge, falsification tests drawing on multiple instrumental variables, or testable implications of the hypothesized causal mechanisms. |
| Typical analyses | Regression with confounder-control. Propensity score matching, adjustment, or weighting. Doubly robust analyses. | Two-stage least-squares regression. Method of moments. Residual control methods. |
| Key methodological advantages | Analyses leverage treatment variation in the entire population, improving statistical power relative to instrument-based approaches using the same data source. Often based on diverse and representative samples that facilitate assessment of differential treatment effects across and within populations. | Study design and analytic approaches can circumvent bias from unmeasured confounders of the treatment-outcome association. Can deliver a treatment effect specific to the individuals most affected by the instrument. |
| Key methodological challenges | Reliance on identifying, measuring, and appropriately adjusting for all confounders. | Valid instruments can be difficult to identify. Reduced statistical power relative to total-population studies. Estimated effects (local average treatment effects, LATEs) generalize only to the subset of participants whose treatment is affected by the instrument. |
See Box 1, Box 2, Box 3 for definitions. We present a simplified characterization of each approach to highlight key distinctions.
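To make the table's contrast concrete, here is a minimal simulation sketch (not from the paper; all variable names and parameter values are illustrative) comparing a confounder-control analysis that omits an unmeasured confounder with a single-instrument analysis using the Wald ratio, which equals two-stage least squares when there is one instrument:

```python
import numpy as np

# Illustrative data-generating process: an unmeasured confounder U biases
# naive regression, while a valid instrument Z recovers the true effect.
rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

u = rng.normal(size=n)  # unmeasured confounder of treatment and outcome
z = rng.normal(size=n)  # instrument: random, affects outcome only via treatment
treatment = 0.8 * z + u + rng.normal(size=n)
outcome = true_effect * treatment + 1.5 * u + rng.normal(size=n)

def ols_slope(x, y):
    """Slope of y on x from simple OLS with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

# Confounder-control without measuring U: biased by the open U pathway.
naive_estimate = ols_slope(treatment, outcome)

# Instrument-based: Wald ratio cov(Z, Y) / cov(Z, T), i.e. 2SLS with one instrument.
iv_estimate = np.cov(z, outcome)[0, 1] / np.cov(z, treatment)[0, 1]

print(f"naive OLS: {naive_estimate:.2f}, IV: {iv_estimate:.2f}, truth: {true_effect}")
```

Under this data-generating process the naive slope converges to roughly 2.57 rather than 2.0, while the IV estimate is consistent for the true effect, illustrating the table's trade-off: the instrument removes unmeasured-confounding bias but uses only the Z-driven share of treatment variation, so it has higher variance and identifies the effect only among units whose treatment responds to the instrument.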