| Literature DB >> 33757947 |
Rachel Visontay, Matthew Sunderland, Tim Slade, Jack Wilson, Louise Mewton.
Abstract
INTRODUCTION: There is a substantial literature finding that moderate alcohol consumption is protective against certain health conditions. However, more recent research has highlighted the possibility that these findings are methodological artefacts, caused by confounding and other biases. While modern analytical and study design approaches can mitigate confounding and thus enhance causal inference in observational studies, they are not routinely applied in research assessing the relationship between alcohol use and long-term health outcomes. The purpose of this systematic review is to identify observational studies that employ these analytical/design-based approaches in assessing whether relationships between alcohol consumption and health outcomes are non-linear. This review seeks to evaluate, on a per-outcome basis, what these studies find the strength and form of the relationship between alcohol consumption and health to be.
METHODS AND ANALYSIS: Electronic databases (MEDLINE, PsycINFO, Embase and SCOPUS) were searched in May 2020. Study selection will comply with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Articles will be screened against eligibility criteria intended to capture studies using observational data to assess the relationship between varying levels of alcohol exposure and any long-term health outcome (actual or surrogate), and that have employed at least one of the prespecified approaches to enhancing causal inference. Risk of bias of included articles will be assessed using study design-specific tools. A narrative synthesis of the results is planned.
ETHICS AND DISSEMINATION: Formal ethics approval is not required given there will be no primary data collection. The results of the study will be disseminated through published manuscripts, conferences and seminar presentations.
PROSPERO REGISTRATION NUMBER: CRD42020185861.
© Author(s) (or their employer(s)) 2021. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.
Keywords: epidemiology; public health; statistics & research methods; substance misuse
Year: 2021 PMID: 33757947 PMCID: PMC7993196 DOI: 10.1136/bmjopen-2020-043985
Source DB: PubMed Journal: BMJ Open ISSN: 2044-6055 Impact factor: 2.692
Description of methods to enhance causal inference of interest for this systematic review
| Method | Relevant submethods | Description |
| --- | --- | --- |
| Analytical methods applied to traditional longitudinal study designs | | |
| Propensity scores (PS) | Covariate balancing propensity scores | The PS is a single value reflecting the probability of exposure for an individual given their values on all relevant covariates. PS generation occurs as a data ‘preprocessing’ step prior to main analysis. Usually generated via logistic regression. Once generated, the PS can be used for matching, stratification, weighting (using inverse probability of treatment weights) or as a covariate for adjustment in regression. |
| G-methods | | A family of methods intended for use with time-dependent variables. Developed as a solution to the problem of time-varying covariates affected by past exposure, including those that act as both confounders and mediators over time. The three G-methods are the G-formula, marginal structural models and G-estimation, each relying on its own modelling assumptions. |
| | G-formula | First models the relationship given observed data (using actual exposure for each individual), and then predicts outcomes under counterfactual exposures, with the difference taken as the causal effect. Is a generalisation of standardisation (conditioning on covariates and then marginalising) that accounts for dynamic variables by considering covariate distribution over follow-up time. |
| | Marginal structural models (MSMs) | Use weights based on the inverse probability of exposure at each time point to create a pseudo-population, where each combination of covariates is equally present in each exposure condition. Using these weights, MSMs then estimate the causal effect. The most popular of the G-methods. |
| | G-estimation of structural nested models | At each wave, assesses the relationship between exposure and likelihood of outcome given covariates, adjusting for exposure and covariate values from past waves, thus accounting for dynamic confounders affected by past exposure. Considered semi-parametric in that mean counterfactual outcomes under no exposure are unspecified. |
| Doubly robust methods | Targeted maximum likelihood estimation; augmented inverse probability weighting | Incorporate both an estimation of the outcome mechanism (as in the G-formula or regression adjustment) and of the exposure mechanism (as in propensity scores). |
| Fixed effects regression | | A technique developed in the econometrics literature for use with longitudinal data with repeat outcome measurements, only using information on within-subject variation, thus controlling for all time-invariant sources of confounding. Treats time-invariant characteristics that differ between individuals as fixed parameters (unlike in mixed models), allowing estimation of parameters of interest net of stable confounders. Each participant serves as their own control. |
| Causal mediation analysis | | Integrates traditional mediation analysis (which separately estimates the total effect of exposure on outcome, the indirect effect via mediators and the direct effect unexplained by mediators) with the potential outcomes framework to allow for exposure-mediator interaction and non-linear relationships (ie, is a non-parametric method). Uses the concepts of ‘controlled direct effect’, ‘natural direct effect’ and ‘natural indirect effect’. Makes explicit the underlying assumptions related to unmeasured confounding, and encourages sensitivity analyses to test robustness to assumption violations. |
| Alternative observational study designs | | |
| Natural experiments | | Mimic randomised controlled trials by exploiting exogenous events that are truly randomised or approximate random assignment. Differ from true experiments in that exposure is not assigned by the researcher. Assignment may be the result of naturally occurring phenomena (eg, a weather event) or of human intervention implemented for reasons other than the research question (eg, an army draft lottery). |
| | Standard natural experiments | Natural experiments in which individuals are as-if/randomly assigned to exposure and control groups. |
| | Instrumental variable analysis | Assesses the relationship between an as-if/randomly assigned variable (the instrument) and the outcome in order to estimate the effect of the exposure. A valid instrumental variable must be associated with the exposure of interest, be independent of confounders of the exposure-outcome relationship and should affect the outcome only via the exposure. |
| | Genetic instrumental variables | A subset of instrumental variable analysis using genetic variants as proxies for exposure. The most prominent technique is Mendelian randomisation. |
| Quasi-experiments | | Like natural experiments, exploit exogenous events to assess relationships between exposures and outcomes, but lack random or as-if random assignment. |
| Family-based designs | Twin studies | By comparing genetically related participants discordant for the exposure of interest, accounts for confounding from genetic or shared environmental sources. |
| Negative controls | Negative control exposures | Have the same confounding structure as the exposure-outcome relationship of interest, but lack a plausible causal mechanism. If the association is greater for the relationship of interest than for the negative control, a causal relationship is likely; if not, this suggests confounding or other shared biases are responsible. May take the form of a negative control exposure or a negative control outcome. |
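The inverse probability of treatment weighting step described under propensity scores and MSMs can be sketched with simulated data. This is a minimal single-time-point illustration, not the review's method: the exposure probabilities (0.8/0.2), effect size (1.0) and sample size are invented, and the true propensity score is used in place of an estimated one (in practice the PS would be fitted, eg, by logistic regression).

```python
import random

random.seed(0)

# Simulated single-time-point data: binary confounder L raises both the chance
# of exposure A and the outcome Y, so the crude exposed-vs-unexposed contrast
# is confounded upward. All numbers are invented for illustration.
n = 20000
data = []
for _ in range(n):
    L = random.random() < 0.5
    ps = 0.8 if L else 0.2                       # true propensity score P(A=1 | L)
    A = random.random() < ps
    Y = 1.0 * A + 2.0 * L + random.gauss(0, 1)   # true causal effect of A: 1.0
    data.append((L, A, Y, ps))

# IPTW: weight exposed individuals by 1/PS and unexposed by 1/(1-PS), creating
# a pseudo-population in which L no longer predicts exposure.
def weighted_mean(exposed):
    num = den = 0.0
    for L, A, Y, ps in data:
        if A == exposed:
            w = 1.0 / ps if exposed else 1.0 / (1.0 - ps)
            num += w * Y
            den += w
    return num / den

iptw_effect = weighted_mean(True) - weighted_mean(False)

# Crude (unweighted) contrast for comparison; confounding pushes it above 1.0.
exposed_y = [Y for _, A, Y, _ in data if A]
unexposed_y = [Y for _, A, Y, _ in data if not A]
crude_effect = (sum(exposed_y) / len(exposed_y)
                - sum(unexposed_y) / len(unexposed_y))

print(f"IPTW estimate: {iptw_effect:.2f}, crude estimate: {crude_effect:.2f}")
```

The weighted contrast recovers a value near the simulated effect of 1.0, while the crude contrast absorbs the confounding from L.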
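The G-formula entry describes modelling outcomes under observed exposure and then predicting them under counterfactual exposures. For a single time point this reduces to plain standardisation, which the hypothetical sketch below illustrates; the simulated data and the use of saturated stratum means as the outcome model are invented for the example.

```python
import random

random.seed(1)

# Invented point-exposure data: binary covariate L drives both exposure A
# and outcome Y. The true causal effect of A on Y is 1.0.
n = 20000
rows = []
for _ in range(n):
    L = random.random() < 0.5
    A = random.random() < (0.8 if L else 0.2)
    Y = 1.0 * A + 2.0 * L + random.gauss(0, 1)
    rows.append((L, A, Y))

# Step 1: model the outcome given observed exposure and covariates. Here the
# "model" is simply the saturated stratum mean E[Y | A=a, L=l].
def stratum_mean(a, l):
    ys = [Y for L, A, Y in rows if A == a and L == l]
    return sum(ys) / len(ys)

outcome_model = {(a, l): stratum_mean(a, l)
                 for a in (False, True) for l in (False, True)}

# Step 2: predict every individual's outcome under the counterfactual
# exposures A=1 and A=0, marginalising over the observed distribution of L.
def standardised_mean(a):
    return sum(outcome_model[(a, L)] for L, _, _ in rows) / len(rows)

g_formula_effect = standardised_mean(True) - standardised_mean(False)
print(f"G-formula (standardised) effect: {g_formula_effect:.2f}")
```

With time-varying exposures and covariates, step 1 would be repeated per wave and step 2 would simulate covariate histories forward under each exposure regime; the conditioning-then-marginalising logic is the same.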
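The fixed effects regression entry can be illustrated by the within transformation: demeaning each person's repeated measures removes any time-invariant characteristic. The panel below is invented for the sketch; the stable trait `u` confounds the pooled estimate but, by construction, cannot affect the within-person one.

```python
import random
from collections import defaultdict

random.seed(2)

# Invented panel data: each person has a time-invariant trait u that raises
# both their typical exposure x and their outcome y (a stable confounder).
beta = 0.5                                  # true within-person effect of x
people, waves = 2000, 4
obs = []                                    # (person id, x, y)
for i in range(people):
    u = random.gauss(0, 1)
    for _ in range(waves):
        x = u + random.gauss(0, 1)          # exposure correlated with u
        y = beta * x + 2.0 * u + random.gauss(0, 1)
        obs.append((i, x, y))

# Within transformation: subtract each person's own means, removing u entirely
# (each participant serves as their own control).
totals = defaultdict(lambda: [0.0, 0.0, 0])
for i, x, y in obs:
    t = totals[i]
    t[0] += x; t[1] += y; t[2] += 1
num = den = 0.0
for i, x, y in obs:
    sx, sy, c = totals[i]
    dx, dy = x - sx / c, y - sy / c
    num += dx * dy
    den += dx * dx
fe_slope = num / den                        # fixed effects estimate of beta

# Pooled OLS on the raw data is biased upward by the stable confounder u.
n = len(obs)
mx = sum(x for _, x, _ in obs) / n
my = sum(y for _, _, y in obs) / n
pooled_slope = (sum((x - mx) * (y - my) for _, x, y in obs)
                / sum((x - mx) ** 2 for _, x, _ in obs))
print(f"fixed effects: {fe_slope:.2f}, pooled: {pooled_slope:.2f}")
```

The within estimate lands near the simulated 0.5 while the pooled slope is inflated, which is the sense in which fixed effects controls for all time-invariant confounding.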
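The instrumental variable entry can be sketched with the simplest estimator, the Wald (ratio) estimator cov(Z,Y)/cov(Z,X). The instrument Z, unmeasured confounder U and effect sizes below are invented; Z satisfies the table's three conditions by construction (it shifts X, is independent of U, and touches Y only through X).

```python
import random

random.seed(3)

# Invented data: Z is an as-if randomly assigned binary instrument, U is an
# unmeasured confounder of X and Y, and the true effect of X on Y is 1.0.
n = 50000
zs, xs, ys = [], [], []
for _ in range(n):
    Z = 1.0 if random.random() < 0.5 else 0.0
    U = random.gauss(0, 1)
    X = 1.0 * Z + U + random.gauss(0, 1)          # Z shifts exposure X
    Y = 1.0 * X + 2.0 * U + random.gauss(0, 1)    # Z affects Y only via X
    zs.append(Z); xs.append(X); ys.append(Y)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# Wald / ratio estimator: the Z-Y association scaled by the Z-X association.
iv_estimate = cov(zs, ys) / cov(zs, xs)

# Naive regression of Y on X is confounded upward by U.
naive_slope = cov(xs, ys) / cov(xs, xs)

print(f"IV estimate: {iv_estimate:.2f}, naive slope: {naive_slope:.2f}")
```

The ratio estimator recovers a value near the simulated 1.0 despite U never being measured; a Mendelian randomisation analysis applies the same logic with a genetic variant as Z.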