Laura Pazzagli1, Marie Linder1, Mingliang Zhang2, Emese Vago2, Paul Stang2, David Myers2, Morten Andersen1,3, Shahram Bahmanyar1.
Abstract
PURPOSE: Lack of control for time-varying exposures can lead to substantial bias in estimates of treatment effects. The aim of this study is to provide an overview of, and guidance on, some of the available methodologies used to address problems related to time-varying exposure and confounding in pharmacoepidemiology and other observational studies. The methods are explored from a conceptual rather than an analytical perspective.
Keywords: cumulative exposure and latency; pharmacoepidemiology; time-varying confounders; time-varying exposure; treatment episodes; treatment switching
Year: 2017 PMID: 29285840 PMCID: PMC5814826 DOI: 10.1002/pds.4372
Source DB: PubMed Journal: Pharmacoepidemiol Drug Saf ISSN: 1053-8569 Impact factor: 2.890
Pharmacoepidemiological challenges in longitudinal studies
| Problem | Description | Applied Example |
|---|---|---|
| 1. Treatment episodes construction | Exposure to a treatment is usually not a single occurrence but may be prolonged and vary over time. The exposure definition therefore needs to be handled in a time‐dependent manner and based on treatment histories that are as complete as possible (the dates, dosage, and duration of each prescription are essential information). When estimating cumulative exposure, reasonable assumptions about gaps and overlaps between consecutive prescriptions are needed. | In a follow‐up study of antidepressant drug users, Gardarsdottir et al. |
| 2. Time‐varying confounders | In studies investigating time‐varying exposure, time‐varying confounders may be present that are affected by previous exposure levels, i.e., that act as intermediates in the pathway between exposure and outcome. | In studying the effect of a glucose‐lowering drug on the onset of cardiac events in type II diabetes patients, a measure of blood glucose levels (HbA1c) can both be affected by the previous treatment dose and affect the outcome (high HbA1c may increase the drug dosage and the risk of a cardiac event). See Daniel et al. |
| 3. Cumulative exposure and latency | Dose and duration of exposure accumulated over time may increase or decrease the effect on the outcome. Different outcomes have different latent periods (the time from initiation of the treatment to diagnosis), so different drugs require different exposure periods in relation to the latency. These latent periods are usually unknown, however, and a long follow‐up period allows different assumptions about the latent period to be tested. | Sylvestre et al. |
| 4. Treatment switching | Individuals exposed to a treatment can switch to an alternative during follow‐up. With a time‐varying exposure, the switching process complicates the estimation of treatment effects because it cannot be considered a random mechanism. When individuals are not on the initial treatment during the entire follow‐up, the analytic method should account for this. | Diaz et al. |
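As a concrete illustration of problem 1, prescription records can be collapsed into treatment episodes with an explicit gap-and-overlap rule. This is a minimal sketch, not the method of any study cited here; the `allowed_gap_days` threshold and the choice to carry overlapping supply forward are assumptions that should mirror the drug's real utilisation pattern, as the table above stresses.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Prescription:
    start: date        # dispensing date
    days_supply: int   # duration covered by this prescription

def build_episodes(prescriptions, allowed_gap_days=30, carry_overlap=True):
    """Merge dispensings into treatment episodes.

    A new episode starts when the gap between the end of the previous
    supply and the next dispensing exceeds `allowed_gap_days`; overlapping
    days of supply are either appended to the episode (carry_overlap=True)
    or ignored.
    """
    episodes = []  # list of (episode_start, episode_end) tuples
    for rx in sorted(prescriptions, key=lambda r: r.start):
        end = rx.start + timedelta(days=rx.days_supply)
        if episodes and (rx.start - episodes[-1][1]).days <= allowed_gap_days:
            ep_start, ep_end = episodes[-1]
            if carry_overlap and rx.start < ep_end:
                # append the full supply after the current episode end
                end = ep_end + timedelta(days=rx.days_supply)
            episodes[-1] = (ep_start, max(ep_end, end))
        else:
            episodes.append((rx.start, end))
    return episodes
```

With a 30-day allowed gap, for example, two dispensings 15 days apart are merged into one episode, while a longer interruption starts a new one.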
Methods for longitudinal studies in pharmacoepidemiology
| Methods | Main assumptions | Main strengths | Main limitations |
|---|---|---|---|
| 1. Treatment episodes construction | | | |
| ‐ use of the DDD | The defined daily dose (DDD) is the assumed average dose per day for the drug used in adults for its main indication. | Facilitates international comparisons. | The drug under study could be prescribed at a dosage different from the average, for an indication other than the main one, and/or in children. |
| ‐ accounting for gaps and overlaps | A predefined length for allowed gaps (based on prior knowledge of drug utilisation). Overlapping days can be added to the treatment episode duration or ignored. | Treatment episodes that account for gaps and overlaps lead to a more accurate exposure definition and allow the nature of the treatment under study to inform the choice of gap length and the handling of overlaps. | Particular attention is needed when choosing the predefined length for the allowed gap and the way to account for overlaps. When these assumptions are distant from real drug utilisation, the treatment episode estimates may be biased. |
| ‐ prospectively filling gaps | A predefined length for allowed gaps. | Assuming gaps of a fixed number of days avoids the immortal time bias that could be introduced if the allowed gaps depended on future prescriptions; the time between two subsequent prescriptions would otherwise be risk‐free (immortal) time. | Particular attention is needed when choosing the predefined length for the allowed gap so as to emulate real drug utilisation patterns. |
| 2. Time‐varying confounders | | | |
| ‐ methods without any adjustment strategy | There are no time‐varying confounders, or they can be treated as baseline. | Simplicity of application. | The time dependency of the variables involved in the study is not considered, introducing bias into the treatment effect estimates. |
| ‐ time‐varying covariates and propensity score methods | Time‐varying confounders are not intermediate factors. | Accounts for the time dependency of the confounders. Simplicity of application. | Does not account for the intermediate role that a confounder can assume in a time‐varying analysis. |
| ‐ MSMs (marginal structural models) | Stable unit treatment value, positivity, and no unmeasured confounders. | Controls for time‐varying confounders without conditioning on them. A natural extension of Cox and logistic models. | The no‐unmeasured‐confounders assumption requires information on all the variables of interest for the study. The complexity of application relative to standard methods limits the availability of statistical packages. |
| ‐ SNFTMs (structural nested failure time models) | Stable unit treatment value, positivity, and no unmeasured confounders. | Interactions between time‐varying covariates and treatment can be included in the model. Efficiency. | The no‐unmeasured‐confounders assumption requires information on all the variables of interest for the study. Computationally intensive. The complexity of application relative to standard methods limits the availability of statistical packages. |
| 3. Cumulative exposure and latency | | | |
| ‐ methods without any adjustment strategy | The dose and duration of exposure do not affect the outcome and can be ignored. | Simplicity of application. | Undervalues all aspects of the magnitude of exposure. |
| ‐ WCD (weighted cumulative dose) models | The form of the weight function has to be estimated using cubic regression B‐splines. | Accounts for the quantity of exposure and the time since exposure. | Does not account for the intermediate role that a confounder can assume in a time‐varying analysis. |
| ‐ fractional polynomials | Selection of the fractional polynomial function, representing the cumulative exposure, to include in the regression model. | No assumptions on the functional form of the hazard. The model accounts for time‐varying covariates and time‐varying covariate effects. | Does not account for the intermediate role that a confounder can assume in a time‐varying analysis. |
| 4. Treatment switching | | | |
| ‐ methods without any adjustment strategy | Switching is a random, ignorable mechanism. | Simplicity of application. | Ignores that switching can affect the treatment effect estimates. |
| ‐ excluding or censoring switchers | Switching is a random, ignorable mechanism. | Simplicity of application. | Ignores that switching can affect the treatment effect estimates. |
| ‐ models with a time‐varying covariate for the switch | The switch is not affected by previous treatment levels. | Simplicity of application. | Ignores that the switch can be affected by treatment while also affecting the outcome. |
| ‐ MSMs with IPCW (inverse probability of censoring weighting) | Stable unit treatment value, positivity, and no unmeasured confounders. | Emulates the population in the absence of switching, so the treatment effect estimates account for what would have happened had no one switched. | The no‐unmeasured‐confounders assumption requires information on all the variables of interest for the study. The complexity of application relative to standard methods limits the availability of statistical packages. |
| ‐ SNFTMs | Stable unit treatment value, positivity, and no unmeasured confounders. | The survival function can be derived accounting also for the counterfactual event times of the switchers, emulating what the treatment effects would have been in the absence of switching. | The no‐unmeasured‐confounders assumption requires information on all the variables of interest for the study. Computationally intensive. The complexity of application relative to standard methods limits the availability of statistical packages. |
For all the probabilistic models involved in the above methods, a further assumption of correct model specification must be considered.
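For the MSM rows above, the core computation is the stabilized inverse-probability-of-treatment weight: a cumulative product, over time intervals, of the ratio between a numerator model (baseline covariates and treatment history only) and a denominator model that additionally conditions on the time-varying confounders. The sketch below assumes the treatment-probability models have already been fitted (e.g., by pooled logistic regression) and only shows how the weights are assembled; the function and argument names are illustrative, not from any package cited here.

```python
import numpy as np

def stabilized_weights(treated, p_num, p_den):
    """Stabilized IPT weights for a binary time-varying treatment.

    treated : (n_subjects, n_times) array of 0/1 treatment indicators
    p_num   : fitted P(A_t = 1 | baseline covariates, treatment history)
    p_den   : fitted P(A_t = 1 | baseline, treatment history,
                       time-varying confounders)
    Returns the cumulative product of interval weights, one row per
    subject; the final column is the weight used in the weighted
    (marginal structural) outcome model.
    """
    # probability of the treatment actually received, under each model
    num = np.where(treated == 1, p_num, 1.0 - p_num)
    den = np.where(treated == 1, p_den, 1.0 - p_den)
    return np.cumprod(num / den, axis=1)
```

Stabilization (using the numerator model rather than 1) keeps the weights from becoming extreme, which would otherwise inflate the variance of the MSM estimates.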
Software for some of the most complex methods
| Methods | Software |
|---|---|
| ‐ MSMs | The weights for an MSM can be implemented in the R package |
| ‐ SNFTMs | The code that can be used as a reference for implementing a SNFTM is available at the website of the Harvard School of Public Health ( |
| ‐ WCD models | The R package |
| ‐ fractional polynomials | The R package |
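To make the WCD idea concrete: the weighted cumulative dose at time t is a sum of past doses, each weighted by a function of the time elapsed since that dose. In the published models the weight function is estimated from the data with cubic regression B‐splines; the sketch below substitutes a fixed exponential-decay weight purely for illustration, and the decay rate is an arbitrary assumption.

```python
import numpy as np

def weighted_cumulative_dose(doses, weight_fn, t):
    """Weighted cumulative dose at time t.

    doses     : array of doses taken at times 0, 1, 2, ...
    weight_fn : weight as a function of time since exposure
                (estimated via B-splines in the actual WCD models;
                fixed here for illustration)
    """
    elapsed = t - np.arange(t + 1)       # time since each past dose
    return float(np.dot(doses[: t + 1], weight_fn(elapsed)))

# illustrative weight: recent doses count more than distant ones
decay = lambda u: np.exp(-0.1 * u)
```

Replacing `decay` with a spline whose coefficients are estimated jointly with the outcome model recovers the flexible weight function that the WCD approach assumes.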
Figure 1. DAG A: Relationship between a confounder variable C, a treatment variable T, and an outcome variable Y in a time‐point study. DAG B: Relationships between a time‐varying exposure, a time‐varying confounder (which also acts as an intermediate factor), and an outcome variable in a longitudinal study. The double role of the confounder level is indicated by drawing a double arrow. The observations at each time point of the time‐varying exposure and the time‐varying confounder are indicated, respectively, with T0, T1, C0, and C1, since they are measured at time 0 and at time 1. The variable Y indicates the outcome. For simplicity of the graphical representations, in DAG A and in DAG B a variable representing the set of potential unmeasured confounders has been omitted
Figure 2. Two DAGs representing how different statistical methods account for a variable that acts as an intermediate. MSMs and SNFTMs consider all the arrows that connect the level of the confounder variable C with the level of the treatment variable T and the outcome variable Y. Methods such as regression models with time‐varying covariates and propensity score methods cannot adjust in the analysis for the arrow going from the value of the treatment variable to the value of the confounder variable, thus removing from the estimation the indirect effect that exposure to the treatment has on the outcome through the intermediate variable. For simplicity of the graphical representation, in the DAGs a variable representing the set of potential unmeasured confounders has been omitted
Figure 3. DAG depicting the potential relationships between a time‐varying exposure T, a time‐varying confounder C (which is also an intermediate), a switching variable S, and an outcome variable Y in a longitudinal study. The observations at each time point of the time‐varying exposure and the time‐varying confounder are indicated, respectively, with T0, T1, C0, and C1, since they are measured at time 0 and at time 1. The observation of the switching variable is indicated with S1, measured at time 1, when patients may start to switch to an alternative treatment. The variable Y indicates the outcome. For simplicity of the graphical representation, a variable representing the set of potential unmeasured confounders has been omitted from the DAG