
Estimating causal effects using prior information on nontrial treatments.

Simon J Bond, Ian R White.

Abstract

BACKGROUND: Departures from randomized treatments complicate the analysis of many randomized controlled trials. Intention-to-treat analysis estimates the effect of being allocated to treatment. It is now possible to estimate the effect of receiving treatment without assuming comparability of groups defined by actual treatment. However, the methodology is largely confined to trials where the only treatment changes were switches to other trial treatments.
PURPOSE: To propose a method for comparing the effects of receiving trial treatments in an active-controlled clinical trial where some participants received nontrial treatments and others received no treatment at all, and to illustrate the method in the PENTA 5 trial in HIV-infected children.
METHODS: We combine the instrumental variables approach, which forms unbiased estimating equations based on the randomization but does not fully identify the contrasts of trial treatment effects, with prior information about the distribution of possible effects of nontrial treatments and of one trial treatment; we do not need to use prior information about the comparisons of trial treatments. Prior information in PENTA 5 was elicited from the investigators.
RESULTS: In PENTA 5, the prior information suggested that all treatments were beneficial, but there was uncertainty about the degree of benefit. Allowing for this prior information changed point estimates and increased standard errors compared with ignoring nontrial treatments.
LIMITATIONS: The method depends on the correct specification of the causal effect of treatment: in PENTA 5, this assumed a linear effect of dose and no interactions between treatments. This specification is hard to check from the data but can be explored in sensitivity analyses. Prior information would be better derived from the literature whenever possible.
CONCLUSIONS: The use of partial prior information gives a way to adjust for complex patterns of departures from randomized treatments. It should be useful in all trials where nontrial treatments are used and in active-controlled trials where trial treatments are not universally used.

Year:  2010        PMID: 20817650      PMCID: PMC3131117          DOI: 10.1177/1740774510382439

Source DB:  PubMed          Journal:  Clin Trials        ISSN: 1740-7745            Impact factor:   2.486


Introduction

Many forms of departure from randomized treatment occur in clinical trials: nonreceipt of randomized treatment (sometimes termed noncompliance), receipt of the treatment allocated to a different trial arm (sometimes termed contamination), or receipt of a nontrial treatment, defined as a treatment not randomly allocated in the trial. Intention-to-treat (ITT) analysis is accepted as a valid way to explore the effect of allocating treatment [1]. However, there has long been interest in estimating the causal effect of receiving treatment [2]. Modern statistical methods are able to base such estimation on comparisons of randomized groups [3], unlike popular methods such as per-protocol analysis which invalidly compare subgroups with different receipt of treatment [4]. Much statistical literature assumes that participants can only switch to receive the treatment allocated to a different trial arm, a situation with the convenient property that the ITT analysis also tests the null hypothesis of equivalence of the randomized treatments. This article considers the more difficult, but common, case where participants can receive nontrial treatments. In this case, the ITT analysis does not test the equivalence of the randomized treatments: for example, one treatment could appear better simply because participants receiving that treatment were more likely to change to an effective nontrial treatment. The difficulty introduced by participants receiving nontrial treatments is that the effects of these treatments must be included in a statistical model, but cannot easily be estimated from the trial data. Past work has addressed this problem by assuming that there are no unmeasured confounders [5,6] or by making strong distributional assumptions [7,8]. Our approach avoids such assumptions and instead uses information external to the trial to place informative prior distributions on the effects of nontrial treatments.
To do this, we introduce a hybrid of Bayesian inference [9,10] and instrumental variables methods [11]. A particular application of our proposed methodology is to equivalence and noninferiority trials in which some participants receive no treatment, since receipt of no treatment introduces the same issues as receipt of nontrial treatments. ITT analysis is generally held to be anti-conservative for equivalence and noninferiority trials [12]. Per-protocol analysis is commonly done [13], but better methods are needed [14]. In this article, we first present the hybrid approach in general. We then show using a simple example how comparisons of the causal effects of trial treatments depend on the effects of the nontrial treatments, and we illustrate the hybrid approach. We next present an application to PENTA 5, a three-arm comparative trial in pediatric HIV patients in which some children received no treatment and others received nontrial treatments. We describe how we elicited and used expert priors, the results with the various prior distributions that were considered, and the implications. Finally, the discussion places this work in the wider context of the noncompliance literature.

Proposed method

Basic assumptions

In Rubin's causal model [15], participant i has a set of counterfactual outcomes Y(d), where d represents a potential set of treatments, or drug dosages. For each participant only one of this set of outcomes is observed, Y(D), corresponding to the treatments D actually received. Two assumptions are implicit in this notation. The Stable Unit Treatment Value Assumption (SUTVA) [15] states that the set of counterfactual outcomes for a particular participant is a function of that participant's potential treatments, and is not in any way influenced by other participants' potential treatments. The exclusion restriction assumption states that randomization has no direct effect on the outcome, and may only have an indirect effect through its effect on treatment actually received. Thus if a participant would receive an identical treatment regardless of their randomized treatment, then their counterfactual outcomes would be identical in all such arms. All potential outcomes are assumed to be independent of randomization.

Model

For the i-th participant, let R be their randomized group, taking values 1, 2, …, g; let D be a p-dimensional vector of their actual amount of the trial and nontrial treatments (or dosages, for drug treatments); let Y be their observed outcome; and let Y(0) be their counterfactual untreated outcome which would have been observed if no treatment had been received. We define a causal model relating the observed outcome for the i-th participant to the same participant's counterfactual untreated outcome [16]:

Y = Y(0) + α'D + ϵ     (1)

where α is a p-dimensional set of causal parameters for the trial and nontrial treatments. This model assumes that the treatments have an additive, linear effect proportional to the dose taken. For any individuals with D = 0, consistency between Y and Y(0) requires ϵ = 0. We also assume

Y(0) = γ'X + δ     (2)

where X is a set of baseline covariates. Random variation is captured by the within-individual error ϵ with E[ϵ | D, R] = 0 and the between-individual error δ with E[δ | X, R] = 0. The parameter α requires careful interpretation. Making the stronger assumption E[ϵ | D, X, R] = 0 allows us to interpret α'D as the average causal effect of dose D among those who receive dose D. It cannot be interpreted as the average causal effect of dose D among all individuals, unless we also assume that treatment assignment is independent of treatment benefit [17].

Ordinary least-squares estimation

One way to estimate α is to regress Y on D. Unfortunately, this is only valid if Y(0) is uncorrelated with D, which is usually unrealistic: participants with a good prognosis usually show different patterns of D to those with a poor prognosis. Sometimes, confounding variables X can be measured and included in the analysis in order to motivate a ‘no unmeasured confounders’ assumption that Y(0) is uncorrelated with D given X, but this is rarely plausible. We do not consider this method further.

Instrumental variables estimation

A better approach to estimating the parameters uses the idea of instrumental variables (IV). Combining Equations (1) and (2) and taking the expectation conditional on X and R gives

E[Y | X, R] = α'E[D | X, R] + γ'X     (3)

We estimate α in two stages. First, we fit a linear regression for the d-th treatment dose,

Dd = βd'Z + ηd     (4)

for each d = 1 to p, where Z is a vector of dummy variables for randomized group. We assemble the p fitted values into a vector D̂. Then we estimate α in the linear regression

Y = α'D̂ + γ'X + η     (5)
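The two-stage logic can be sketched numerically. The following simulation (all names and numbers are ours, purely illustrative) generates a three-arm trial with two treatments whose uptake depends on the prognosis Y(0), and contrasts the biased naive regression of Y on D with the IV regression of Y on the arm-level fitted doses D̂:

```python
import numpy as np

rng = np.random.default_rng(0)
n, g, p = 100_000, 3, 2            # participants, arms, treatments

R = rng.integers(1, g + 1, n)      # randomized arm, 1..g
Z = np.column_stack([(R == r).astype(float) for r in range(1, g + 1)])

# Untreated outcome Y(0); its correlation with dose received is the confounding
y0 = rng.normal(0.0, 0.5, n)

# Dose received depends on the arm AND on prognosis y0
arm_uptake = np.array([[0.9, 0.1],     # arm 1 mostly takes treatment 1
                       [0.1, 0.8],     # arm 2 mostly takes treatment 2
                       [0.4, 0.4]])    # arm 3 mixes, identifying both effects
D = arm_uptake[R - 1] + 0.3 * y0[:, None] + rng.normal(0.0, 0.1, (n, p))

alpha = np.array([1.0, 0.5])           # true causal effects
Y = y0 + D @ alpha + rng.normal(0.0, 0.2, n)

# Naive regression of Y on D: biased, because Y(0) is correlated with D
ols = np.linalg.lstsq(np.column_stack([np.ones(n), D]), Y, rcond=None)[0][1:]

# Stage 1: regress each dose on arm dummies; Stage 2: regress Y on fitted doses
beta = np.linalg.lstsq(Z, D, rcond=None)[0]
D_hat = Z @ beta
iv = np.linalg.lstsq(np.column_stack([np.ones(n), D_hat]), Y, rcond=None)[0][1:]
```

Because D̂ varies only through the randomized arm, the second-stage coefficients are not confounded by Y(0); with three arms and two treatment effects the system is identified without any prior information.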

Identification

Regression (5) is only identified if the number of unknown treatment effects p is less than the number of arms g. For example, in a two-arm trial comparing two treatments, in which some individuals receive no treatment, Equation (5) does not identify the two unknown treatment effects, since there is only one contrast between trial arms. We will consider this example further below. There are various ways to deal with nonidentification. First, we could expand the causal model to a full probability model, modeling the association between Y(0) and D, and estimating the model using maximum likelihood [18]. However, this approach can be sensitive to model mis-specification [19]. Second, we could include interactions between X and Z in model (4), as is done in structural mean models [20,21]. Including such interactions typically identifies the treatment effects α, but such identification is highly dependent on the assumption that the causal effect does not vary with X [16]. In this article we do not include such interactions, but they are explored separately [22]. Third, we could perform a sensitivity analysis in which we separate the model parameters into two groups: the ‘protocol effects’ αp, which would have been estimated if there had been perfect compliance, and the ‘nonprotocol effects’ αn, which would have been irrelevant if there had been perfect compliance. In trials comparing active treatments, the protocol effects would be contrasts of active treatments, while the nonprotocol effects would be the effect of one trial treatment and the effects of any nontrial treatments used (so that they are not uniquely defined: see ‘Obtaining prior information’ later). The sensitivity analysis would then estimate the protocol effects over a range of plausible values of the nonprotocol effects.

Hybrid estimation

Building on the idea of sensitivity analysis, our proposal in this article is to combine IV estimation with prior information about the nonprotocol effects in order to identify the model. The next section will discuss how the prior information may be obtained. To implement the method, we use a Bayesian approach. We first fit the linear regression (4) and sample from the posterior distribution of the parameters β. For each sampled value, we compute D̂. We then consider the linear regression (5) with prior p(α) and sample from the posterior distribution of α and γ. This procedure is conveniently implemented in WinBUGS [10]. It could be considered a Bayesian procedure using a partial likelihood that does not consider the relationship between D, δ, and ϵ. Alternatively, it could be considered a probabilistic sensitivity analysis for IV estimation where any unidentified parameters may be imputed following an informative prior distribution [23]. Its advantages are enabling the use of prior identifying information and allowing for uncertainty in the parameters of E[D | X, R].
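A minimal numerical sketch of the hybrid sampler, assuming known error variances and omitting covariates for brevity (a real analysis would use WinBUGS or similar, as here): it draws β from its approximate posterior, rebuilds D̂, and then draws α from a conjugate normal posterior under an informative prior.

```python
import numpy as np

def hybrid_posterior_draws(Y, D, Z, prior_mean, prior_cov,
                           sigma2=1.0, n_draws=500, seed=1):
    """Hybrid IV/Bayes sketch. Stage 1: sample the dose-model coefficients beta
    from their approximate posterior and rebuild the fitted doses D_hat.
    Stage 2: conjugate normal draw of alpha given D_hat, with an informative
    prior N(prior_mean, prior_cov). Error variances are treated as known and
    covariates are omitted, purely for brevity."""
    rng = np.random.default_rng(seed)
    n, p = D.shape
    q = Z.shape[1]
    ZtZ_inv = np.linalg.inv(Z.T @ Z)
    beta_hat = ZtZ_inv @ Z.T @ D                       # (q, p) OLS fit
    s2 = ((D - Z @ beta_hat) ** 2).sum(axis=0) / (n - q)
    L = np.linalg.cholesky(ZtZ_inv)
    P0 = np.linalg.inv(prior_cov)                      # prior precision
    draws = np.empty((n_draws, p))
    for t in range(n_draws):
        beta_t = beta_hat + (L @ rng.standard_normal((q, p))) * np.sqrt(s2)
        D_hat = Z @ beta_t
        Pn = P0 + D_hat.T @ D_hat / sigma2             # posterior precision
        Vn = np.linalg.inv(Pn)
        mn = Vn @ (P0 @ prior_mean + D_hat.T @ Y / sigma2)
        draws[t] = rng.multivariate_normal(mn, Vn)
    return draws

# Demo: three arms and two treatments, so alpha is identified even under a
# near-flat prior; with only two arms the prior would drive identification
rng = np.random.default_rng(0)
n = 10_000
R = rng.integers(0, 3, n)
Z = np.eye(3)[R]
y0 = rng.normal(0.0, 0.5, n)
D = np.array([[0.9, 0.1], [0.1, 0.8], [0.4, 0.4]])[R] \
    + 0.3 * y0[:, None] + rng.normal(0.0, 0.1, (n, 2))
Y = y0 + D @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.2, n)
draws = hybrid_posterior_draws(Y, D, Z, np.zeros(2), 100 * np.eye(2), sigma2=0.6)
post_mean = draws.mean(axis=0)
```

Sampling β inside the loop is what propagates the stage-1 uncertainty into the posterior for α, the second advantage claimed above.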

Obtaining prior information

The nonprotocol effects are commonly effects of widely used treatments, so results of relevant randomized trials or meta-analyses of such trials would ideally be available as prior information. In some cases, however, trials or meta-analyses are inadequate, and it is preferable to elicit prior information from experts [24]. In practice, one may elicit a prior distribution for all of α rather than just for the nonprotocol parameters; this was done in the case study below. A fully Bayesian analysis would use this full prior p(α). However, we argue against using prior beliefs about the protocol effects, because we would not usually use this information in a clinical trial without noncompliance, and because it is not needed in order to adjust for noncompliance. We prefer to use prior information only where it is needed to achieve identifiability. There is usually more than one way to achieve identifiability, because there is more than one reasonable way to define the nonprotocol parameters. We propose using the full expert prior to define the nonprotocol parameters αn as linear combinations of the parameters in α that are uncorrelated with the protocol effects αp in the prior distribution. We then modify the full prior distribution by giving a large value to the prior variance of the protocol effects: that is, we use the expert prior for αn and a vague prior for αp. This method has the desirable property that the same prior is obtained however the nonprotocol parameters αn are defined. In the case of complete compliance, or of switching only between trial treatments, the prior becomes redundant, as desired. Details of how to compute such a set of contrasts are given in Appendix D.
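Appendix D gives the exact construction; the linear-algebra sketch below (our own, illustrative) shows one way to carry it out: take the protocol contrasts Cα, find combinations Bα with zero prior covariance with Cα (i.e., BSC' = 0, where S is the prior covariance), and then inflate the prior variance of the protocol block. The demo uses the PENTA 5 expert prior from Table 4.

```python
import numpy as np
from scipy.linalg import null_space

def split_prior(m, S, C, big_var=1000.0):
    """Given a full expert prior alpha ~ N(m, S) and protocol contrasts C (k x p),
    find nonprotocol combinations B @ alpha uncorrelated with C @ alpha in the
    prior (i.e., B S C' = 0), and return the prior for theta = (C@alpha, B@alpha)
    with the protocol block's variance inflated to big_var (vague)."""
    B = null_space(C @ S).T         # rows b solve C S b' = 0, so Cov(Ca, Ba) = 0
    T = np.vstack([C, B])           # theta = T @ alpha
    mean = T @ m
    cov = T @ S @ T.T               # block-diagonal by construction of B
    k = C.shape[0]
    cov[:k, :] = 0.0
    cov[:, :k] = 0.0
    cov[:k, :k] = big_var * np.eye(k)
    return mean, cov, B

# Demo with the PENTA 5 expert prior (Table 4); C encodes the protocol
# contrasts alpha2 - alpha1 and alpha2 - alpha3
sd = np.array([0.418, 0.478, 0.443, 0.525])
corr = np.array([[1.00, 0.41, 0.32, 0.45],
                 [0.41, 1.00, 0.38, 0.42],
                 [0.32, 0.38, 1.00, 0.30],
                 [0.45, 0.42, 0.30, 1.00]])
S = np.outer(sd, sd) * corr
m = np.array([-0.408, -0.576, -0.457, -1.032])
C = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0]])
mean, cov, B = split_prior(m, S, C)
```

Because the rows of B span the whole subspace uncorrelated with Cα, the resulting prior does not depend on which basis for that subspace is chosen, matching the invariance property claimed in the text.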

Simple example

We now consider a two-arm trial comparing experimental with standard treatment, where participants may receive either treatment, both treatments or neither treatment, but receipt of any one treatment is all-or-nothing. As before, R indicates whether participant i was randomized to treatment 1 or 2, Y is their actual (quantitative) outcome, and Y(0) is their counterfactual untreated outcome. In this setting, treatment for participant i is summarized by two binary variables D1 and D2 indicating whether or not they took treatments d = 1, 2. The causal model (1) then becomes

Y = Y(0) + α1D1 + α2D2 + ϵ

where, as before, ϵ has mean zero, is uncorrelated with randomized group and actual treatment, and equals 0 if D1 = D2 = 0. The target of inference is the ‘protocol effect’ αp = α1 − α2. Strictly, this is the difference between the average effect of treatment 1 in those who receive treatment 1 and the average effect of treatment 2 in those who receive treatment 2. It seems reasonable to believe that the average effect of treatment 1 in those who received it is unaffected by whether they would have received treatment 2 if allocated to it. Therefore the protocol effect may be interpreted as the ITT difference in perfect compliers [22]. We initially consider the case where treatment 1 is new and treatment 2 is a standard treatment, so we are likely to have more prior knowledge about α2 than about α1, and we define the nonprotocol effect as αn = α2.

Estimation by instrumental variables

Randomization ensures that the distributions of the error term ϵ and the untreated outcome Y(0) are identical in both arms, suggesting the unbiased estimating equation

Σarm 1 (Y − α1D1 − α2D2)/n1 = Σarm 2 (Y − α1D1 − α2D2)/n2     (8)

where n1 and n2 are the sample sizes in the two arms. We write this as

Ȳ1 − Ȳ2 = α1(D̄11 − D̄21) + α2(D̄12 − D̄22)     (9)

where D̄rd and Ȳr are the means of Dd and Y in arm r, and hence

α1 − α2 = [Ȳ1 − Ȳ2 + α2Δ]/(D̄11 − D̄21), where Δ = (D̄21 + D̄22) − (D̄11 + D̄12).

Equation (9) shows that we cannot in general estimate the protocol effect α1 − α2 without knowing the nonprotocol effect α2. The only exception is if Δ = 0, which occurs in the special case when the participants can only switch treatments (D1 + D2 = 1 for everyone). In this case the estimate of α1 − α2 is well defined and is the standard IV estimate (Ȳ1 − Ȳ2)/(D̄11 − D̄21). In the case of perfect compliance, D̄11 = D̄22 = 1 and D̄12 = D̄21 = 0, so the estimate equals the standard ITT estimate Ȳ1 − Ȳ2. Now suppose we have external knowledge about α2, the causal effect of the standard treatment. Expressing this knowledge as a probability distribution with mean mα and variance sα², our best estimate is

α̂p = [Ȳ1 − Ȳ2 + mαΔ]/(D̄11 − D̄21)     (10)

with approximate variance

var(α̂p) ≈ [σ²(1/n1 + 1/n2) + sα²Δ²]/(D̄11 − D̄21)²     (11)

where σ is the standard deviation of Y in each arm. The two terms in the numerator of (11) allow for uncertainty in the data about Ȳ1 − Ȳ2 and prior uncertainty about α2, respectively. (We ignore for illustrative purposes the uncertainty in the D̄rd, which is typically much less than the uncertainty in the Ȳr.) The role of the external knowledge is typically limited, because the term Δ is typically small: it represents the difference in treatment in arm 2 compared with arm 1, summed over treatments. We have so far assumed that treatment 1 is new. If instead the trial is a head-to-head comparison of two existing treatments, one might use prior knowledge about the average effect ᾱ = (α1 + α2)/2. This suggests replacing Equation (9) with

Ȳ1 − Ȳ2 = ᾱ[(D̄11 + D̄12) − (D̄21 + D̄22)] + (αp/2)[(D̄11 − D̄21) − (D̄12 − D̄22)]     (12)

Given a full expert prior p(α1, α2), we could also define αn to be uncorrelated with αp in the prior, as described above.
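Equations (10) and (11) are simple enough to compute from summary statistics alone. The sketch below (our reconstruction of the algebra; function and argument names are ours) reproduces the Table 2 results from the Table 1 data:

```python
import numpy as np

def protocol_estimate(ybar, dbar, n, sigma, m_prior, s_prior):
    """Point estimate and approximate SE of the protocol effect alpha1 - alpha2
    given a N(m_prior, s_prior^2) prior on the nonprotocol effect alpha2.
    ybar = (Ybar1, Ybar2); dbar[r][d] = mean use of treatment d+1 in arm r+1;
    our reconstruction of Equations (10) and (11)."""
    delta = (dbar[1][0] + dbar[1][1]) - (dbar[0][0] + dbar[0][1])
    denom = dbar[0][0] - dbar[1][0]
    est = (ybar[0] - ybar[1] + m_prior * delta) / denom
    var = (sigma ** 2 * (1 / n[0] + 1 / n[1]) + s_prior ** 2 * delta ** 2) / denom ** 2
    return est, np.sqrt(var)

# Table 1 data: Ybar = (3, 2), sigma = 1, n = 100 per arm,
# Dbar11 = 0.8, Dbar22 = 0.6, other treatment use zero
ybar, dbar, n = (3.0, 2.0), [[0.8, 0.0], [0.0, 0.6]], (100, 100)
est0, se0 = protocol_estimate(ybar, dbar, n, 1.0, m_prior=0.0, s_prior=1.0)
est1, se1 = protocol_estimate(ybar, dbar, n, 1.0, m_prior=1.0, s_prior=2.0)
# est0, se0 -> 1.25, 0.306 (Table 2 row N(0, 1^2))
# est1, se1 -> 1.00, 0.530 (Table 2 row N(1, 2^2))
```

The prior variance enters only through the small quantity Δ = −0.2, which is why even a very vague prior on α2 inflates the standard error only modestly.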

Numerical example

We illustrate these ideas using a fictitious data set with imperfect compliance (Table 1). If α2 were known then we could use Equation (9) to show α̂p = (1 − 0.2α2)/0.8 = 1.25 − 0.25α2. Instead, we use various priors for αn = α2, using Equations (10) and (11). The first four rows of Table 2 have priors centered at α2 = 0. They all have point estimate α̂p = 1.25, but the standard error of α̂p increases with uncertainty about α2 (although this standard error is much smaller than the prior uncertainty about α2). The next four rows of Table 2 change the prior mean from 0 to 1, which reduces the posterior mean from 1.25 to 1 – a smaller change, but one that could still be practically important.
Table 1

Data for simple example

                               Arm 1    Arm 2
Mean outcome, Ȳr               3        2
Outcome SD, σ                  1        1
Sample size, nr                100      100
Mean use of treatment 1, D̄r1   0.8      0
Mean use of treatment 2, D̄r2   0        0.6
Table 2

Results for simple example

Prior p(α2)    Posterior for αp = α1 − α2
               Mean    Standard error
N(0, 0²)       1.25    0.18
N(0, 0.5²)     1.25    0.22
N(0, 1²)       1.25    0.31
N(0, 2²)       1.25    0.53
N(1, 0²)       1.00    0.18
N(1, 0.5²)     1.00    0.22
N(1, 1²)       1.00    0.31
N(1, 2²)       1.00    0.53

Example: PENTA 5 trial

Trial design

The PENTA 5 trial [25] compared the effectiveness of three combination treatments for pediatric HIV-1 infection: lamivudine (3TC) + abacavir (ABC), zidovudine (ZDV) + 3TC, and ZDV + ABC. Of the 128 children randomized, 55 who were initially symptom-free were additionally randomized to receive nelfinavir, a protease inhibitor (PI), or placebo in an incomplete factorial design: symptomatic children all received PI. In this analysis we include all 128 children but consider PI as a nontrial drug, ignoring the fact that it was partly randomized. The children returned to the clinic at 4-week intervals up to 24 weeks and at 12-week intervals thereafter up to 224 weeks. The primary endpoint was the log concentration of HIV RNA in plasma at 24 and 48 weeks. Here we focus on the 24-week outcome, defined as the outcome observed closest to 24 weeks within the 22–30 week window. Baseline covariates included sex, ethnic origin, age, CDC disease stage, CD4 cell count, and height-for-age Z-scores.

Observed treatment

The trial drugs ZDV, 3TC and ABC, and the nontrial PI were taken at many different doses. Table 3 shows a ‘recommended dose’ for each drug (the protocol dose for the trial drugs and current best practice for the nontrial drug), dependent on a child's weight or estimated body surface area. For each drug, we defined the ‘dose fraction’ for each child at each clinic visit as the fraction of the ‘recommended dose’ taken, except that doses above the maximum adult dose had a dose fraction of 1 [16]. The ‘standardized dose’ was defined as the dose fraction if a drug had been taken without interruption; after interruption, the effect of the drug may be diminished by possible acquisition of viral resistance, and the dose fraction was multiplied by the ‘re-start factor’ given in Table 3.
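The dose-standardization rule can be expressed as a small function. This is one plausible reading of the definitions above, not the trial's exact algorithm; the function name, arguments, and worked numbers are ours:

```python
def standardized_dose(actual_mg, recommended_mg, max_adult_mg,
                      restart_factor_pct, interrupted):
    """Standardized dose for one child at one visit: the fraction of the
    recommended dose taken (set to 1 at or above the maximum adult dose),
    multiplied by the re-start factor after a treatment interruption.
    One plausible reading of the text, for illustration only."""
    if actual_mg >= max_adult_mg:
        fraction = 1.0
    else:
        fraction = min(actual_mg / recommended_mg, 1.0)
    return fraction * restart_factor_pct / 100.0 if interrupted else fraction

# ZDV (Table 3: maximum 600 mg, re-start factor 75%) for a child whose
# weight-based recommended dose works out to 288 mg/day (hypothetical)
full = standardized_dose(288, 288, 600, 75, interrupted=False)       # 1.0
restarted = standardized_dose(288, 288, 600, 75, interrupted=True)   # 0.75
half = standardized_dose(144, 288, 600, 75, interrupted=False)       # 0.5
```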
Table 3

PENTA 5 trial: recommended and maximum drug doses and re-start factors

Type      Drug   Recommended dose   Maximum (mg)   Re-start factor (%)
Trial     ZDV    360 mg/m²/day      600            75
Trial     3TC    8 mg/kg/day        300            50
Trial     ABC    16 mg/kg/day       600            50
Nontrial  PI     90 mg/kg/day       2500           75
D1, D2, D3, and D4 were defined as the standardized doses for ZDV, 3TC, ABC, and PI, respectively, for child i at 24 weeks. A box and whisker plot summarizing the standardized doses, by randomized arm, is given in Figure 1.
Figure 1

PENTA 5 trial: standardized drug dosages at 24 weeks

The focus of the trial is to estimate differences between 3TC & ABC, ZDV & 3TC, and ZDV & ABC [25]. Assuming the drugs have an additive effect, the contrasts of interest are then represented by α2 − α1 (3TC & ABC vs. ZDV & ABC) and α2 − α3 (ZDV & 3TC vs. ZDV & ABC). Thus the protocol effects are αp = (α2 − α1, α2 − α3). We consider how to assess sensitivity to the modeling assumptions in the discussion. The nonprotocol effects are taken as αn = (α1 + α3, α4), representing the causal effect of ZDV & ABC versus no drugs and the causal effect of the nontrial PI drugs. We emphasize that αp is of primary clinical interest; αn is only important because it is needed in order to estimate the protocol parameters.

Expert prior

To obtain expert opinion on drug effects, five clinicians active in the field of HIV medicine and HIV clinical trials were sent a questionnaire describing 22 hypothetical randomized controlled trials (Appendices A and B). The trials compared the two- and three-drug combinations of all trial and nontrial drugs actually used in PENTA 5, although several of these (the classes NRTI and nNRTI in the appendices) were not in use at 24 weeks. For each trial, the questionnaire elicited the median, 75th and 87.5th quantile of the expert's prior distribution for the true difference in log HIV RNA between the trial arms. The elicited data were converted into a pooled multivariate Normal prior distribution with the parameters given in Table 4.
Table 4

PENTA 5 trial: means, standard deviations and correlation matrix of the expert prior distribution for the four treatment effects


               ZDV (α1)   3TC (α2)   ABC (α3)   PI (α4)
Mean           −0.408     −0.576     −0.457     −1.032
SD             0.418      0.478      0.443      0.525
Correlation
 ZDV           1          0.41       0.32       0.45
 3TC                      1          0.38       0.42
 ABC                                 1          0.30
 PI                                             1

Analysis model

We modeled the dose of the d-th drug for child i at 24 weeks using

Dd = β0d + β1dR1 + β2dR2 + β3dY0 + ϵDd     (13)

where R1 and R2 are indicator variables for the 3TC & ABC and ZDV & 3TC arms, and Y0 is the baseline level of log HIV RNA. The choice of covariates was based on statistical significance. Our model for the primary outcome was

Y = γ0 + γ1Y0 + α1D̂1 + α2D̂2 + α3D̂3 + α4D̂4 + ϵY     (14)

The error terms ϵDd and ϵY were assumed to be independent, normally distributed, with zero mean and variances σDd² and σY², respectively. As described more fully in [16], some outcome values were only reported as being below a cutoff. The MCMC simulation approach easily copes with such censored data by imputing the censored values under the assumption that the underlying distribution is Normal. WinBUGS code is given in Appendix C. For comparative purposes, we fitted an intention-to-treat (ITT) model which made no use of the observed doses. The model fitted was similar to Equation (14) but with the four D̂d terms replaced with two indicator variables for the 3TC & ABC and ZDV & 3TC arms of the trial; the baseline log HIV RNA term was retained. A classical Tobit regression analysis was used to account for the censored observations, replicating the approach used in [25].
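For the ITT comparison, the censored outcomes were handled by classical Tobit regression. Below is a self-contained sketch of a left-censored Tobit likelihood fitted to simulated data; the covariate, coefficients, and detection limit are all hypothetical, not the trial's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, X, y, cutoff):
    """Negative log-likelihood for a left-censored (Tobit) linear model:
    y is observed when above the cutoff; otherwise only 'below cutoff' is known."""
    beta, sigma = params[:-1], np.exp(params[-1])   # log-sigma keeps sigma > 0
    mu = X @ beta
    obs = y > cutoff
    ll = norm.logpdf(y[obs], mu[obs], sigma).sum() \
        + norm.logcdf((cutoff - mu[~obs]) / sigma).sum()
    return -ll

# Demo on simulated log-HIV-RNA-like data with an assay detection limit
rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y_latent = X @ np.array([2.5, -0.8]) + rng.normal(0.0, 1.0, n)
cutoff = 1.7
y = np.maximum(y_latent, cutoff)                    # values below limit censored
res = minimize(tobit_negloglik, np.zeros(3), args=(X, y, cutoff), method="BFGS")
beta_hat, sigma_hat = res.x[:2], np.exp(res.x[2])
```

The censored observations contribute Φ((cutoff − μ)/σ) rather than a density, which is exactly the information the MCMC imputation approach also exploits.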

Results

We considered five different choices of priors, the last three of which were described above under ‘Obtaining prior information’:

1. An uninformative prior, N(0, 2²), on the two nonprotocol effects. The value of 2 was chosen as a very large value in the clinical context;
2. A ‘naïve’ prior that assumes the value of the nonprotocol parameters is zero;
3. Partial expert (method 1): the experts' full prior distribution, with the marginal variance of the two protocol effects changed to a large value, 1000, and with the prior for the two nonprotocol effects unchanged;
4. Partial expert (method 2): as method 1, but with the two nonprotocol parameters redefined to be independent of the protocol parameters in the prior;
5. The experts' full prior distribution on both protocol and nonprotocol parameters.

Apart from these different specifications of the informative prior, broadly noninformative priors were used for the remaining parameters: the coefficients in the dose models were N(0, 100); precisions for all the error terms were Γ(0.01, 0.01); α (except in the model using the full prior) and γ were N(0, 1000). The resulting confidence intervals for the two protocol parameters are shown in Figure 2. With the uninformative prior, the data are fairly uninformative on the causal effect of the trial drugs. The naïve prior gives much more certain results, but makes the implausible assumption that nontrial treatments have no effect. The two partial expert prior methods agree to a reasonable extent in both location and width; we prefer method 2, since it is invariant to re-parameterization and has a clear theoretical interpretation. The full expert prior gives results that are similar to the partial expert prior but closer to the null, reflecting the experts' prior beliefs being centered near the null. Recall that these parameters require careful interpretation.
The ITT analysis, also shown in Figure 2, estimates the treatment effect of 3TC & ABC to be closer to the null than all the other models, with a similar size confidence interval to the expert prior analyses; the ITT treatment effect of ZDV & 3TC is roughly comparable to the two expert prior analyses.
Figure 2

PENTA 5 trial: estimated parameters under intention-to-treat (ITT) analysis and under causal analysis with different priors

The qualitative conclusion is that 3TC & ABC is the superior drug combination, but the degree of its superiority depends on our opinion of the effect of the other drugs.

Discussion

We have proposed a way to estimate causal differences between treatments in a randomized trial with a complex pattern of nonreceipt of randomized treatments and receipt of nontrial treatments. Here we compare our approach with some alternative methods that have been proposed. The largest body of literature concerns causal analyses when the only treatment change is switching between the trial treatments. The intuitive presentation of Sommer and Zeger [26] has been formalized by Angrist et al. [27] and by the notion of principal strata [28], and widely extended [18,29-31]. These techniques estimate a single treatment contrast from a single randomization and do not extend to cases where some participants take nontrial treatments, or to comparative trials where some participants take no treatments, because in these settings it is necessary to estimate more parameters without having more randomizations. Some attempts have been made to handle nontrial treatments. Robins and Greenland considered a trial of low versus high dose of AZT in AIDS patients where the ITT analysis unexpectedly found that the low dose improved survival [5], possibly due to imbalance in the receipt of a nontrial treatment (prophylaxis against Pneumocystis carinii pneumonia). The authors estimated a structural nested failure time model involving the causal effects of the two treatments in two ways. First, they used two different weighted log-rank tests: since there were more parameters than randomized arms, this method gave very imprecise inferences, and depended strongly on distributional assumptions. Second, they estimated the effect of the nontrial treatment under the assumption that conditioning on recorded clinical variables was sufficient to make the decision to switch treatments independent of potential future outcomes. This gave more precise inferences, but the ‘no unmeasured confounders’ assumption is highly questionable in this and other settings. 
Robins introduced a wide range of methods for equivalence trials based on the assumption of no unmeasured confounders [6]. An alternative approach by Walter et al. [7] extended the approach of Sommer and Zeger [26] to a comparative trial with a binary outcome where participants might receive no treatment. They defined five principal strata [28] by cross-classifying the treatment a participant would receive if randomized to A with the treatment they would receive if randomized to B. To identify the model, they made limited assumptions about comparability of principal strata, either constraining three relative risks to be equal or constraining two pairs of probabilities to be equal: the estimates may be sensitive to such assumptions, and external information is needed to justify them. Roy et al. [8] identified causal effects in different principal strata without assuming either comparability between groups as treated or no unmeasured confounders. However, their method assumed that potential outcomes were independent of covariates within principal strata, and this questionable assumption may have been the key to model identification. Our approach avoids making strong and typically unjustified assumptions, either comparability between groups as treated, possibly after covariate adjustment, or modeling assumptions. It accepts the likely nonidentifiability of the contrasts of interest from the observed data, and instead uses contextual or prior knowledge about the treatments. Is such prior knowledge likely to be available and reliable? In a comparative trial with some individuals receiving no treatment, we require prior knowledge about one of the treatment effects, or perhaps about some linear combination of the two treatment effects. This is very likely to be available from the literature, so we would often be able to use published trials or (ideally) meta-analyses to provide our prior knowledge.
Even rather imprecise prior information is likely to be useful, and prior imprecision is unlikely to carry over substantially into the estimates of treatment contrasts unless nonreceipt of treatment is widespread and unbalanced. Greater difficulties may arise in the case of nontrial treatments, which may be new, untested or inadequately tested treatments. In this case expert opinions may be required. The analysis of the PENTA 5 trial presented here was part of a larger investigation of treatment changes throughout follow-up, which required priors for seven drugs from four classes. Even assuming common drug effects within classes, this was considered to be more information than could be obtained from the literature, since drug effects are likely to differ between children and adults, and few HIV trials have been done in children. Another difficulty with the use of expert prior information is deciding who should provide the prior. Ideally the prior would come from independent experts, but it will often more practically come from the trial investigators [32]. Clearly the prior must be stated so that it can be assessed by readers. Our methodology is a hybrid of instrumental variables and Bayesian methods. Purists will not like this, since full Bayesian methods require a full probability model while instrumental variables allows us to avoid this. However, we believe that our method combines the best aspects of both methodologies, using instrumental variables methods to avoid assumptions about the relationship between the dosage D and the untreated outcome Y, and Bayesian methods to incorporate prior beliefs about nonprotocol parameters. Our results depend on the truth of the causal model. In PENTA 5, this assumed that the causal effect of a dose of drug was proportional to the dose received, and that there were no interactions between drugs. 
These assumptions are not easily checked from the data, since adding further terms to the causal model would only increase the number of nonidentified parameters on which prior information is needed. Therefore, sensitivity analyses are important [33]. In PENTA 5, they could involve adding to the causal model (1) nonlinear terms with assumed coefficient values. Estimation would then require expectations of these nonlinear terms to be computed as in (13) and added into (14). Our computational approach for the PENTA 5 data used Markov chain Monte Carlo (MCMC) methods in WinBUGS. Computationally simpler approaches could be available; one idea might be to draw a small sample from the prior p(αn), analyze the data conditionally on each value of αn, and combine results as for multiply imputed data [34]. The methods presented here can also be extended to analyze longitudinal data, by replacing models (13) and (14) with time-dependent versions. This involves attention to the longitudinal error structure. Finally, our method in the simple case presented requires just knowledge of the mean receipt of each drug on each arm. Some information about treatment received is typically reported, but application of methods such as ours would be facilitated if treatment received were reported in a more standard way.
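The multiple-imputation-style shortcut suggested above can be sketched as follows: draw αn from its prior, form the estimate conditional on each draw (here using our reading of Equation (9) with the Table 1 data), and pool with Rubin's rules. The pooled estimate and standard error approximately reproduce the N(0, 1²) row of Table 2:

```python
import numpy as np

def combine_rubin(estimates, variances):
    """Rubin's rules: pooled estimate is the mean over draws; total variance is
    the mean within-draw variance plus (1 + 1/m) times the between-draw variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    total_var = u.mean() + (1.0 + 1.0 / m) * q.var(ddof=1)
    return q.mean(), np.sqrt(total_var)

# Simple two-arm example (Table 1): draw alpha2 from its N(0, 1^2) prior,
# compute the estimate of alpha1 - alpha2 conditional on each draw, and pool
rng = np.random.default_rng(3)
m = 50
a2 = rng.normal(0.0, 1.0, m)
cond_est = (1.0 - 0.2 * a2) / 0.8                    # conditional point estimates
cond_var = np.full(m, (1 / 100 + 1 / 100) / 0.8**2)  # conditional variances
est, se = combine_rubin(cond_est, cond_var)          # roughly 1.25 and 0.31
```

Here the conditional analysis is a closed form, so the shortcut is exact up to Monte Carlo error; in a real application each draw would trigger a full model fit.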
References: 19 in total

1.  Principal stratification in causal inference.

Authors:  Constantine E Frangakis; Donald B Rubin
Journal:  Biometrics       Date:  2002-03       Impact factor: 2.571

2.  A comparison of intent-to-treat and per-protocol results in antibiotic non-inferiority trials.

Authors:  Erica Brittain; Daphne Lin
Journal:  Stat Med       Date:  2005-01-15       Impact factor: 2.373

3.  Sense and sensitivity when correcting for observed exposures in randomized clinical trials.

Authors:  S Vansteelandt; E Goetghebeur
Journal:  Stat Med       Date:  2005-01-30       Impact factor: 2.373

4.  Uses and limitations of randomization-based efficacy estimators.

Authors:  Ian R White
Journal:  Stat Methods Med Res       Date:  2005-08       Impact factor: 3.021

5.  Discordance between reported intention-to-treat and per protocol analyses.

Authors:  Núria Porta; Catalina Bonet; Erik Cobo
Journal:  J Clin Epidemiol       Date:  2007-04-11       Impact factor: 6.437

6.  Comparison of dual nucleoside-analogue reverse-transcriptase inhibitor regimens with and without nelfinavir in children with HIV-1 who have not previously been treated: the PENTA 5 randomised trial.

Authors: 
Journal:  Lancet       Date:  2002-03-02       Impact factor: 79.321

7.  Practical properties of some structural mean analyses of the effect of compliance in randomized trials.

Authors:  K Fischer-Lapp; E Goetghebeur
Journal:  Control Clin Trials       Date:  1999-12

8.  A new preference-based analysis for randomized trials can estimate treatment acceptability and effect in compliant patients.

Authors:  S D Walter; Gordon Guyatt; Victor M Montori; R Cook; K Prasad
Journal:  J Clin Epidemiol       Date:  2006-03-27       Impact factor: 6.437

9.  Trials to assess equivalence: the importance of rigorous methods.

Authors:  B Jones; P Jarvis; J A Lewis; A F Ebbutt
Journal:  BMJ       Date:  1996-07-06

10.  Instrumental variables and interactions in the causal analysis of a complex clinical trial.

Authors:  Simon J Bond; Ian R White; A Sarah Walker
Journal:  Stat Med       Date:  2007-03-30       Impact factor: 2.373

Cited by: 3 in total

1.  Allowing for missing outcome data and incomplete uptake of randomised interventions, with application to an Internet-based alcohol trial.

Authors:  Ian R White; Eleftheria Kalaitzaki; Simon G Thompson
Journal:  Stat Med       Date:  2011-09-21       Impact factor: 2.373

2.  Causal inference methods to assess safety upper bounds in randomized trials with noncompliance.

Authors:  Yiting Wang; Jesse A Berlin; José Pinheiro; Marsha A Wilcox
Journal:  Clin Trials       Date:  2015-03-01       Impact factor: 2.486

3.  Evaluation of a weighting approach for performing sensitivity analysis after multiple imputation.

Authors:  Panteha Hayati Rezvan; Ian R White; Katherine J Lee; John B Carlin; Julie A Simpson
Journal:  BMC Med Res Methodol       Date:  2015-10-13       Impact factor: 4.615

