
Missing data in trial-based cost-effectiveness analysis: An incomplete journey.

Baptiste Leurent1, Manuel Gomes2, James R Carpenter1,3.   

Abstract

Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption.
© 2018 The Authors Health Economics published by John Wiley & Sons Ltd.

Keywords:  cost-effectiveness analysis; missing data; multiple imputation; randomised controlled trials; sensitivity analysis

Year:  2018        PMID: 29573044      PMCID: PMC5947820          DOI: 10.1002/hec.3654

Source DB:  PubMed          Journal:  Health Econ        ISSN: 1057-9230            Impact factor:   3.046


INTRODUCTION

Cost-effectiveness analyses (CEA) conducted alongside randomised controlled trials are an important source of information for health commissioners and decision makers. However, clinical trials rarely succeed in collecting all the intended information (Bell, Fiero, Horton, & Hsu, 2014), and inappropriate handling of the resulting missing data can lead to misleading inferences (Little et al., 2012). This issue is particularly pronounced in CEA because these usually rely on collecting rich, longitudinal information from participants, such as their use of healthcare services (e.g., Client Service Receipt Inventory; Beecham & Knapp, 2001) and their health-related quality of life (e.g., EQ-5D-3L; Brooks, 1996). Several guidelines have been published in recent years on the issue of missing data in clinical trials (National Research Council, 2010; Committee for Medicinal Products for Human Use (CHMP), 2011; Burzykowski et al., 2010; Carpenter & Kenward, 2007) and for CEA in particular (Briggs, Clark, Wolstenholme, & Clarke, 2003; Burton, Billingham, & Bryan, 2007; Faria, Gomes, Epstein, & White, 2014; Manca & Palmer, 2005; Marshall, Billingham, & Bryan, 2009). Key recommendations include: taking practical steps to limit the number of missing observations; avoiding methods whose validity rests on contextually implausible assumptions, and instead using methods that incorporate all available information under reasonable assumptions; and assessing the sensitivity of the results to departures from these assumptions. In particular, following Rubin's taxonomy of missing data mechanisms (Little & Rubin, 2002), methods valid under a missing-at-random (MAR) assumption (i.e., when, given the observed data, missingness does not depend on the unseen values) rest on a more plausible assumption than methods requiring data to be missing completely at random, where missingness is assumed to be entirely independent of the variables of interest.
Because we cannot exclude the possibility that the missingness may depend on unobserved values (missing not at random [MNAR]), an assessment of the robustness of the conclusions to alternative missing data assumptions should also be undertaken. Noble and colleagues (Noble, Hollingworth, & Tilling, 2012) have previously reviewed how missing resource use data were addressed in trial‐based CEA. They found that practice fell markedly short of recommendations in several aspects. In particular, that reporting was usually poor and that complete‐case analysis was the most common approach. However, missing data research is a rapidly evolving area, and several of the key guidelines were published after that review. We therefore aimed to review how missing cost‐effectiveness data were addressed in recent trial‐based CEA. We reviewed studies published in the National Institute for Health Research Health Technology Assessment (HTA) journal, as it provides an ideal source for assessing whether recommendations have permeated CEA practice. These reports give substantially more information than a typical medical journal article, allowing authors the space to clearly describe the issues raised by missing data in their study and the methods they used to address these. Our primary objectives were to determine the extent of missing data, how these were addressed in the analysis, and whether sensitivity analyses to different missing data assumptions were performed. We also provide a critical review of our findings and recommendations to improve practice.
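The distinction between these mechanisms can be made concrete with a small simulation (entirely illustrative; the variable names and numbers are ours, not taken from any study reviewed here). Missingness in a cost outcome is generated under MCAR, MAR, and MNAR, and the complete-case mean is compared with the true mean:

```python
import numpy as np

rng = np.random.default_rng(2018)
n = 200_000

# A fully observed baseline covariate, and a cost outcome that depends on it.
baseline = rng.normal(0.0, 1.0, n)
cost = 50.0 + 10.0 * baseline + rng.normal(0.0, 5.0, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Three missingness mechanisms for the cost outcome:
miss_mcar = rng.random(n) < 0.3                        # MCAR: independent of everything
miss_mar = rng.random(n) < sigmoid(baseline)           # MAR: depends only on the observed baseline
miss_mnar = rng.random(n) < sigmoid((cost - 50) / 10)  # MNAR: depends on the unseen cost itself

true_mean = cost.mean()
cc_mean_mcar = cost[~miss_mcar].mean()  # complete-case mean: unbiased under MCAR
cc_mean_mnar = cost[~miss_mnar].mean()  # complete-case mean: biased downwards under MNAR
```

Under MCAR the complete-case mean is unbiased; under MNAR, where high-cost participants are more likely to be missing, it underestimates the true mean, and nothing in the observed data alone can reveal this, which is why sensitivity analysis is needed.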

METHODS

The PubMed database was used to identify all trial-based CEA published in HTA between January 1, 2013, and December 31, 2015. We combined search terms such as “randomised,” “trial,” “cost,” or “economic” to capture relevant articles (see Appendix A.1 for details of the search strategy). The full reports of these articles were downloaded and then screened for eligibility, excluding all studies that were pilot or feasibility studies; reported costs and effects separately (e.g., cost-consequence analysis); or did not report a within-trial CEA. For each included study, we extracted key information about the study and the analysis to answer our primary research questions. A detailed definition of each indicator extracted is provided in Appendix B. In a second stage, we drew on published guidelines and our experience to derive a list of recommendations for addressing missing data, and then re-reviewed the studies to assess the extent to which they followed these recommendations (see Appendix B for further details). Data analysis was conducted with Stata version 15 (StataCorp, 2017). The data from this review are available on request (Leurent, Gomes, & Carpenter, 2017).

RESULTS

Included studies

Sixty-five articles were identified in our search (Figure 1), and 52 eligible studies were included in the review (listed in Appendix A.2). The median CEA time frame was 12 months, and the majority of trials (71%, n = 37) conducted follow-up with repeated assessments over time (median of 2; Table 1). The most common effectiveness measure was the quality-adjusted life year (81%, n = 42). Other outcomes included scores on clinical measures and dichotomous outcomes such as smoking status.
Figure 1

Study selection flow diagram. CEA = cost-effectiveness analyses; HTA = health technology assessment; RCT = randomised controlled trial

Table 1

Characteristics of included studies (n = 52)

                                                     n (%)      Median (IQR)
General characteristics
  Publication year
    2013                                            14 (27)
    2014                                            15 (29)
    2015                                            23 (44)
  CEA time frame
    0–11 months                                     22 (42)
    12 months                                       19 (37)
    ≥24 months                                      11 (21)
  Follow-up design
    Continuous (time to event)                       4 (8)
    One follow-up assessment                        11 (21)
    Repeated assessments                            37 (71)
  Effectiveness measure
    QALY                                            42 (81)
    Binary                                           6 (12)
    Clinical scale score                             3 (6)
    Time to recovery                                 1 (2)
Missing data
  Report exact number of complete cases             20 (38)
  Proportion of complete cases (a)                             0.63 (0.47–0.81)
  Proportion complete effectiveness data (n = 47)              0.73 (0.55–0.86)
  Proportion complete cost data (n = 40)                       0.79 (0.67–0.92)
  Differs between costs and effectiveness (b)
    Yes, more cost data missing                      3 (6)
    Yes, more effect data missing                   10 (19)
    No                                              22 (42)
    No missing (<5%)                                 5 (10)
    Unclear                                         12 (23)
  Differs between arms (c)
    Yes                                             10 (19)
    No                                              32 (62)
    No missing (<5%)                                 5 (10)
    Unclear                                          5 (10)

Note. IQR = interquartile range; QALY = quality-adjusted life year.

(a) Proportion of trial participants with complete cost-effectiveness data. An upper bound was used if the exact number was not reported.

(b) More than 5% difference in the proportion of participants with complete cost or effectiveness data.

(c) More than 5% difference in the proportion of complete cases between arms.


Extent of missing data

Missing data were an issue in almost all studies; only five studies (10%) had less than 5% of participants with missing data. The median proportion of complete cases was 63% (interquartile range, 47–81%; Figure 2). Missing data arose mostly from patient-reported questionnaires (e.g., resource use and quality of life). The extent of missing data was generally similar for cost and effectiveness data, but 10 (19%) studies had more missing data in the latter (Table 1). The proportion of complete cases decreased as the number of follow-up assessments increased (Spearman's rank correlation coefficient ρ = −0.59, p value < .001) and as the study duration increased (ρ = −0.29, p = .04).
Figure 2

Proportion of trial participants with complete data for the primary cost‐effectiveness analysis. Shown for cost‐effectiveness (n = 52), effectiveness (n = 47, unclear in 5 studies), and cost data (n = 40, unclear in 12 studies)


Approach to missing data

The five studies with more than 95% complete cases were excluded from the remaining assessments. Three main approaches to missing data were used: complete-case analysis (CCA; Faria et al., 2014), reported in 66% of studies (n = 31); multiple imputation (MI; Rubin, 1987; 49%, n = 23); and ad hoc hybrid methods (17%, n = 8). For the primary analysis, CCA was the most commonly used method (43%, n = 20), followed by MI (30%, n = 14; Table 2). MI was more common when the proportion of missing data was high and when there were multiple follow-up assessments (see Table 3).
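As a concrete illustration of MI under MAR, the sketch below imputes missing costs from a normal linear model fitted on complete cases and pools across imputations with Rubin's rules. It is a simplified teaching example with simulated data: a proper implementation would also draw the imputation-model parameters from their posterior, and would impute costs and effects jointly, by trial arm.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 5_000, 20  # sample size and number of imputations

# Simulated trial data: cost depends on a fully observed baseline utility score.
baseline = rng.normal(0.7, 0.2, n)
cost = 1000.0 - 800.0 * baseline + rng.normal(0.0, 100.0, n)
# MAR missingness: sicker patients (lower baseline utility) respond less often.
observed = rng.random(n) < 1 / (1 + np.exp(-5 * (baseline - 0.6)))
y_obs, x_obs = cost[observed], baseline[observed]

# Fit the imputation model (normal linear regression) on complete cases.
X = np.column_stack([np.ones(x_obs.size), x_obs])
beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
sigma = np.sqrt(np.sum((y_obs - X @ beta) ** 2) / (x_obs.size - 2))

estimates, variances = [], []
for _ in range(m):
    completed = cost.copy()
    x_mis = baseline[~observed]
    # A fully "proper" MI would also redraw beta and sigma each time; we draw
    # only the residual noise here to keep the sketch short.
    completed[~observed] = beta[0] + beta[1] * x_mis + rng.normal(0, sigma, x_mis.size)
    estimates.append(completed.mean())
    variances.append(completed.var(ddof=1) / n)

# Rubin's rules: pooled point estimate and total variance.
q_bar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / m) * between
```

Because missingness depends on the observed baseline, the complete-case mean is biased for the overall mean cost, whereas the MI estimate recovers it by borrowing the baseline–cost relationship; the between-imputation term in Rubin's rules propagates the uncertainty due to imputation.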
Table 2

Methods for handling missing data in primary analysis (n = 47)

Primary analysis method                                n (%)
Complete-case analysis                                 20 (43)
Multiple imputation                                    14 (30)
Other: single methods
  Inverse probability weighting                         1 (2)
  Bayesian model, missing data as unknown parameter     1 (2)
Other: ad hoc hybrid methods (a)                        8 (17)
  Using a combination of:
    Mean imputation (b)                                 6
    Regression imputation (c)                           3
    Inverse probability weighting (d)                   2
    Assuming failure when outcome missing               2
    Multiple imputation                                 1
    Last observation carried forward                    1
Unclear                                                 3 (6)
Unclear36

(a) Ad hoc hybrid method = several approaches to missing data combined, for example, using mean imputation for missing individual resource use items and multiple imputation for fully incomplete observations.

(b) Mean imputation = replacing missing values by the average across other participants.

(c) Regression imputation = replacing missing values by predicted values based on observed variables.

(d) Inverse probability weighting = analysing complete data, weighted according to their modelled probability of being observed. These methods are presented in more detail elsewhere (Baio & Leurent, 2016; Faria et al., 2014).
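For concreteness, inverse probability weighting can be sketched as follows (simulated data; the logistic model for the probability of being observed is fitted with a hand-rolled Newton-Raphson purely to keep the example self-contained; in practice a standard logistic regression routine would be used):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 20_000

# Simulated data: completing follow-up depends on an observed baseline covariate.
baseline = rng.normal(0.0, 1.0, n)
cost = 500.0 + 200.0 * baseline + rng.normal(0.0, 50.0, n)
p_complete = 1 / (1 + np.exp(-(0.5 - baseline)))  # sicker patients drop out more
complete = rng.random(n) < p_complete

# Fit a logistic model P(complete | baseline) by Newton-Raphson.
X = np.column_stack([np.ones(n), baseline])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (complete - p)
    hess = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)

# Weight each complete case by the inverse of its modelled response probability.
p_hat = 1 / (1 + np.exp(-X @ beta))
weights = 1.0 / p_hat[complete]
ipw_mean = np.average(cost[complete], weights=weights)
cc_mean = cost[complete].mean()  # unweighted complete-case mean, biased here
```

The unweighted complete-case mean underestimates the true mean cost (the high-cost patients are under-represented among completers), while the reweighted mean corrects this, provided the response model conditions on the right observed variables.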

Table 3

Approaches to missing data, by year, number of follow‐ups, and extent of missing data (n = 47)

                                  Primary analysis method             Reported a sensitivity analysis
                                  CCA       MI        Other           Yes       No
                                  n (%)     n (%)     n (%)           n (%)     n (%)
Publication year
  2013 (n = 13)                   6 (46)    3 (23)    4 (31)          5 (38)    8 (62)
  2014 (n = 15)                   9 (60)    1 (7)     5 (33)          6 (40)    9 (60)
  2015 (n = 19)                   5 (26)   10 (53)    4 (21)         11 (58)    8 (42)
Number of follow-up assessments (a)
  1 (n = 10)                      7 (70)    1 (10)    2 (20)          3 (30)    7 (70)
  ≥2 (n = 36)                    13 (36)   13 (36)   10 (28)         18 (50)   18 (50)
Proportion of complete cases (b)
  <50% (n = 15)                   4 (27)    6 (40)    5 (33)          8 (53)    7 (47)
  50–75% (n = 18)                10 (56)    4 (22)    4 (22)          9 (50)    9 (50)
  75–95% (n = 14)                 6 (43)    4 (29)    4 (29)          5 (36)    9 (64)
Information missing (c)
  Similar (n = 22)               13 (59)    6 (27)    3 (14)         10 (45)   12 (55)
  More cost missing (n = 3)       1 (33)    2 (67)    0 (0)           2 (67)    1 (33)
  More effect missing (n = 10)    4 (40)    2 (20)    4 (40)          6 (60)    4 (40)

Note. % = row percentages. CCA = complete-case analysis; MI = multiple imputation.

(a) Excluding one study with continuous follow-up (n = 46).

(b) For the five studies with less than 5% of incomplete cases, four used CCA and one an ad hoc hybrid method for their primary analysis. One of the five studies conducted a sensitivity analysis to missing data.

(c) Excluding 12 studies where this was unclear (n = 35).


Sensitivity analyses

Over half of the studies (53%, n = 25) did not conduct any sensitivity analysis around missing data, with 21% (n = 10) reporting CCA results alone and 11% (n = 5) MI results under MAR alone (Table 4). The remaining studies (n = 22, 47%) assessed the sensitivity of their primary analysis results to other approaches to the missing data. This usually meant performing MI under MAR when CCA was the primary analysis, or vice versa. Other sensitivity analyses included using last observation carried forward or regression imputation.
Table 4

Sensitivity analysis, overall, and by primary analysis method (n = 47)

                               None       Sensitivity analysis method
                                          CCA        MI (MAR)   MNAR      Other (a)
                               n (%)      n (%)      n (%)      n (%)     n (%)
Overall
  Total (n = 47)               25 (53)   11 (23)     9 (19)     2 (4)     5 (11)
By primary analysis
  CCA (n = 20)                 10 (50)    0 (0)      8 (40)     0 (0)     2 (10)
  MI (n = 14)                   5 (36)    9 (64)     0 (0)      2 (14)    2 (14)
  Other (n = 13)               10 (77)    2 (15)     1 (8)      0 (0)     1 (8)

Note. % = row percentages; CCA = complete‐case analysis; MAR = assuming data missing at random; MI = multiple imputation; MNAR = assuming data missing not at random. Total may be more than 100% as some studies conducted more than one sensitivity analysis.

(a) Other methods used for sensitivity analysis include last observation carried forward (n = 1), regression imputation (n = 1), adjusting for baseline predictors of missingness (n = 1), imputing by average of observed values for that patient (n = 1), and an ad hoc hybrid method using multiple and mean imputation (n = 1).

Only two studies (4%) conducted sensitivity analyses assuming data could be MNAR. In both studies, values imputed under a standard MI were modified to incorporate possible departures from the MAR assumption for both the cost and effectiveness data, using a simplified pattern-mixture model approach (Faria et al., 2014; Leurent et al., 2018). The studies then discussed the plausibility of these departures from MAR and their implications for the cost-effectiveness inferences.

Recommendations criteria

Table 5 reports the number of studies with evidence of following the recommendations from Figure 3 (see Section 4). Most studies showed awareness of the risk of missing data, for example, by taking active steps to reduce it (n = 35, 74%). In addition, almost two-thirds of the studies (n = 29, 62%) reported the breakdown of missing data by arm, time point, and endpoint. Only about one-third of the studies clearly reported the reasons for the missing data (n = 16, 34%) and the approach used for handling the missing data and its underlying assumptions (n = 17, 36%). Only one study (2%) appropriately discussed the implications of missing data for its cost-effectiveness conclusions.
Table 5

Review of indicators based on recommendations criteria (n = 47)

Criterion (a)                                          Met (b)     Not met     Unclear
                                                       n (%)       n (%)       n (%)
Prevent
  A1. Maximise response rate                           35 (74)     12 (26)     0 (0)
  A2. Alternative data sources                         10 (21)     37 (79)     0 (0)
  A3. Monitor completeness                             17 (36)     30 (64)     0 (0)
Primary
  B1. Assumption for primary analysis                  17 (36)     27 (57)     3 (6)
  B2. Appropriate primary method                       17 (36)     27 (57)     3 (6)
Sensitivity
  C1. Discuss departures from the primary assumption    0 (0)      47 (100)    0 (0)
  C2. Consider broad range of assumptions               2 (4)      45 (96)     0 (0)
  C3. Method valid under these assumptions              2 (4)      45 (96)     0 (0)
Report
  D1. Missing data by endpoint, arm, and time point    29 (62)     18 (38)     0 (0)
  D2. Discuss reasons for missing data                 16 (34)     31 (66)     0 (0)
  D3. Describe methods used and assumptions            17 (36)     30 (64)     0 (0)
  D4. Conclusions in light of missing data              1 (2)      46 (98)     0 (0)

(a) See Figure 3 and Appendix B for the definition of each criterion.

(b) Met: the report demonstrates evidence of having followed this recommendation. Not met: the recommendation was not followed or not mentioned. Unclear: there were suggestions the criterion may have been met, but the information was not clear enough. See Appendix B for detailed definitions and the methodology used.

Figure 3

Recommendations for improving handling of missing data in trial‐based cost‐effectiveness analysis. References: 1, Little et al., 2012; 2, Noble et al., 2012; 3, Faria et al., 2014; and 4, Carpenter and Kenward 2007 [Colour figure can be viewed at http://wileyonlinelibrary.com]


DISCUSSION

Summary of findings

Missing data remain ubiquitous in trial-based CEA. The median proportion of participants with complete cost-effectiveness data was only 63%. This reflects the typical challenges faced by CEA of randomised controlled trials, which often rely on patient questionnaires to collect key resource use and health outcome data. Despite best efforts to ensure completeness, a substantial proportion of nonresponse is likely. This is consistent with other reviews, which also found no reduction in the extent of missing data in trials over time (Bell et al., 2014).

CCA remains the most commonly used approach for handling missing data in trial-based CEA, in contrast to recommendations. This approach makes the restrictive assumption that, given the variables in the analysis model, the distribution of the outcome data is the same whether or not those outcome data are observed. It is also problematic because it can result in a loss of precision, as it discards participants who have partially complete postrandomisation data and who can provide important information to the analysis. Other unsatisfactory approaches based on unrealistic assumptions, such as last observation carried forward and single imputation, are also occasionally used.

MI (Rubin, 1987) assuming MAR has been widely recommended for CEA (Briggs et al., 2003; Burton et al., 2007; Faria et al., 2014; Marshall et al., 2009), as it allows baseline variables and postrandomisation data not in the primary analysis to be used for the imputation. It now seems to be more commonly used, with around half of the studies using MI for at least one of their analyses (up to 74% in 2015). Around one-third of the studies used MI for their primary CEA, which is higher than seen in primary clinical outcome analyses (8%; Bell et al., 2014). On the other hand, sensitivity analyses to missing data remain clearly insufficient.
Only two studies (4%) conducted comprehensive sensitivity analyses and assessed whether the study's conclusions were sensitive to departures from the MAR assumption (i.e., possible MNAR mechanisms). Half of the studies did not conduct any sensitivity analysis regarding the missing data. The remaining studies performed some sort of sensitivity analysis, but these usually consisted of simple variations on the primary analysis, such as reporting CCA results in addition to MI. Such analyses may be reported more for completeness than as proper missing data sensitivity analyses. For example, if MI is used for the primary analysis (having assumed that MAR is the realistic primary missing data assumption), a sensitivity analysis that involves CCA will make stronger missing data assumptions.

Strengths and limitations

Our review follows naturally from the review of Noble et al. (2012) and gives an update of the state of play after the publication of several key guidelines. Our review, however, differs in scope and methods and cannot be directly compared with the results of Noble et al. One of the key strengths of this review is that the comprehensive HTA reports allowed us to obtain a more complete picture of the missing data and the methods used to address them. HTA monographs are published alongside more succinct peer-reviewed papers in specialist medical journals, and they are often seen as the "gold standard" for trial-based CEA in the UK. It therefore seems reasonable to assume that they are representative of typical practice in CEA. This review is, to our knowledge, the first to look at the completeness of both cost and effectiveness data. A limitation is the use of a single indicator, the "proportion of complete cases", to capture the extent of the missing data issue. This is, however, a clearly defined indicator that allows comparison with other reviews. The "recommendations indicators" also focused on the information reported in the study, not necessarily on what might have been done in practice.

Recommendations

A list of recommendations to address missing data in trial‐based CEA is presented in Figure 3. Trial‐based CEA are prone to missing data, and it is important that analysts take active steps at the design and data‐collection stages to limit their extent (Bernhard et al., 2006; Brueton et al., 2013; National Research Council, 2010). Resource use questionnaires should be designed in a user‐friendly way, and their completion encouraged during follow‐up visits, possibly supported by a researcher (Mercieca‐Bebber et al., 2016; National Research Council, 2010). Alternative sources should also be considered to minimise missing information, for example, administrative data or electronic health records (Franklin & Thorn, 2018; Noble et al., 2012). For any study with missing data, clear reporting of the issue is required. Ideally, the study should report details of the pattern of missing data (Faria et al., 2014), possibly as an appendix. At a minimum, CEA studies should report for each analysis the number of participants included by trial arm, as recommended in the Consolidated Standards of Reporting Trials guidelines (Noble et al., 2012; Schulz et al., 2010). Although CCA may be justifiable in some circumstances, the choice of CCA for the primary analysis approach appears difficult to justify in the presence of repeated measurements, because the loss of power (by discarding all patients with any missing values) across the different time points tends to be large. Other approaches valid under more plausible MAR assumptions and making use of all the observed data, such as MI (Rubin, 1987); likelihood‐based repeated measures models (Faria et al., 2014; Verbeke, Fieuws, Molenberghs, & Davidian, 2014); or Bayesian models (Ades et al., 2006), should be considered. In particular, MI has been increasingly used in CEA, and further guidance to support an appropriate use in this context is warranted. An area with clear room for improvement is the conduct of sensitivity analyses. 
This review found that many studies used CCA for the primary analysis and MI as a sensitivity analysis, or vice‐versa, and concluded that the results were robust to missing data. This is misleading because both of these methods rely on the assumption that the missingness is independent of the unobserved data. Although the MAR assumption provides a sensible starting point, it is not possible to determine the true missing‐data mechanism from the observed data. Studies should therefore assess whether their conclusions are sensitive to possible departures from that assumption (National Research Council, 2010; Committee for Medicinal Products for Human Use (CHMP), 2011; Faria et al., 2014). Several approaches have been suggested to conduct analyses under MNAR assumptions. Selection models express how the probability of being missing is related to the value itself. Pattern‐mixture models, on the other hand, capture how missing data could differ from the observed (Molenberghs et al., 2014; Ratitch, O'Kelly, & Tosiello, 2013). Pattern‐mixture models appear attractive because they frame the departure from MAR in a way that can be more readily understood by clinical experts and decision makers and can be used with standard analysis methods such as MI (Carpenter & Kenward, 2012; Ratitch et al., 2013). MNAR modelling can be challenging, but accessible approaches have also been proposed (Faria et al., 2014; Leurent et al., 2018). Further developments are still needed to use these methods in the CEA context and to provide the analytical tools and practical guidance to implement them in practice.
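The pattern-mixture idea can be sketched in a few lines of what is often called a "delta adjustment": impute under MAR, then shift the imputed values by an offset delta expressing how much worse unobserved participants are assumed to be, and track the treatment effect across a range of delta. Everything below is illustrative (simulated QALYs, single stochastic imputation for brevity rather than full MI):

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_arm = 2_000

# Simulated QALYs in two trial arms; treatment improves QALYs by 0.05 on average.
qaly = {"control": rng.normal(0.70, 0.20, n_per_arm),
        "treatment": rng.normal(0.75, 0.20, n_per_arm)}
# Around 30% of participants have missing QALYs (MCAR here, for simplicity).
missing = {arm: rng.random(n_per_arm) < 0.3 for arm in qaly}

def delta_adjusted_effect(delta_control, delta_treatment):
    """Impute from the arm-specific observed distribution (a stand-in for MAR
    imputation), then shift imputed values by an arm-specific delta
    (pattern-mixture adjustment), and return the incremental QALYs."""
    means = {}
    for arm, delta in (("control", delta_control), ("treatment", delta_treatment)):
        obs = qaly[arm][~missing[arm]]
        imputed = rng.normal(obs.mean(), obs.std(ddof=1), missing[arm].sum()) + delta
        means[arm] = np.concatenate([obs, imputed]).mean()
    return means["treatment"] - means["control"]

# Sensitivity analysis: assume non-responders in the treatment arm did
# progressively worse than MAR predicts, while the control arm stays MAR.
effects = {d: delta_adjusted_effect(0.0, d) for d in (0.0, -0.05, -0.10, -0.20)}
```

At delta = 0 the analysis reproduces the MAR result (a QALY gain close to the simulated 0.05); as the assumed departure in the treatment arm grows, the estimated gain shrinks, showing in miniature how a cost-effectiveness conclusion can hinge on untestable assumptions about the missing values.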

CONCLUSION

Missing data can be an important source of bias and uncertainty, and it is imperative that this issue is appropriately recognised and addressed to help ensure that CEA studies provide sound evidence for healthcare decision making. Over the last decade, there have been some welcome improvements in handling missing data in trial‐based CEA. In particular, more attention has been devoted to assessing the reasons for the missing data and adopting methods (e.g., MI) that can incorporate those in the analysis. However, there is substantial room for improvement. Firstly, more efforts are needed to reduce missing data. Secondly, the extent and patterns of missing data should be more clearly reported. Thirdly, the primary analysis should consider methods that make contextually plausible assumptions rather than resort automatically to CCA. Lastly, sensitivity analyses to assess the robustness of the study's results to potential MNAR mechanisms should be conducted.

CONFLICT OF INTEREST

The authors have no conflict of interest.
APPENDIX A.1: SEARCH STRATEGY

Search 1 (236 items found):
("Health Technol Assess"[Journal]) AND ("2013/01/01"[Date - Publication] : "2015/12/31"[Date - Publication])

Search 2 (91 items found):
("Health Technol Assess"[Journal]) AND ("2013/01/01"[Date - Publication] : "2015/12/31"[Date - Publication]) AND ("randomised"[Title] OR "randomized"[Title] OR "trial"[Title])

Search 3 (74 items found):
("Health Technol Assess"[Journal]) AND ("2013/01/01"[Date - Publication] : "2015/12/31"[Date - Publication]) AND ("randomised"[Title] OR "randomized"[Title] OR "trial"[Title]) AND ("economic"[Title/Abstract] OR "cost*"[Title/Abstract])

Search 4 (65 items found; final search):
("Health Technol Assess"[Journal]) AND ("2013/01/01"[Date - Publication] : "2015/12/31"[Date - Publication]) AND ("randomised"[Title] OR "randomized"[Title] OR "trial"[Title]) AND ("economic"[Title/Abstract] OR "cost*"[Title/Abstract]) NOT ("pilot"[Title] OR "feasibility"[Title])
APPENDIX B: INDICATOR DEFINITIONS

Proportion of complete cases
  Definition: Proportion of randomised participants for whom all data were available for the primary cost-effectiveness analysis.
  Notes: If the number of complete cases was not clearly reported, we estimated an "upper bound" from information such as the proportion of participants with complete cost, or effect, data. See the definition of primary analysis below.

Proportion complete effectiveness data
  Definition: Proportion of randomised participants for whom all effectiveness data were available for the primary cost-effectiveness analysis.
  Notes: Same as above.

Proportion complete cost data
  Definition: Proportion of randomised participants for whom all cost data were available for the primary cost-effectiveness analysis.
  Notes: Same as above.

Report exact number of complete cases
  Definition: Whether the number of participants with complete cost and effectiveness data was clearly reported.

More missing costs or effectiveness
  Definition: Whether the proportion of complete cases differs between the cost and effectiveness variables.
  Notes: Considered "similar" when the proportions of complete cases were within 5% of each other.

Primary analysis method
  Definition: Methods used to address missing data in the primary (base-case) cost-effectiveness analysis.
  Notes: When multiple effectiveness measures, time frames, or cost perspectives were reported without a clearly defined base case, we considered the analysis based on quality-adjusted life years (QALYs) over the longest within-trial follow-up period, from the NHS and social services cost perspective.

Conducted a sensitivity analysis to missing data
  Definition: Reported results under more than one approach for addressing missing data.
Recommendation indicators. Each recommendation is listed with its indicator definition, examples scored "yes", examples scored "no", and notes (where applicable).

A1. Maximise response rate (consider questionnaire design, mode of administration, reminders, incentives, participants' engagement, etc.)
  Indicator: Mention of taking steps to maximise the response rate.
  "Yes" examples: Reminders, incentives, home/hospital visits, multiple contact attempts.
  "No" examples: Response rate maximised for the clinical outcome, but nothing reported for the cost-effectiveness (CE) endpoints.
  Notes: May refer to overall trial data if this implicitly includes cost or effect data, unless the steps are clearly for non-CE variables only (e.g., the primary outcome only).

A2. Consider alternative data sources (e.g., routinely collected data)
  Indicator: Mention that missing data were considered when choosing an appropriate source, OR that more than one source was used for a CE variable.
  "Yes" examples: Use of electronic health records or administrative data, e.g., Hospital Episode Statistics used to supplement trial data (for example, on post-randomisation hospital admissions) that might otherwise be missing.
  "No" examples: Routine data used only as a primary source, e.g., resource use taken primarily from administrative/hospital records.

A3. Monitor cost-effectiveness data completeness while the trial is ongoing
  Indicator: Mention of monitoring data completeness during the trial.
  "Yes" examples: Data managers checked inconsistent and missing data (where it is unclear whether this happened while the trial was ongoing, a mention of monitoring is acceptable); new steps taken to reduce missing data (e.g., incentives) after substantial missingness was noticed once the trial had started.
  "No" examples: Data checked for inconsistencies, but no mention of checking for missing data.
  Notes: May refer to overall trial data, unless monitoring is clearly for non-CE variables only (e.g., the primary outcome only).

B1. Formulate a realistic and accessible missing data assumption for the primary analysis (typically, but not necessarily, a form of the missing at random assumption)
  Indicator: Primary (base-case) CEA based on reasonable missing data assumptions (typically MAR, or an alternative if well justified).
  "Yes" examples: Multiple imputation (MI) used for the primary analysis; a well-justified and clearly described alternative.
  "No" examples: Hybrid method, unless the underlying assumptions are clearly explained and justified.

B2. Use an appropriate method valid under that assumption (typically, but not necessarily, multiple imputation or maximum likelihood)
  Indicator: Use of an appropriate analysis method.
  "Yes" examples: MI for the primary analysis; Bayesian analysis under MAR; a well-justified and clearly described alternative.
  "No" examples: Unadjusted complete-case analysis (CCA) while stating that data are MAR.

C1. Discuss with clinicians and investigators to formulate plausible departures from the primary missing data assumption
  Indicator: Conducted an MNAR sensitivity analysis and mentioned elicitation.
  "No" examples: Did not conduct an MNAR sensitivity analysis.

C2. Consider a broad range of assumptions, including missing not at random mechanisms
  Indicator: Conducted an MNAR sensitivity analysis.
  "No" examples: Did not conduct an MNAR sensitivity analysis.

C3. Use an appropriate method valid under these assumptions (typically, but not necessarily, pattern-mixture models or a reference-based approach)
  Indicator: Conducted an MNAR sensitivity analysis using an appropriate method (pattern-mixture model, etc.).
  "No" examples: Did not conduct an MNAR sensitivity analysis.

D1. Report the number of participants with cost and outcome data, by arm and time point
  Indicator: Number (or %) of complete or missing observations reported, split at least by effectiveness vs. cost, by time point (when applicable), and by arm.
  "No" examples: Missing data reported by endpoint and arm, but not by time point.
  Notes: Does not have to appear in a single place (endpoint + time point + arm together); three separate tables or passages are acceptable.

D2. Report possible reasons for non-response, and baseline predictors of missing values
  Indicator: Mention of the main reasons for missing data, OR exploration of factors associated with missingness.
  "Yes" examples: Comment on why data are missing (e.g., "because patients were too ill"); exploration of baseline factors associated with missingness.
  "No" examples: No mention of reasons for missing data in the CE section.
  Notes: Must be specific to the CE missing data, or clearly signposted (e.g., "reasons for missing data are discussed in the clinical analysis section").

D3. Describe the methods used, and the underlying missing data assumptions
  Indicator: Clear statement of the method used to address missing data AND of its underlying assumption.
  "No" examples: No report of the missing data assumption or of the method used.

Draw an overall conclusion in light of the different results and the plausibility of the respective assumptions
  Indicator: Sensitivity analyses conducted, with results interpreted appropriately.
  "Yes" examples: Conducted an MNAR sensitivity analysis and drew an appropriate conclusion.
  "No" examples: No sensitivity analyses conducted; sensitivity analyses conducted but with no comment or conclusion; both MI and CCA run, with only a statement such as "results did not change / were robust to missing data".
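Recommendations B1-B2 and C1-C3 can be illustrated with a small, self-contained sketch. This is not a method from the paper or the reviewed studies: the imputation below is a simplified stochastic regression imputation (a full analysis would typically use chained equations and Rubin's rules for the variance), and the MNAR scenario is a hypothetical pattern-mixture "delta adjustment" in which missing intervention-arm costs are assumed £1,000 higher than the MAR model predicts. All data and parameter values are simulated for illustration only.

```python
import numpy as np

# Toy two-arm trial: QALYs fully observed, costs partly missing,
# with missingness depending on the observed QALYs (an MAR mechanism).
rng = np.random.default_rng(42)
n = 400
arm = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
qaly = rng.normal(0.70 + 0.05 * arm, 0.10, n)    # effectiveness, fully observed
cost = rng.normal(2000 + 300 * arm - 1500 * (qaly - 0.7), 250, n)
missing_prob = 0.15 + 0.30 * (qaly < 0.7)        # sicker patients respond less
cost_obs = np.where(rng.random(n) < missing_prob, np.nan, cost)

def impute_once(cost_obs, qaly, arm, delta_treat, rng):
    """One stochastic regression imputation of missing costs given QALYs and arm.
    delta_treat shifts imputed intervention-arm costs: a pattern-mixture
    (delta-adjustment) departure from MAR; delta_treat = 0 recovers MAR."""
    miss = np.isnan(cost_obs)
    X = np.column_stack([np.ones(len(qaly)), qaly, arm])
    beta, *_ = np.linalg.lstsq(X[~miss], cost_obs[~miss], rcond=None)
    resid_sd = np.std(cost_obs[~miss] - X[~miss] @ beta)
    imp = cost_obs.copy()
    imp[miss] = X[miss] @ beta + rng.normal(0, resid_sd, miss.sum())
    imp[miss & (arm == 1)] += delta_treat
    return imp

def incremental_net_benefit(cost, qaly, arm, wtp=20_000):
    """Mean incremental net monetary benefit at willingness-to-pay wtp per QALY."""
    nb = wtp * qaly - cost
    return nb[arm == 1].mean() - nb[arm == 0].mean()

def mi_inb(delta_treat, m=20, seed=1):
    """Average the INB point estimate over m imputed data sets
    (Rubin's rules for the point estimate only; no variance combination)."""
    r = np.random.default_rng(seed)
    return float(np.mean([
        incremental_net_benefit(impute_once(cost_obs, qaly, arm, delta_treat, r),
                                qaly, arm)
        for _ in range(m)
    ]))

inb_mar = mi_inb(delta_treat=0.0)      # base case, valid under MAR (B1/B2)
inb_mnar = mi_inb(delta_treat=1000.0)  # MNAR sensitivity scenario (C2/C3)
print(f"INB under MAR: {inb_mar:.0f}; under MNAR scenario: {inb_mnar:.0f}")
```

Reporting both estimates, together with the plausibility of each assumption, is the kind of sensitivity analysis and overall conclusion that recommendations C1-C3 and the final row above call for.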
References (71 in total)

1.  Saline in acute bronchiolitis RCT and economic evaluation: hypertonic saline in acute bronchiolitis - randomised controlled trial and systematic review.

Authors:  Mark L Everard; Daniel Hind; Kelechi Ugonna; Jennifer Freeman; Mike Bradburn; Simon Dixon; Chin Maguire; Hannah Cantrill; John Alexander; Warren Lenney; Paul McNamara; Heather Elphick; Philip Aj Chetcuti; Eduardo F Moya; Colin Powell; Jonathan P Garside; Lavleen Kumar Chadha; Matthew Kurian; Ravinderjit S Lehal; Peter I MacFarlane; Cindy L Cooper; Elizabeth Cross
Journal:  Health Technol Assess       Date:  2015-08       Impact factor: 4.014

2.  A randomised controlled trial of computerised cognitive behaviour therapy for the treatment of depression in primary care: the Randomised Evaluation of the Effectiveness and Acceptability of Computerised Therapy (REEACT) trial.

Authors:  Elizabeth Littlewood; Ana Duarte; Catherine Hewitt; Sarah Knowles; Stephen Palmer; Simon Walker; Phil Andersen; Ricardo Araya; Michael Barkham; Peter Bower; Sally Brabyn; Gwen Brierley; Cindy Cooper; Linda Gask; David Kessler; Helen Lester; Karina Lovell; Usman Muhammad; Glenys Parry; David A Richards; Rachel Richardson; Debbie Tallon; Puvan Tharmanathan; David White; Simon Gilbody
Journal:  Health Technol Assess       Date:  2015-12       Impact factor: 4.014

3.  Effectiveness and economic evaluation of self-help educational materials for the prevention of smoking relapse: randomised controlled trial.

Authors:  Annie Blyth; Vivienne Maskrey; Caitlin Notley; Garry R Barton; Tracey J Brown; Paul Aveyard; Richard Holland; Max O Bachmann; Stephen Sutton; Jo Leonardi-Bee; Thomas H Brandon; Fujian Song
Journal:  Health Technol Assess       Date:  2015-07       Impact factor: 4.014

4.  A randomised controlled trial of Outpatient versus inpatient Polyp Treatment (OPT) for abnormal uterine bleeding.

Authors:  T Justin Clark; Lee J Middleton; Natalie Am Cooper; Lavanya Diwakar; Elaine Denny; Paul Smith; Laura Gennard; Lynda Stobert; Tracy E Roberts; Versha Cheed; Tracey Bingham; Sue Jowett; Elizabeth Brettell; Mary Connor; Sian E Jones; Jane P Daniels
Journal:  Health Technol Assess       Date:  2015-07       Impact factor: 4.014

5.  Clinical effectiveness and cost-effectiveness results from the randomised controlled Trial of Oral Mandibular Advancement Devices for Obstructive sleep apnoea-hypopnoea (TOMADO) and long-term economic analysis of oral devices and continuous positive airway pressure.

Authors:  Linda Sharples; Matthew Glover; Abigail Clutterbuck-James; Maxine Bennett; Jake Jordan; Rebecca Chadwick; Marcus Pittman; Clare East; Malcolm Cameron; Mike Davies; Nick Oscroft; Ian Smith; Mary Morrell; Julia Fox-Rushby; Timothy Quinnell
Journal:  Health Technol Assess       Date:  2014-10       Impact factor: 4.014

6.  Randomised controlled trial of tumour necrosis factor inhibitors against combination intensive therapy with conventional disease-modifying antirheumatic drugs in established rheumatoid arthritis: the TACIT trial and associated systematic reviews.

Authors:  David L Scott; Fowzia Ibrahim; Vern Farewell; Aidan G O'Keeffe; Margaret Ma; David Walker; Margaret Heslin; Anita Patel; Gabrielle Kingsley
Journal:  Health Technol Assess       Date:  2014-10       Impact factor: 4.014

7.  Protocolised Management In Sepsis (ProMISe): a multicentre randomised controlled trial of the clinical effectiveness and cost-effectiveness of early, goal-directed, protocolised resuscitation for emerging septic shock.

Authors:  Paul R Mouncey; Tiffany M Osborn; G Sarah Power; David A Harrison; M Zia Sadique; Richard D Grieve; Rahi Jahan; Jermaine C K Tan; Sheila E Harvey; Derek Bell; Julian F Bion; Timothy J Coats; Mervyn Singer; J Duncan Young; Kathryn M Rowan
Journal:  Health Technol Assess       Date:  2015-11       Impact factor: 4.014

8.  The London Exercise And Pregnant smokers (LEAP) trial: a randomised controlled trial of physical activity for smoking cessation in pregnancy with an economic evaluation.

Authors:  Michael Ussher; Sarah Lewis; Paul Aveyard; Isaac Manyonda; Robert West; Beth Lewis; Bess Marcus; Muhammad Riaz; Adrian H Taylor; Pelham Barton; Amanda Daley; Holly Essex; Dale Esliger; Tim Coleman
Journal:  Health Technol Assess       Date:  2015-10       Impact factor: 4.014

9.  Folate Augmentation of Treatment--Evaluation for Depression (FolATED): randomised trial and economic evaluation.

Authors:  Emma Bedson; Diana Bell; Daniel Carr; Ben Carter; Dyfrig Hughes; Andrea Jorgensen; Helen Lewis; Keith Lloyd; Andrew McCaddon; Stuart Moat; Joshua Pink; Munir Pirmohamed; Seren Roberts; Ian Russell; Yvonne Sylvestre; Richard Tranter; Rhiannon Whitaker; Clare Wilkinson; Nefyn Williams
Journal:  Health Technol Assess       Date:  2014-07       Impact factor: 4.014

10.  The clinical effectiveness and cost-effectiveness of brief intervention for excessive alcohol consumption among people attending sexual health clinics: a randomised controlled trial (SHEAR).

Authors:  Mike J Crawford; Rahil Sanatinia; Barbara Barrett; Sarah Byford; Madeleine Dean; John Green; Rachael Jones; Baptiste Leurent; Anne Lingford-Hughes; Michael Sweeting; Robin Touquet; Peter Tyrer; Helen Ward
Journal:  Health Technol Assess       Date:  2014-05       Impact factor: 4.014

Related articles (18 in total)

1.  An Educational Review About Using Cost Data for the Purpose of Cost-Effectiveness Analysis.

Authors:  Matthew Franklin; James Lomas; Simon Walker; Tracey Young
Journal:  Pharmacoeconomics       Date:  2019-05       Impact factor: 4.981

2.  Out of Date or Best Before? A Commentary on the Relevance of Economic Evaluations Over Time.

Authors:  Gemma E Shields; Becky Pennington; Ash Bullement; Stuart Wright; Jamie Elvidge
Journal:  Pharmacoeconomics       Date:  2021-12-06       Impact factor: 4.981

3.  Healthcare utilization and related costs among older people seeking primary care due to back pain: findings from the BACE-N cohort study.

Authors:  Rikke Munk Killingmo; Kjersti Storheim; Danielle van der Windt; Zinajda Zolic-Karlsson; Ørjan Nesse Vigdal; Lise Kretz; Milada Cvancarova Småstuen; Margreth Grotle
Journal:  BMJ Open       Date:  2022-06-20       Impact factor: 3.006

4.  Economic evaluation of the Target-D platform to match depression management to severity prognosis in primary care: A within-trial cost-utility analysis.

Authors:  Yong Yi Lee; Cathrine Mihalopoulos; Mary Lou Chatterton; Susan L Fletcher; Patty Chondros; Konstancja Densley; Elizabeth Murray; Christopher Dowrick; Amy Coe; Kelsey L Hegarty; Sandra K Davidson; Caroline Wachtler; Victoria J Palmer; Jane M Gunn
Journal:  PLoS One       Date:  2022-05-25       Impact factor: 3.752

5.  Modifiable prognostic factors of high costs related to healthcare utilization among older people seeking primary care due to back pain: an identification and replication study.

Authors:  Rikke Munk Killingmo; Alessandro Chiarotto; Danielle A van der Windt; Kjersti Storheim; Sita M A Bierma-Zeinstra; Milada C Småstuen; Zinajda Zolic-Karlsson; Ørjan N Vigdal; Bart W Koes; Margreth Grotle
Journal:  BMC Health Serv Res       Date:  2022-06-18       Impact factor: 2.908

6.  The statistical approach in trial-based economic evaluations matters: get your statistics together!

Authors:  Elizabeth N Mutubuki; Mohamed El Alili; Judith E Bosmans; Teddy Oosterhuis; Frank J Snoek; Raymond W J G Ostelo; Maurits W van Tulder; Johanna M van Dongen
Journal:  BMC Health Serv Res       Date:  2021-05-19       Impact factor: 2.655

7.  Missing data in trial-based cost-effectiveness analysis: An incomplete journey.

Authors:  Baptiste Leurent; Manuel Gomes; James R Carpenter
Journal:  Health Econ       Date:  2018-03-24       Impact factor: 3.046

8.  Sensitivity Analysis for Not-at-Random Missing Data in Trial-Based Cost-Effectiveness Analysis: A Tutorial.

Authors:  Baptiste Leurent; Manuel Gomes; Rita Faria; Stephen Morris; Richard Grieve; James R Carpenter
Journal:  Pharmacoeconomics       Date:  2018-08       Impact factor: 4.981

9.  Is the whole larger than the sum of its parts? Impact of missing data imputation in economic evaluation conducted alongside randomized controlled trials.

Authors:  Bernhard Michalowsky; Wolfgang Hoffmann; Kevin Kennedy; Feng Xie
Journal:  Eur J Health Econ       Date:  2020-02-27

10.  A Bayesian framework for health economic evaluation in studies with missing data.

Authors:  Alexina J Mason; Manuel Gomes; Richard Grieve; James R Carpenter
Journal:  Health Econ       Date:  2018-07-03       Impact factor: 3.046

