
Meta-Analyses Proved Inconsistent in How Missing Data Were Handled Across Their Included Primary Trials: A Methodological Survey.

Lara A Kahale1, Assem M Khamis2, Batoul Diab1, Yaping Chang3, Luciane Cruz Lopes4, Arnav Agarwal3,5, Ling Li6, Reem A Mustafa3,7, Serge Koujanian8, Reem Waziry9, Jason W Busse3,10,11,12, Abeer Dakik1, Lotty Hooft13, Gordon H Guyatt3,14, Rob J P M Scholten13, Elie A Akl1,3.   

Abstract

BACKGROUND: How systematic review authors address missing data among eligible primary studies remains uncertain.
OBJECTIVE: To assess whether systematic review authors are consistent in the way they handle missing data, both across trials included in the same meta-analysis, and with their reported methods.
METHODS: We first identified 100 eligible systematic reviews that included a statistically significant meta-analysis of a patient-important dichotomous efficacy outcome. Then, we successfully retrieved 638 of the 653 trials included in these systematic reviews' meta-analyses. From each trial report, we extracted statistical data used in the analysis of the outcome of interest to compare with the data used in the meta-analysis. First, we used these comparisons to classify the "analytical method actually used" for handling missing data by the systematic review authors for each included trial. Second, we assessed whether systematic reviews explicitly reported their analytical method of handling missing data. Third, we calculated the proportion of systematic reviews that were consistent in their "analytical method actually used" across trials included in the same meta-analysis. Fourth, among systematic reviews that were consistent in the "analytical method actually used" across trials and explicitly reported on a method for handling missing data, we assessed whether the "analytical method actually used" and the reported methods were consistent.
RESULTS: We were unable to determine the "analytical method reviews actually used" for handling missing outcome data among 397 trials. Among the remaining 241, systematic review authors most commonly conducted "complete case analysis" (n=128, 53%) or assumed "none of the participants with missing data had the event of interest" (n=58, 24%). Only eight of 100 systematic reviews were consistent in their approach to handling missing data across included trials, and seven of these eight did not report a method for handling missing data. Among the seven reviews that did explicitly report their analytical method of handling missing data, only one was consistent in its approach across included trials (using complete case analysis), and that approach was inconsistent with its reported method (assuming all participants with missing data had the event).
CONCLUSION: The majority of systematic review authors were inconsistent in their approach towards reporting and handling missing outcome data across eligible primary trials, and most did not explicitly report their methods to handle missing data. Systematic review authors should clearly identify missing outcome data among their eligible trials, specify an approach for handling missing data in their analyses, and apply their approach consistently across all primary trials.
© 2020 Kahale et al.

Keywords:  assumption; meta-analysis; missing data; randomized controlled trial; systematic review

Year:  2020        PMID: 32547244      PMCID: PMC7266325          DOI: 10.2147/CLEP.S242080

Source DB:  PubMed          Journal:  Clin Epidemiol        ISSN: 1179-1349            Impact factor:   4.790


Key Message

Systematic review authors were inconsistent in their methods of handling missing data across included trials. Most systematic review authors did not explicitly report their methods for handling missing data. Systematic review authors may simply use what trialists have reported, without consciously planning a method to handle missing data. Systematic review authors should clearly describe an approach for handling missing outcome data and apply this approach consistently across the trials eligible for their review.

Background

Reporting of missing outcome data in randomized controlled trials (RCTs) is often suboptimal.1 Randomized controlled trials typically report the overall prevalence of study participants who failed to complete the study;2 however, not all outcome measures may be similarly affected. Some trial participants may have experienced one or more outcomes (and had them documented) before discontinuing the study prematurely. Also, it is not always clear whether RCT authors followed all participants, such as those who withdrew consent to participate (i.e., whether or not they have missing data).1 Moreover, RCT authors often fail to clearly describe how they handled missing outcome data in their analyses.1,2 The poor reporting of missing outcome data in RCTs necessitates that systematic review authors develop plans to address this issue.3–16 However, a recent methodological survey found that only 25% of systematic review authors reported strategies to address whether certain categories of participants (e.g., those who withdrew consent or were non-compliant) might have missing outcome data, and only 19% of systematic reviews reported a method for handling missing data (e.g., complete case analysis, making assumptions).17 Even when systematic review authors decide to handle missing outcome data in their analysis, they may do so inconsistently across trials included in the same meta-analysis. As an illustrative scenario, consider a systematic review that plans to include only participants with available outcome data in its meta-analysis (i.e., to use complete case analysis):18 one would expect the denominators of all trials included in that meta-analysis to be restricted to participants with available outcome data. However, for one trial,19 the review authors used the total number randomized as the denominator (despite the trial having participants with missing data), while for another trial,20 they excluded participants with missing data from the denominator.
In such a scenario, we observe two main potential problems: (1) the analytical method review authors actually used for handling missing data is inconsistent across trials included in the same meta-analysis; and (2) the analytical method review authors actually used for handling missing data is, for some trials, inconsistent with their methods. These issues may complicate the reproducibility of systematic reviews and bias results. The extent of these problems remains, however, unclear.

Objective

The overall objective of this study was to assess whether systematic review authors are consistent in the way they handle missing data, both across trials included in the same meta-analysis and with their reported methods. More specifically, we aimed to: (1) classify the methods systematic review authors actually used for handling missing data for each included trial; (2) assess whether systematic review authors explicitly reported their method of handling missing data; (3) assess the extent to which systematic review authors were consistent in the methods actually used across trials included in the same meta-analysis; and (4) when consistent, assess whether the methods the systematic review authors actually used were consistent with their reported methods (if reported).

Methods

Study Design and Definitions

This methodological study is part of a larger project examining methodological issues related to missing outcome data in systematic reviews and RCTs.21 Our published protocol includes detailed information on the definitions, eligibility criteria, search strategy, selection process, data extraction and data analysis.21 A patient-important outcome is defined as an outcome for which a patient would answer with “yes” to the following question: If the patient knew that this outcome was the only thing to change with treatment, would the patient consider receiving this treatment if associated with burden, side effects, or cost?21 We defined missing data as outcome data for trial participants that are not available to systematic review authors from the published RCT reports or personal contact with RCT authors. We used our recently published guidance22 to identify categories of trial participants who might have missing outcome data.

Sample Selection

Our random sample included 50 Cochrane and 50 non-Cochrane systematic reviews published in 2012 that reported a statistically significant, group-level meta-analysis of a patient-important dichotomous efficacy outcome.17 We identified all 653 RCTs included in the 100 meta-analyses of interest.1 Eleven pairs of reviewers extracted data, in duplicate and independently, from the systematic reviews and RCTs, and resolved disagreements with the help of a third reviewer. We conducted calibration exercises and used standardized, pilot-tested forms with detailed written instructions to improve the reliability of data extraction.

Classifying the “Analytical Method Reviews Actually Used” for Handling Missing Data

Authors of reviews may fail to clearly report their approach to handling missing data. Alternatively, the approach they report in their methods may not correspond with the method they actually used. Therefore, we established the "analytical method reviews actually used" for handling missing data using the following steps. From each RCT report, we extracted (per study arm) the number of participants randomized, the numerator (i.e., the number of events) used in the analysis of interest, and the number of participants with missing data. From the meta-analysis (forest plot plus text) and for each arm of all contributing RCTs, we extracted the denominator and the numerator used in the meta-analysis of interest. We then compared the statistical data from the RCT report with the data from the meta-analysis. Based on this comparison, we classified the "analytical method reviews actually used" for handling missing data as one of:

- Unclear, cannot be verified (the numbers provided could not be explained or did not add up to match any of the candidate analytical methods);
- Complete case analysis;
- Making assumptions (e.g., best case scenario, all participants had the event);
- Different methods (from the above) for different categories of participants with missing data;
- Not applicable, no missing data.

Table 1 lists commonly used methods of handling missing outcome data of trial participants. The hypothetical examples in Table 2 illustrate how different meta-analyses addressing the same study question (i.e., same patients, interventions, comparators, and outcomes) may handle missing data from a single eligible RCT, and how we classified the "analytical method reviews actually used" in each case. We also assessed the confidence of data extractors in classifying the analytical method actually used, i.e., whether it was based on explicit reporting (higher confidence) or a best guess (lower confidence).
Table 1

Commonly Used Methods of Handling Missing Outcome Data Among Trial Participants

(Entries describe the implications for participants with missing data in the numerator and denominator of each arm.)

Method of handling missing data | Intervention arm: numerator | Intervention arm: denominator | Control arm: numerator | Control arm: denominator
Complete case analysis | Excluded | Excluded | Excluded | Excluded
Best case scenario | Assumed that all had a favorable outcome | Included | Assumed that all had an unfavorable outcome | Included
None of the participants with missing data had the outcome | Assumed that none had the outcome | Included | Assumed that none had the outcome | Included
All participants with missing data had the outcome | Assumed that all had the outcome | Included | Assumed that all had the outcome | Included
Worst case scenario | Assumed that all had an unfavorable outcome | Included | Assumed that all had a favorable outcome | Included
Table 2

Examples Illustrating How Meta-Analyses Addressing the Same Study Question Might Handle Missing Outcome Data for an Unfavorable Dichotomous Outcome from an RCT Report and Thus Informed Classification of the “Analytical Method Reviews Actually Used”

RCT report data (per arm: number randomized / number of events / number with missing data)

RCT 1 | Intervention arm: 100 / 5 / 2 | Control arm: 100 / 10 / 5

Meta-analysis data (per arm: denominator / numerator) and our classification of the actual analytical method used by the SR

MA 1 | Intervention: 98 / 5 | Control: 95 / 10 | Complete case analysis
MA 2 | Intervention: 100 / 5 | Control: 100 / 10 | Assumed none had the event
MA 3 | Intervention: 100 / 7 | Control: 100 / 15 | Assumed that all had the event
MA 4 | Intervention: 100 / 7 | Control: 100 / 10 | Worst-case scenario
MA 5 | Intervention: 100 / 5 | Control: 100 / 15 | Best-case scenario

Abbreviations: MA, meta-analysis; RCT, randomized controlled trial; SR, systematic review.
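The classification logic behind Table 2 can be sketched in code. This is a hypothetical illustration (not a tool the authors describe): for an unfavorable dichotomous outcome, it compares the denominator and numerator a meta-analysis used for each arm against the trial report's randomized, event, and missing counts, and maps the per-arm patterns onto Table 2's labels. All function and label names are our own.

```python
def classify_arm(randomized, events, missing, denom, num):
    """Label how one arm's participants with missing data were handled,
    by matching the meta-analysis numbers against the RCT report."""
    if denom == randomized - missing and num == events:
        return "excluded"      # missing participants dropped: complete case
    if denom == randomized and num == events:
        return "no_events"     # missing kept in denominator, assumed event-free
    if denom == randomized and num == events + missing:
        return "all_events"    # missing kept in denominator, assumed to have the event
    return "unclear"

def classify(rct_int, rct_ctl, ma_int, ma_ctl):
    """Combine the per-arm labels into Table 2's classifications,
    assuming the outcome is unfavorable (an event is bad)."""
    pattern = (classify_arm(*rct_int, *ma_int), classify_arm(*rct_ctl, *ma_ctl))
    return {
        ("excluded", "excluded"):     "complete case analysis",
        ("no_events", "no_events"):   "assumed none had the event",
        ("all_events", "all_events"): "assumed all had the event",
        ("all_events", "no_events"):  "worst-case scenario",
        ("no_events", "all_events"):  "best-case scenario",
    }.get(pattern, "unclear / different methods")

# RCT 1 of Table 2: (randomized, events, missing) per arm
rct_i, rct_c = (100, 5, 2), (100, 10, 5)
print(classify(rct_i, rct_c, (98, 5), (95, 10)))    # MA 1 -> complete case analysis
print(classify(rct_i, rct_c, (100, 7), (100, 10)))  # MA 4 -> worst-case scenario
```

Note that when a trial has no participants with missing data, several patterns coincide; this corresponds to the paper's "not applicable, no missing data" category, which a real classifier would need to check first.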

Also, for systematic reviews that reported on having participants with missing outcome data, we assessed whether systematic review authors used the same denominator and/or numerator as the one(s) reported in the RCTs that contributed to their meta-analysis.

Consistency Between Analytical Methods Reported, and Used, for Handling Missing Outcome Data

After classifying the "analytical method reviews actually used" for handling missing data (aim 1), we assessed whether the authors explicitly reported their analytical method of handling missing data, which, if present, we designated as the "reported analytical method" (aim 2). Then, for each meta-analysis, we assessed whether the "analytical method reviews actually used" for handling missing data was consistent across trials within that meta-analysis (aim 3). If so, we explored whether the "analytical method reviews actually used" was consistent with the "reported analytical method" (aim 4). We displayed the results of the "reported" and "actual" analytical methods in a matrix (see Table 3).
Table 3

Hypothetical Scenarios Illustrating the Process for Judging Consistency Between “Reported” and “Actual” Analytical Methods for Addressing Missing Outcome Data

Systematic review 1 (reported analytical method: assume all had the event)
 RCT 1: complete case analysis
 RCT 2: assume none had the event
 RCT 3: different methods for different categories of participants with missing data
 Consistent within the meta-analysis: No. Consistent with reported method: not applicable, since the analytical methods actually used were inconsistent across trials.

Systematic review 2 (reported analytical method: complete case analysis)
 RCT 4: complete case analysis
 RCT 5: complete case analysis
 RCT 6: complete case analysis
 Consistent within the meta-analysis: Yes. Consistent with reported method: Yes.

Systematic review 3 (reported analytical method: assume all had the event)
 RCT 7: complete case analysis
 RCT 8: complete case analysis
 RCT 9: complete case analysis
 Consistent within the meta-analysis: Yes. Consistent with reported method: No.

Systematic review 4 (reported analytical method: not reported)
 RCT 10: complete case analysis
 RCT 11: complete case analysis
 RCT 12: complete case analysis
 Consistent within the meta-analysis: Yes. Consistent with reported method: not applicable, since no reported analytical method is available.

Systematic review 5 (reported analytical method: not reported)
 RCT 13: complete case analysis
 RCT 14: assume none had the event
 RCT 15: different methods for different categories of participants with missing data
 Consistent within the meta-analysis: No. Consistent with reported method: not applicable, since no reported analytical method is available.
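The two consistency judgments of Table 3 can be expressed as a short sketch (hypothetical code with illustrative labels, not the authors' tooling): consistency within the meta-analysis requires all trials to share one actual method, and consistency with the reported method is only assessable when both a single actual method and a reported method exist.

```python
def judge_consistency(reported, actual_per_trial):
    """Return (consistent within meta-analysis, consistent with reported method),
    mirroring the logic of Table 3. `reported` is None when no method was reported."""
    within = "Yes" if len(set(actual_per_trial)) == 1 else "No"
    if within == "No":
        versus = "Not applicable (actual methods inconsistent across trials)"
    elif reported is None:
        versus = "Not applicable (no reported analytical method)"
    else:
        versus = "Yes" if actual_per_trial[0] == reported else "No"
    return within, versus

# Systematic review 3 of Table 3: reported "assume all had the event",
# but complete case analysis was actually used in every trial
print(judge_consistency("assume all had the event", ["complete case analysis"] * 3))
# -> ('Yes', 'No')
```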

Statistical Analysis

Using SPSS statistical software, version 21.0,23 we conducted a descriptive analysis (frequencies and percentages) of all collected variables. We also planned to conduct regression analyses to study the association between “consistency between actual and reported method” and characteristics of included systematic reviews.

Results

Our sample of 100 systematic reviews with significant pooled effect estimates included 653 RCTs that informed the meta-analyses of interest, of which we acquired full-text reports for 638. We have previously reported the details of these systematic reviews17 and the included RCTs.1 Briefly, 400 RCTs (63%) reported on at least one category of participants who were either explicitly not followed up or had unclear follow-up status. Among these 400 RCTs, the median percentage of such participants was 11.7% (IQR 5.6–23.7%).1 Among trials with missing outcome data, the meta-analyses to which they contributed most often used the denominator (81%) and numerator (80%) reported by the RCT (Table 4).
Table 4

“Analytical Method Reviews Actually Used” to Handle Missing Data Across RCTs Included in Meta-Analysis

Variable | n (%)

Ability to classify the analytical method reviews actually used (n=638)
 Able to classify | 241 (37.8)
 Not applicable (no missing data) | 207 (32.4)
 Could not be explained (numbers do not add up) | 161 (25.2)
 Wrong data extraction | 5 (0.8)
 No data available from RCT or SR | 24 (3.8)

Classification of the analytical method reviews actually used (n=241+)
 Complete case analysis | 128 (53.1)
 None of the participants with missing data had the event of interest | 58 (24.1)
 All the participants with missing data had the event of interest | 2 (0.8)
 Worst-case scenario | 1 (0.4)
 Best-case scenario | 0
 Same event rate as those followed up | 0
 Other | 1 (0.4)
 Different methods for different categories of participants with missing data | 51 (21.2)

The SR authors used in the meta-analysis a denominator used by the RCT (n=431*)
 Definitely yes | 348 (80.7)
 Definitely no | 65 (15.1)
 Unclear | 18 (4.2)

The SR authors used in the meta-analysis a numerator used by the RCT (n=431*)
 Definitely yes | 345 (80.0)
 Definitely no | 53 (12.3)
 Unclear | 33 (7.7)

Notes: +n=241 RCTs for which we could classify an "analytical method reviews actually used". *n=431 RCTs, excluding those with no missing data (638 minus the 207 RCTs that had no missing data).

Abbreviations: ITT, intention-to-treat; LTFU, lost to follow-up; RCT, randomized controlled trials; SR, systematic reviews.

We were able to classify the "analytical method reviews actually used" for 241 (38%) of the included RCTs; 67% of these were classified with lower confidence (best guess) and 33% with higher confidence (based on explicit reporting) (Table 4). Of the remaining RCTs, 207 (32%) included no participants with missing data (complete follow-up), 161 (25%) provided numbers that could not be explained (they did not add up to match any of the candidate analytical methods), 5 (1%) had had the wrong data extracted (e.g., data from the wrong outcome), and 24 (4%) provided insufficient information for even a best guess at the method used to handle missing data. Among the 241 included RCTs for which we were able to classify the "analytical method reviews actually used", systematic review authors conducted "complete case analysis" in 128 (53%), assumed "none of the participants with missing data had the event of interest" in 58 (24%), and used different methods for different categories of participants with missing data in 51 (21%) trials. In four RCTs (2%), assumptions other than the five we explored (Table 1) were used. Only seven of the 100 systematic reviews we assessed explicitly reported methods to handle missing data in their meta-analysis: two planned a complete case analysis, two proposed assuming that all participants with missing data had the event of interest, and three reported their intention to assume that none of the participants with missing data had the event of interest.

Consistency in Analytical Methods

Of the seven systematic reviews that explicitly reported on the analytical method for handling missing data, only one was consistent in handling missing data across all included trials (using complete case analysis) (Figure 1). However, the analytical method actually used was not consistent with their “reported analytical methods” (“if missing data were unable to be obtained, a result was assumed to have a particular value, such as poor outcome”).24 Of the 93 systematic reviews that did not explicitly report their analytical method of handling missing data, seven were consistent in their actual analytical method for handling missing data across all included trials.
Figure 1

Consistency in analytical methods within the same meta-analysis and versus the reported analytical method.

Abbreviations: RCTs: randomized controlled trials; SR: systematic review.

Due to the low number of reviews that were consistent within the same meta-analysis, we were not able to conduct any regression analysis to study the association between "consistency between actual and reported method" and characteristics of the included systematic reviews.

Discussion

Summary of Findings

In this systematic survey of Cochrane and non-Cochrane systematic reviews, we found that almost all reviews handled trial participants' missing data inconsistently. Most reviews did not specify an approach for handling missing data, and of the few that did, none applied their reported approach consistently across eligible trials.

Strengths and Limitations

The main strengths of our study are its systematic and transparent methods, including independent duplicate screening, calibration exercises and pilot-tested data extraction forms to increase reliability, and a systematic strategy for making the numerous classification judgments involved. To our knowledge, this is the first methodological survey exploring how systematic review authors actually dealt with trials' missing outcome data in their meta-analyses. It is also the first study to assess whether the methods used for handling missing outcome data in the meta-analysis are consistent with the "reported analytical methods". A limitation of our study is that we considered only dichotomous outcome data; the methods for handling missing continuous data are different, and our findings may not be generalizable to continuous outcomes.25,26 Another limitation was our reliance on reviewers' judgments at different stages of the process (e.g., judgments regarding the actual analytical method used to handle missing data). Our development and application of a logically coherent strategy for making these classification judgments may mitigate this concern. Further, our sample included systematic reviews published in 2012 and may not reflect more recent reviews; however, recent surveys suggest that the reporting, handling, and assessment of risk of bias in relation to missing data have not improved since we acquired the reviews used in our study.16,27–29

Interpretation of Findings

Both the challenge we faced in classifying the "analytical method reviews actually used" (25% of RCTs provided numbers that could not be explained) and the observed inconsistency in handling missing data within the same meta-analysis reflect the failure of reviewers to adopt standardized approaches to reporting and dealing with missing data.22,30,31 This inconsistency may bias the results and could produce disparate findings among different meta-analyses addressing the same research question, even when considering the same trials.32 We uncovered three limitations in how systematic review authors handle missing data in their meta-analyses:

- Ninety-three percent did not explicitly report their methods for handling missing data.
- Ninety-two percent were inconsistent in the methods used to handle missing data across RCTs within the same meta-analysis.
- In the few meta-analyses that did explicitly report a method for handling missing data, none actually applied that method.

We also found that for more than 80% of RCTs with missing outcome data contributing to the meta-analyses of interest, the systematic review authors used the same denominator and numerator as those reported by the trialists. Thus, systematic review authors may simply use what trialists have reported, without consciously planning a method to handle missing data. As trialists use different approaches to handling missing outcome data, this practice might explain why systematic review authors are not consistent in their approach to handling missing data across trials included in the same meta-analysis. In other cases, systematic review authors and trial authors, intending to apply the "intention to treat" principle, include the total number of participants randomized in the denominator while using whatever numerator the trial authors reported, thereby implicitly assuming that "none of the participants with missing data had the event". Given that this assumption is highly implausible in a real-world context, confidence in the findings would be lower.33,34
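A small numerical illustration (hypothetical counts of our own choosing) shows why pairing the randomized denominator with the observed numerator amounts to the "none had the event" assumption:

```python
# Hypothetical arm: 100 randomized, 5 observed events, 2 participants missing
randomized, events, missing = 100, 5, 2

complete_case_risk = events / (randomized - missing)  # 5/98: missing excluded
itt_style_risk     = events / randomized              # 5/100: randomized denominator
none_had_event     = (events + 0) / randomized        # missing contribute 0 events

# Using the full randomized denominator with the observed numerator is
# arithmetically identical to assuming no events among the missing:
assert itt_style_risk == none_had_event
print(round(complete_case_risk, 3), itt_style_risk)   # -> 0.051 0.05
```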

Recommendations for Practice

In order to ensure consistency in handling missing data across trials included in the same meta-analysis, authors should:

- Develop a transparent and detailed strategy for handling missing data (e.g., using complete case analysis, applying assumptions);35–38
- Refer to available guidance on how to identify participants with missing data from RCT reports;22
- Apply their strategy for handling missing data consistently across all trials included in the meta-analysis;
- Report clearly on the above.

Conclusion

The large majority of systematic reviews considered in our study did not report a method for handling missing data in their meta-analyses. For the few that did, the actual method used for handling missing outcome data was often inconsistent with their reported methods. As such inconsistency might threaten the validity of the results of systematic reviews, methodologic rigor requires improved adherence to guidance on identifying, reporting, and handling participants with missing outcome data.
References: 35 in total

1.  Several reasons explained the variation in the results of 22 meta-analyses addressing the same question.

Authors:  Assem M Khamis; Mohamad El Moheb; Johny Nicolas; Ghida Iskandarani; Marwan M Refaat; Elie A Akl
Journal:  J Clin Epidemiol       Date:  2019-05-29       Impact factor: 6.437

2.  GRADE guidelines 17: assessing the risk of bias associated with missing participant outcome data in a body of evidence.

Authors:  Gordon H Guyatt; Shanil Ebrahim; Pablo Alonso-Coello; Bradley C Johnston; Alexander G Mathioudakis; Matthias Briel; Reem A Mustafa; Xin Sun; Stephen D Walter; Diane Heels-Ansdell; Ignacio Neumann; Lara A Kahale; Alfonso Iorio; Joerg Meerpohl; Holger J Schünemann; Elie A Akl
Journal:  J Clin Epidemiol       Date:  2017-05-18       Impact factor: 6.437

Review 3.  Antiarrhythmics for maintaining sinus rhythm after cardioversion of atrial fibrillation.

Authors:  Carmelo Lafuente-Lafuente; Lucie Valembois; Jean-François Bergmann; Joël Belmin
Journal:  Cochrane Database Syst Rev       Date:  2015-03-28

4.  Addressing continuous data measured with different instruments for participants excluded from trial analysis: a guide for systematic reviewers.

Authors:  Shanil Ebrahim; Bradley C Johnston; Elie A Akl; Reem A Mustafa; Xin Sun; Stephen D Walter; Diane Heels-Ansdell; Pablo Alonso-Coello; Gordon H Guyatt
Journal:  J Clin Epidemiol       Date:  2014-03-05       Impact factor: 6.437

5.  Amiodarone versus sotalol for atrial fibrillation.

Authors:  Bramah N Singh; Steven N Singh; Domenic J Reda; X Charlene Tang; Becky Lopez; Crystal L Harris; Ross D Fletcher; Satish C Sharma; J Edwin Atwood; Alan K Jacobson; H Daniel Lewis; Dennis W Raisch; Michael D Ezekowitz
Journal:  N Engl J Med       Date:  2005-05-05       Impact factor: 91.245

6.  Efficacy and safety of sotalol in digitalized patients with chronic atrial fibrillation. The Sotalol Study Group.

Authors:  S Singh; R K Saini; J DiMarco; J Kluger; R Gold; Y W Chen
Journal:  Am J Cardiol       Date:  1991-11-01       Impact factor: 2.778

7.  A review of RCTs in four medical journals to assess the use of imputation to overcome missing data in quality of life outcomes.

Authors:  Shona Fielding; Graeme Maclennan; Jonathan A Cook; Craig R Ramsay
Journal:  Trials       Date:  2008-08-11       Impact factor: 2.279

Review 8.  Reporting missing participant data in randomised trials: systematic survey of the methodological literature and a proposed guide.

Authors:  Elie A Akl; Khaled Shawwa; Lara A Kahale; Thomas Agoritsas; Romina Brignardello-Petersen; Jason W Busse; Alonso Carrasco-Labra; Shanil Ebrahim; Bradley C Johnston; Ignacio Neumann; Ivan Sola; Xin Sun; Per Vandvik; Yuqing Zhang; Pablo Alonso-Coello; Gordon H Guyatt
Journal:  BMJ Open       Date:  2015-12-30       Impact factor: 2.692

Review 9.  A systematic review of randomised controlled trials in rheumatoid arthritis: the reporting and handling of missing data in composite outcomes.

Authors:  Fowzia Ibrahim; Brian D M Tom; David L Scott; Andrew Toby Prevost
Journal:  Trials       Date:  2016-06-02       Impact factor: 2.279

Review 10.  Reporting and dealing with missing quality of life data in RCTs: has the picture changed in the last decade?

Authors:  S Fielding; A Ogbuagu; S Sivasubramaniam; G MacLennan; C R Ramsay
Journal:  Qual Life Res       Date:  2016-09-20       Impact factor: 4.147

