Brian Hutton, Georgia Salanti, Anna Chaimani, Deborah M Caldwell, Chris Schmid, Kristian Thorlund, Edward Mills, Ferrán Catalá-López, Lucy Turner, Douglas G Altman, David Moher.
Abstract
INTRODUCTION: Some have suggested the quality of reporting of network meta-analyses (a technique used to synthesize information to compare multiple interventions) is sub-optimal. We sought to review information addressing this claim.
Year: 2014 PMID: 24671099 PMCID: PMC3966807 DOI: 10.1371/journal.pone.0092508
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Summary of Included Studies.
| First Author (Year) | Cited Objectives | Inclusion Criteria | # of NMAs or ITCs Included | Methodologic Components Reviewed | Authors’ Conclusions |
| Nikolakopoulou et al (2014) | Develop an understanding of characteristics of and problems encountered in past network meta-analyses. | Reports published prior to the end of 2012 that included any form of indirect comparison; networks had to include a minimum of 4 treatments and a description of data analysis. | 186 | Outcome, number of included studies, synthesis method (e.g. Bucher method, hierarchical model, meta-regression), approach to evaluation of consistency, control treatment, number of competing interventions, network shape (e.g. star shape, closed loops), category of comparison (pharmacologic versus placebo, etc), outcome measure (e.g. odds ratio, risk ratio, etc) | While NMA validity is highly reliant on assumptions, the reporting and evaluation of these assumptions are commonly not addressed by authors |
| Bafeta et al | To examine whether NMAs follow the key methodological recommendations for reporting and conduct of systematic reviews. | All NMAs from 2003–July 2012 that compared clinical efficacy of three or more interventions based on randomised controlled trials, excluding meta-analyses with an open loop network of three interventions. | 121 | Reporting of general characteristics and key methodological components of the systematic review process (e.g. PRISMA and AMSTAR) using two composite outcomes. For some components, if reporting was adequate, the authors assessed their conduct quality. Authors assessed whether NMAs mentioned or discussed the assumptions required (based on homogeneity, similarity, consistency, and exchangeability). Information regarding assessment of reporting bias and other details were also collected. | Essential methodological components of the systematic review process are frequently lacking in reports of NMAs, even when published in journals with high impact factors. |
| Tan et al | To establish guidance and current practice on summarizing results from indirect comparisons and mixed treatment comparisons; to produce recommendations to improve current practice; to identify research priorities for improving reporting. | HTA programme reports published between 1997–2011 by the NIHR in the UK that employed methods for indirect comparisons or mixed treatment comparisons. | 19 (8 indirect comparisons, 11 mixed treatment comparisons) | Reporting of input data (presentation of the number of interventions, study level data and the relationship structure of the interventions and the studies included in the analysis), methods (specification of Bayesian or frequentist statistical models, software used, presentation of prior distributions used, sensitivity analyses, model convergence assessment), results (presentation of relative and absolute effects, probability of treatment being best, ranking of interventions). | A variety of methods for reporting reviews involving these methods were identified. There is no current standard approach used. Standardization of reporting methods and improvement in the use of graphical approaches to present information are needed. |
| Coleman | Summarize guidance for NMA; summarize traits of published NMAs; understand reasons for choice and implementation of NMA | Studies from 2006–July 2011 were included. Studies had to include 3 or more treatments compared using RCTs; published in full text; Bayesian or frequentist approach accepted; not included in a methods publication or economic evaluation; English publication. | 33 Bayesian (81%), 8 frequentist, 1 both | Type of analysis (Bayesian vs frequentist); reporting of methods choices (prior choice, inconsistency evaluation, heterogeneity assessment, model convergence assessment); number of treatments; number of trials and patients; funding and country; clinical area; network pattern; involvement of a methodologist; fixed or random effects; use of covariate adjustments; account for multi-arm trials; software used; reporting details (type of outcome, summary measure, probabilities and claims of equivalence or non-inferiority). | Further guidance on proper conduct, interpretation and reporting of network meta-analyses is required. |
| Donegan | Review quality of published indirect treatment comparisons to add to empirical data supporting a need for improvements of reporting of such work. | | 43 ITCs (review excluded NMAs); all based on frequentist methods | Study inclusion criteria; ITCs made; clinical area of study; # of trials and patients involved in ITC; the type of data and measure of effect; consideration & assessment of similarity, homogeneity and consistency (with detailed criteria for each); approach to reporting of results and interpretation. Criteria based on the above components used to identify higher quality and lower quality ITCs. | The assumptions of ITCs aren’t always explored or reported. Reporting should be improved by more routine assessment of assumptions and clear statement of methods used for assessments. |
| Song | Assess the basic assumptions and other current limitations in the use of indirect comparisons in systematic reviews of healthcare interventions | Systematic reviews/meta-analyses published between 2007–2007 which used an indirect comparison (based on title/abstract). | 88 indirect comparisons (49/88 frequentist adjusted ITC, 18/88 NMA or Bayesian MTC, 13/88 informal ITC, 6/88 naive ITC, 2/88 unclear) | Collected data related to comprehensiveness of literature search for trials included in indirect comparisons, methods used to conduct indirect comparison, availability of head-to-head trial data, presence/absence of explicit mention of the similarity assumption (and related efforts to explore/improve similarity) | Several key methodologic problems related to authors’ unclear understanding of key assumptions for ITC. Sub-optimal search for evidence, statement and exploration/resolution of trial similarity and inappropriate combination of direct and indirect evidence are all current problems. |
| Brooks-Renney | Review the use of NMAs within published health technology appraisals from NICE. | Technology appraisals from NICE published between 2006–2011 containing the term ‘mixed treatment comparison’. Submissions which were withdrawn, terminated or suspended were not included. | 17 included technology appraisals | Specific criteria gathered are not described. Provides information related to methodologic limitations identified in the conduct of network meta-analyses submitted by manufacturers. | NMA and ITC are increasingly common in technology assessments; however, robust design and methodologies are needed to ensure maximum uptake of their findings. |
| Bending | Review the methodology and impact of ITCs and NMAs submitted by manufacturers on the NICE committee’s appraisal of pharmaceuticals. | Technology appraisals from NICE published between 2006–2011 containing either of the terms ‘indirect treatment comparison’ or ‘mixed treatment comparison’. Submissions which were withdrawn, terminated or suspended were not included. | 24 included technology appraisals | Number of trials included, availability of head-to-head evidence, disease area, treatment comparisons made, justification of study selection, sensitivity analyses related to trial selection, outcomes assessed. Also collected related information on key critiques of ITCs and NMAs. | There is wide variation in the reporting and validity of ITCs and NMAs. There is a need for guidance for the conduct of NMAs and ITCs. |
| Buckley | To investigate factors impacting the outcome of health technology assessment submissions involving indirect and mixed treatment comparisons. | Technology appraisals from NICE published between 2003–2008 | 19 published technology appraisals | Collection of evidence presented in HTA submissions including therapeutic area, clinical comparisons made, degree of direct and indirect evidence, suitability and criticisms of type of analysis and statistical methodologies used, outcome appraisal (full/partial recommendation versus not recommended) | There is a clear increase in, and acceptance of, the use of indirect comparison methods in technology assessments. To maximize their quality, past criticisms related to use of validated methods, proper accounting for population heterogeneity, and clear justification of analysis decisions need to be addressed. |
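Several of the reviews above distinguish synthesis methods such as the Bucher adjusted indirect comparison versus hierarchical models. As a minimal sketch of the Bucher approach (with hypothetical, illustrative effect estimates, not data from any included study): an indirect estimate of A versus B is formed via a common comparator C by differencing the two direct estimates on the log scale, with their variances summed.

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Bucher adjusted indirect comparison of A vs B via common comparator C.
    Inputs are direct effect estimates on the log scale (e.g. log odds ratios)
    with their standard errors."""
    d_ab = d_ac - d_bc                      # indirect estimate of A vs B
    se_ab = math.sqrt(se_ac**2 + se_bc**2)  # variances of independent estimates add
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)  # approximate 95% interval
    return d_ab, se_ab, ci

# Hypothetical log odds ratios: A vs C = -0.50 (SE 0.15), B vs C = -0.20 (SE 0.20)
d_ab, se_ab, (lo, hi) = bucher_indirect(-0.50, 0.15, -0.20, 0.20)
print(d_ab, se_ab)  # indirect A vs B estimate and its (wider) standard error
```

Note how the indirect standard error exceeds either direct one, which is one reason the reviews above emphasize reporting the amount and structure of evidence behind each comparison.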
Summary of Considerations for Methodologic Reporting.
| • Specification of efforts to search for direct and indirect evidence of relevance |
| • Presentation of search terms and the full search strategy (or strategies if separate searches undertaken for each comparison) for one electronic database |
| • Provision of information regarding involvement of primary literature searching versus use of existing systematic reviews as a means of including studies |
| • If existing reviews were used, specification of how these were located and description of their related inclusion criteria |
| • Specification of PICOS eligibility criteria for the review, including specification of all treatments included in the planned meta-analysis |
| • Specification of how related but different implementations of the same agent (e.g. varied doses of pharmacologic treatments) are to be handled, with associated rationale (i.e. address ‘lumping and splitting’ of interventions) |
| • Specification of the assumptions of homogeneity, similarity and consistency (or related terminology used, e.g. transitivity, exchangeability) |
| • Specification of efforts taken by reviewers to evaluate the appropriateness of these assumptions |
| • Specification of what information is being provided to readers to allow them to consider the validity of the assumptions |
| • Specification of details of the approach to statistical analysis taken: hierarchical model, adjusted indirect approach or meta-regression, Frequentist or Bayesian framework, fixed or random effects model, homogeneous or heterogeneous between-trial variance structure |
| • Specification of methods used to assess the degree of statistical heterogeneity and the potential for publication bias within the treatment network |
| • Specification of methods used to evaluate for the presence of disagreement between direct and indirect evidence in the treatment network |
| • Description of statistical methods used to address clinical and methodologic heterogeneity in the analyses (e.g. subgroups, meta-regression including adjustments for baseline risk and the impact of risk of bias variations) |
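The evaluation of disagreement between direct and indirect evidence mentioned above can take several forms (e.g. node-splitting in Bayesian models); a crude frequentist version, sketched here with hypothetical numbers, compares the two estimates of the same contrast via an approximate z-statistic.

```python
import math

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    """Crude check for disagreement between independent direct and indirect
    estimates of the same comparison (both on the same log scale):
    difference divided by its standard error, flagged at the 5% level."""
    omega = d_direct - d_indirect                      # inconsistency factor
    se = math.sqrt(se_direct**2 + se_indirect**2)      # SE of the difference
    z = omega / se
    return omega, z, abs(z) > 1.96

# Hypothetical estimates of the same comparison from the two evidence sources
omega, z, flagged = inconsistency_z(-0.60, 0.10, -0.30, 0.24)
```

A non-flagged result here does not prove consistency, which is why the considerations above also call for reporting what was done when discrepancies are found.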
Summary of Considerations for Reporting and Interpretation of Results.
| • Presentation of a network diagram to summarize identified evidence |
| • Reporting information reflecting the amount of information in the network, e.g. sample sizes, numbers of studies per comparison and the presence of multi-arm studies |
| • Presentation of information allowing readers to assess clinical and methodological heterogeneity within the treatment network: e.g. information tables listing effect modifiers across studies and comparisons. These can include patient characteristics and risk of bias assessments |
| • Information to summarize evaluations of statistical heterogeneity within the treatment network |
| • Information and approach to summarize analyses to assess agreement of direct and indirect sources of evidence (and efforts to improve agreement if discrepancies are found) |
| • What estimates to report: all possible pairwise comparisons? Only those which are comparisons against the chosen reference group or a treatment of primary focus? |
| • Should findings from traditional pairwise analyses also be provided? |
| • Presentation of summary estimates and corresponding uncertainty (i.e. credible/confidence intervals) |
| • Presentation of summary estimates from sensitivity and subgroup analyses |
| • Optimal use of tables and figures to most easily convey results to readers |
| • Presentation of treatment rankings and corresponding probabilities: Should they be included? What should be presented? |
| • Commentary on the clinical and biologic plausibility of the observed findings |
| • Commentary relevant to any important concerns regarding the assumptions underlying the joint synthesis that may play an important role in the strength of interpretations drawn |
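The treatment rankings and probabilities raised in the considerations above are typically derived from posterior samples in a Bayesian NMA. A minimal sketch, using simulated normal draws in place of real posterior samples (the three treatments and their effect distributions are entirely hypothetical), counts how often each treatment has the best (here, lowest) effect across draws.

```python
import random

def rank_probabilities(samples):
    """samples: dict mapping treatment name -> list of simulated effect draws
    of equal length (lower = better here). Returns P(rank 1) per treatment,
    i.e. the fraction of draws in which that treatment has the lowest effect."""
    names = list(samples)
    n = len(next(iter(samples.values())))
    best_count = {t: 0 for t in names}
    for i in range(n):
        draws = {t: samples[t][i] for t in names}
        best_count[min(draws, key=draws.get)] += 1
    return {t: best_count[t] / n for t in names}

random.seed(1)
# Hypothetical posterior-style draws for three treatments vs a common reference
samples = {
    "A": [random.gauss(-0.5, 0.2) for _ in range(2000)],
    "B": [random.gauss(-0.2, 0.2) for _ in range(2000)],
    "C": [random.gauss(0.0, 0.2) for _ in range(2000)],
}
p_best = rank_probabilities(samples)
```

Because such probabilities can look decisive even when effect estimates barely differ, the considerations above ask whether and how they should be presented at all, alongside the underlying estimates and their uncertainty.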