
The methodological quality of individual participant data meta-analysis on intervention effects: systematic review.

Huan Wang1, Yancong Chen1, Yali Lin1, Julius Abesig1, Irene Xy Wu2, Wilson Tam3.   

Abstract

OBJECTIVE: To assess the methodological quality of individual participant data (IPD) meta-analysis and to identify areas for improvement.
DESIGN: Systematic review.
DATA SOURCES: Medline, Embase, and Cochrane Database of Systematic Reviews.
ELIGIBILITY CRITERIA FOR SELECTING STUDIES: Systematic reviews with IPD meta-analyses of randomised controlled trials on intervention effects published in English.
RESULTS: 323 IPD meta-analyses covering 21 clinical areas and published between 1991 and 2019 were included: 270 (84%) were non-Cochrane reviews and 269 (84%) were published in journals with a high impact factor (top quarter). The IPD meta-analyses showed low compliance in using a satisfactory technique to assess the risk of bias of the included randomised controlled trials (43%, 95% confidence interval 38% to 48%), accounting for risk of bias when interpreting results (40%, 34% to 45%), providing a list of excluded studies with justifications (32%, 27% to 37%), establishing an a priori protocol (31%, 26% to 36%), prespecifying methods for assessing both the overall effects (44%, 39% to 50%) and the participant-intervention interactions (31%, 26% to 36%), assessing and considering the potential of publication bias (31%, 26% to 36%), and conducting a comprehensive literature search (19%, 15% to 23%). Up to 126 (39%) IPD meta-analyses failed to obtain IPD from 90% or more of eligible participants or trials, among which only 60 (48%) provided reasons and 21 (17%) undertook certain strategies to account for the unavailable IPD.
CONCLUSIONS: The methodological quality of IPD meta-analyses is unsatisfactory. Future IPD meta-analyses need to establish an a priori protocol with prespecified data syntheses plan, comprehensively search the literature, critically appraise included randomised controlled trials with appropriate technique, account for risk of bias during data analyses and interpretation, and account for unavailable IPD. © Author(s) (or their employer(s)) 2019. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.


Year:  2021        PMID: 33875446      PMCID: PMC8054226          DOI: 10.1136/bmj.n736

Source DB:  PubMed          Journal:  BMJ        ISSN: 0959-8138


Introduction

Well conducted systematic reviews with meta-analysis of randomised controlled trials are considered to be the best source of evidence on intervention effects.1 Meta-analysis is generally done by collecting aggregate data from publications or investigators.1 The aggregated data provide an average estimation of the intervention effect (eg, risk ratio or mean difference) in a group of patients with average characteristics (eg, diabetes diagnosis, mean age 50 years, 40% women).2 This might limit the exploration of potential intervention-covariate interactions. Moreover, the validity of aggregate data meta-analysis is affected by the reporting quality of the randomised controlled trials and inconsistent definition of the outcomes across included trials.1 By collecting original data from the eligible primary studies, an individual participant data (IPD) meta-analysis has the ability to collect both published and unpublished data, derive standardised outcome definitions, use a consistent unit of analysis across included randomised controlled trials, and assess interactions between interventions and participants’ characteristics. 
These advantages have led to the IPD meta-analysis being regarded as the ideal approach for providing evidence on intervention effect estimation.3 4 IPD meta-analysis has shown substantial impact on clinical practice and research by informing the development of guidelines and design of randomised controlled trials.5 6 The number of yearly published IPD meta-analyses has increased over time, from eight in 1994 to 88 in 2014.4 These numbers, however, only refer to those IPD meta-analyses that were incorporated into systematic reviews—even larger numbers were published each year when IPD meta-analyses without systematic reviews were considered.4 The results from systematic reviews with meta-analysis, based on either aggregate data or IPD, are, however, not free from bias.7 8 Empirical evidence has indicated flaws when aggregate data meta-analyses are used for various medical conditions.9 10 11 12 Commonly reported problems include lack of a predefined protocol, comprehensive literature search, list of excluded studies with justifications, and check of funding information for the included studies.9 10 11 12 These in turn might threaten the validity of the evidence derived from systematic reviews. 
For instance, pre-established protocols and lists of excluded studies with justifications will prevent the exclusion of studies with unfavourable findings.1 7 It is, however, more difficult to design and conduct an IPD meta-analysis than an aggregate data meta-analysis, and bias could affect the validity of the results.8 Published IPD meta-analyses have shown evidence of inconsistencies in the methods used to estimate intervention effects (eg, the one stage method, involving simultaneous analysis of IPD retrieved from eligible studies, and the two stage method, in which IPD are first analysed separately for each study and then combined using a traditional meta-analysis method13 14), how participant level covariates are assessed (eg, by participant subgroups, by trial subgroups, or using meta-regression), and whether trial variation was accounted for when combining IPD (eg, some IPD meta-analyses treated IPD from different trials as a mega-trial).15 16 Such discrepancies suggest that evidence users such as researchers, clinicians, guideline developers, and policy makers should perform critical appraisal before applying evidence from IPD meta-analyses. Although IPD meta-analysis is a well established approach for synthesising evidence and has a direct impact on guideline development, its methodological quality is still unclear. In their paper, Tierney and colleagues offered guidance to evidence users on how to critically appraise the scientific rigour of IPD meta-analyses.2 They proposed eight key questions (composed of 31 signalling questions): four applied to IPD meta-analyses (questions 3, 4, 7, and 8) and the remainder to systematic reviews (questions 1, 2, 5, and 6). 
The eight key questions did not, however, cover some important methodological quality related components of systematic reviews, such as conflicts of interest and risk of bias of included studies.2 AMSTAR-2 (A MeaSurement Tool to Assess systematic Reviews-2) is a well developed, validated, and widely used tool for assessing the methodological quality of systematic reviews.7 It has been used to critically appraise systematic reviews of various medical conditions.9 17 18 19 AMSTAR-2 covers general methodological components of systematic reviews but has no specific item for assessing the unique methodological components of IPD meta-analysis. It has not been used to assess the methodological quality of IPD meta-analyses. We conducted a systematic review to describe the characteristics of an up-to-date sample of IPD meta-analyses, assess the methodological quality of the sampled IPD meta-analyses, and suggest areas for improvement in future IPD meta-analyses.

Methods

Eligibility criteria

An IPD meta-analysis was considered eligible for our study if it was included in a systematic review published in English before September 2019. We developed a practical criterion based on the definition of systematic review adopted in the Cochrane handbook.1 A systematic review had to provide eligibility criteria for study inclusion and conduct systematic literature searches in at least two databases. IPD meta-analyses of randomised controlled trials (considered the best source of evidence on intervention effect estimation) were considered eligible regardless of the clinical area studied.20 21 To be considered an IPD meta-analysis, data should have been obtained for quantitative synthesis either from the authors of randomised controlled trials or through other strategies such as data extraction from published trials. We excluded IPD meta-analyses that summarised evidence from non-randomised controlled trials, quasi-randomised controlled trials, observational studies, diagnostic studies, prognostic studies, and economic evaluations, along with publications that focused on methodological issues with IPD, conference abstracts, and protocols. When an IPD meta-analysis was duplicated (ie, published in different journals) or one or more versions of the same IPD meta-analysis existed, we selected the most recent version; the others were used as supplementary documents for data extraction and critical appraisal.

Literature search

Using keywords related to IPD, we searched Medline, Embase, and the Cochrane Database of Systematic Reviews from inception to 30 August 2019. Specialised search filters for systematic reviews were adopted in Medline and Embase using the Ovid platform.22 23 Our search strategies were based on a recent publication by Nevitt and colleagues, and we also extracted and screened citations included in that research.4 Appendix 1 provides details of our literature search strategies.

Literature selection and data extraction

Citations retrieved from the databases and from Nevitt and colleagues’ paper4 were screened and selected according to the eligibility criteria. We used a predeveloped and piloted data extraction form (see supplementary appendix 2) to retrieve data on basic characteristics (eg, year of publication) and other information, such as the IPD retrieval rate, from each included IPD meta-analysis. Two researchers (from among HW, YL, JA, and YC) independently selected the literature and extracted data. Discrepancies were resolved by discussion and consensus or by referring to the original publications.

Methodological quality assessment

We are unaware of a specific tool for assessing the methodological quality of IPD meta-analyses on intervention effects. As such, we synthesised items from two widely accepted criteria: AMSTAR-2 and Tierney and colleagues’ guidance.2 7 When the two criteria overlapped, we adopted the AMSTAR-2 item if it captured the methodological components of IPD meta-analyses; otherwise we chose the item from Tierney and colleagues’ guidance. All the non-overlapping items from AMSTAR-2 or Tierney and colleagues’ guidance were included, as they captured either the general methodological components of a systematic review or the specific methodological components of IPD meta-analysis. We also referred to other publications on the methodological quality of IPD meta-analyses.24 25 26 A total of 22 items were included, 15 of which were adopted from AMSTAR-2; among these, six were considered critical (table 1).
Table 1

Comparison of AMSTAR-2 and Tierney and colleagues’ criteria for assessing the methodological quality of IPD meta-analyses and criteria used in the current study

AMSTAR-2 [7] | Tierney and colleagues [2] | Wang et al (current study)
1. Did the research questions and inclusion criteria for the review include the components of PICO? | 1a. Does the IPD meta-analysis have a clear research question qualified by explicit eligibility criteria? | I1. A clear research question is a general item for all systematic reviews. Item 1 from AMSTAR-2 was adopted.
2. Did the report of the review contain an explicit statement that the review methods were established before conduct of the review, and did the report justify any significant deviations from the protocol? | 1c. Does it have a consistent approach to data collection? 1e. Are all the methods prespecified in a protocol? 1f. Has the protocol been registered or otherwise made available? 5. Were the analyses prespecified in detail? | I2. An a priori developed protocol is a general item for all systematic reviews. It is especially important for an IPD meta-analysis as it has the potential for a great number of analyses until desired results are obtained. Item 2 from AMSTAR-2 was adopted.
3. Did the review authors explain their selection of the study designs for inclusion in the review? | No related item | I3. Justification for inclusion of the study designs is a general item for all systematic reviews. Item 3 from AMSTAR-2 was adopted.
4. Did the review authors use a comprehensive literature search strategy? | 1b. Does the IPD meta-analysis have a systematic and comprehensive search strategy? 2a. Were fully published trials identified? 2b. Were trials published in the grey literature identified? 2c. Were unpublished trials identified? | I4. Comprehensive literature search is a general item for all systematic reviews. Item 4 from AMSTAR-2 was adopted. However, whether the IPD meta-analysis conducted searches within 24 months of completion of the review was excluded from the checklist, as IPD meta-analyses generally take longer than aggregate data meta-analyses.
5. Did the review authors perform study selection in duplicate? | No related item | I5. Duplicated study selection is a general item for all systematic reviews. Item 5 from AMSTAR-2 was adopted.
6. Did the review authors perform data extraction in duplicate? | No related item | I6. When basic characteristics and/or aggregate data are extracted from the included trials, duplicated data extraction will reduce manual mistakes and bias from subjective judgment. Item 6 from AMSTAR-2 was adopted for all IPD meta-analyses unless it was stated that no information was extracted from the publications.
7. Did the review authors provide a list of excluded studies and justify the exclusions? | No related item | I7. Providing a list of excluded studies with justifications will ensure transparency, and it is a general item for all systematic reviews. Item 7 from AMSTAR-2 was adopted.
8. Did the review authors describe the included studies in adequate detail? | No related item | I8. Describing details about the PICO related information of included studies is a pre-requirement for judging whether the studies are appropriately selected for the research question. It is a general item for all systematic reviews. Item 8 from AMSTAR-2 was adopted.
9. Did the review authors use a satisfactory technique for assessing the RoB in individual studies that were included in the review? | 1d. Does it assess the “quality” or RoB of included trials? 6a. Were randomisation, allocation concealment, and blinding assessed? 6b. Were the IPD checked to ensure all (or most) randomised participants were included? 6c. Were all relevant outcomes included? 6d. Was the quality of time-to-event outcome data checked? | I9. The RoB of included studies is the cornerstone for the quality of evidence generated from them. It is a general item for all systematic reviews. By combining the criteria from AMSTAR-2 and Tierney and colleagues, two questions were included: I9-1, item 9 from AMSTAR-2; I9-2, criterion 6d from Tierney and colleagues.
10. Did the review authors report on the sources of funding for the studies included in the review? | No related item | I10. It is well established that certain funding sources (eg, commercial funding) might introduce bias to the results. It is a general item for all systematic reviews. Item 10 from AMSTAR-2 was adopted.
11. If meta-analysis was performed, did the review authors use appropriate methods for statistical combination of results? | 7a-i. Did researchers stratify or account for clustering of participants within trials using either a one or two stage approach to meta-analysis? 7a-ii. Was the choice of one or two stage analysis specified in advance and/or results for both approaches provided? | I11. An IPD meta-analysis requires certain special statistical techniques; hence item 11 from AMSTAR-2 is not applicable. Criteria 7a-i and 7a-ii from Tierney and colleagues were adopted as I11-1 and I11-2, respectively.
12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis or other evidence synthesis? | No related item | I12. Accounting for RoB of included studies during evidence synthesis is a general item for all systematic reviews. Item 12 from AMSTAR-2 was adopted.
13. Did the review authors account for RoB in primary studies when interpreting/discussing the results of the review? | No related item | I13. Accounting for RoB of included studies when interpreting results is a general item for all systematic reviews. Item 13 from AMSTAR-2 was adopted.
14. Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review? | 7b-i. Did researchers compare treatment effects between subgroups of trials or use meta-regression to assess whether the overall treatment effect varied in relation to trial characteristics? | I14. Investigating possible sources of trial level heterogeneity is important for identifying those patients who have the best chance of benefiting from the intervention. It is a general item for all systematic reviews. Item 14 from AMSTAR-2 was adopted.
15. If they performed quantitative synthesis did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review? | 3b. Was an assessment of the potential impact of missing trials undertaken? | I15. Publication bias is a general item for all systematic reviews. Evidence suggested that many IPD meta-analyses neglected to examine or discuss publication bias. Item 15 from AMSTAR-2 was adopted.
16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review? | No related item | I16. Potential sources of conflict of interest, including funding sources, is a general item for all systematic reviews. The item from AMSTAR-2 was adopted.
No related item | 3a. Were IPD obtained from a large proportion of the eligible trials? 3c. Were the reasons for not obtaining IPD provided? | I17. This item is specific to IPD meta-analyses. Criteria 3a and 3c from Tierney and colleagues were adopted as I17-1 and I17-2, respectively. Furthermore, the suggestion on taking strategies to account for unavailable IPD from Riley and colleagues24 was adopted as I17-3.
No related item | 4a. Were the data checked for missing, invalid, out of range, or inconsistent items? 4b. Were there any discrepancies with the trial report (if available)? 4c. Were any issues queried and, if possible, resolved? | I18. These three items are specific to IPD meta-analyses. Criteria 4a, 4b, and 4c from Tierney and colleagues were adopted as I18-1, I18-2, and I18-3, respectively.
No related item | 7c. Were the methods of assessing whether effects of interventions vary by participant characteristics appropriate? | I19. This item is specific to IPD meta-analyses. Criterion 7c from Tierney and colleagues was adopted as I19-1. Furthermore, suggestions from the PRISMA-IPD statement26 and Fisher and colleagues25 were adopted as I19-2.
No related item | 7d. If there was no evidence of a differential effect by trial or participant characteristic, was emphasis placed on the overall results? | I20. This item is specific to IPD meta-analyses. The criterion from Tierney and colleagues was adopted. This item was assessed when variation in treatment effect was explored at either the trial or the participant characteristic level.
No related item | 7e. Were exploratory analyses highlighted as such? | I21. This item is specific to IPD meta-analyses. The criterion from Tierney and colleagues was adopted.
No related item | 8. Does any report of the results adhere to the PRISMA-IPD statement? | I22. This item is specific to IPD meta-analyses. The criterion from Tierney and colleagues was adopted and applied only to those IPD meta-analyses published after 2015 (when the PRISMA-IPD statement was published).

AMSTAR-2=A MeaSurement Tool to Assess systematic Reviews version 2; IPD=individual participant data; PICO=participants, intervention, comparison, and outcome; PRISMA-IPD=Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Individual Participant Data extension; RoB=risk of bias.

Supplementary appendix 3 provides detailed operational guidelines that were adopted from AMSTAR-2, Tierney and colleagues’ guidance, or consensus among coauthors based on related publications.24 25 26 We tested the operational guidelines with a random sample of five IPD meta-analyses and revised them accordingly. Two trained researchers (HW and YL) independently conducted the critical appraisal process. Disagreements were resolved by discussion and consensus. When agreement could not be reached, a senior researcher (IXYW) was consulted.

Data analysis

All collected data, including general information about the IPD meta-analyses and results of methodological quality assessments, are summarised descriptively. Basic characteristics of the IPD meta-analyses, detailed information, and critical appraisal results are presented as percentages with corresponding 95% confidence intervals, or medians with interquartile ranges or ranges, as appropriate. Based on AMSTAR-2 and Jüni and colleagues’ recommendations, we summarised the results for methodological quality assessments according to each item without generating an overall score.7 27 Compliance with each item is presented by year of publication to show the trends in methodological quality of the sampled IPD meta-analyses. IBM Statistical Package for Social Sciences (SPSS) 25 (IBM, Armonk, NY) was used for all data analyses.
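The 95% confidence intervals for proportions reported throughout the results can be reproduced with a standard interval for a binomial proportion. The paper states only that SPSS was used, so the Wilson score interval below is an illustrative assumption rather than the authors' documented method; the example uses the reported figure of 99 of 323 reviews establishing an a priori protocol (item I2).

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion.

    Assumption: the exact interval method used in the paper is not
    stated; Wilson is one common choice for proportions near 0 or 1.
    """
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 99 of 323 IPD meta-analyses established an a priori protocol
lo, hi = wilson_ci(99, 323)
print(f"{99/323:.0%} (95% CI {lo:.0%} to {hi:.0%})")  # → 31% (95% CI 26% to 36%)
```

This reproduces the "31%, 26% to 36%" reported in the abstract for item I2, suggesting the interval method is at least numerically compatible with the published figures.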

Patient and public involvement

No patients were involved in conceiving the research question, choosing the outcome measures, or designing and implementing the study because of insufficient training, covid-19 related restrictions, and time constraints.

Results

A total of 15 101 records were identified through database searches and reference lists (fig 1). Of these, 2197 remained after screening of titles and abstracts, and 1874 were then excluded during full text assessment. The top three reasons for exclusion were not being a systematic review (n=911), inclusion of non-randomised controlled trials (n=369), and being a conference abstract (n=295). In total, 323 IPD meta-analyses (see supplementary appendix 4) met the eligibility criteria and were included in this study.
Fig 1

Screening and selection process of individual participant data (IPD) meta-analyses. RCT=randomised controlled trials


Basic characteristics of IPD meta-analyses

The 323 IPD meta-analyses were published between 1991 and 2019 (median 2014; table 2) and covered 21 clinical areas according to ICD-11 (international classification of diseases, 11th revision) criteria (see supplementary appendix 5). The most studied conditions were neoplasia (n=67, 21%), diseases of the circulatory system (n=64, 20%), mental, behavioural, or neurodevelopmental disorders (n=31, 10%), and diseases of the nervous system (n=26, 8%). Most of the sampled IPD meta-analyses were non-Cochrane reviews (n=270, 84%), were published in journals with impact factors in the top quarter (n=269, 84%), were carried out by collaborative groups (n=281, 87%), and had a corresponding author from Europe (n=246, 76%). Among the 219 (68%) IPD meta-analyses with funding support, 155 (71%) were funded from Europe (table 2). The 323 IPD meta-analyses summarised evidence for drug interventions (n=199, 62%), non-drug interventions (n=112, 35%) (see supplementary appendix 6), or both (n=12, 4%).
Table 2

Characteristics of 323 included individual participant data (IPD) meta-analyses*

Characteristics | No of IPD meta-analyses | % (95% CI)
Cochrane review | 53 | 16 (12 to 20)
Non-Cochrane review | 270 | 84 (80 to 88)
Update of a previous review | 80 | 25 (20 to 30)
Median (range) publication year | 2014 (1991-2019)
Median (interquartile range) publication journal impact factor | 6 (4-13)†
Rank for journal impact factor (fourths): | 320 | 99 (98 to 100)†
 1st (top) | 269 | 84 (80 to 88)‡
 2nd | 34 | 11 (7 to 14)‡
 3rd | 11 | 3 (1 to 5)‡
 4th (bottom) | 6 | 2 (0.4 to 3)‡
Location of corresponding author:
 Europe | 246 | 76 (72 to 81)
 America | 47 | 15 (11 to 18)
 Asia | 16 | 5 (3 to 7)
 Oceania | 14 | 4 (2 to 7)
Authorship:
 Collaborative group | 281 | 87 (83 to 91)
 Individual authorship | 42 | 13 (9 to 17)
Type of funding:
 Non-commercial | 161 | 50 (44 to 55)
 Commercial | 38 | 12 (8 to 15)
 Mixed | 20 | 6 (4 to 9)
 No funding | 42 | 13 (9 to 17)
 Not reported | 62 | 19 (15 to 24)
Funding location: | 219 | 68 (63 to 73)
 Europe | 155 | 71 (65 to 77)§
 America | 30 | 14 (9 to 18)§
 Asia | 8 | 4 (1 to 6)§
 Oceania | 5 | 2 (0.3 to 4)§
 >1 location | 21 | 10 (6 to 14)§
Most studied conditions:
 Neoplasia | 67 | 21 (16 to 25)
 Diseases of the circulatory system | 64 | 20 (15 to 24)
 Mental, behavioural, or neurodevelopmental disorders | 31 | 10 (6 to 13)
 Diseases of the nervous system | 26 | 8 (5 to 11)
Type of intervention:
 Drug | 199 | 62 (56 to 67)
 Non-drug | 112 | 35 (30 to 40)
 Drug and non-drug | 12 | 4 (2 to 6)
IPD meta-analyses reported intervention harms: | 174 | 54 (48 to 59)
 Drug intervention | 123 | 62 (55 to 69)¶
 Non-drug intervention | 41 | 37 (28 to 46)¶
 Drug and non-drug | 10 | 83 (62 to 100)¶

* Data are number of IPD meta-analyses, percentage (95% confidence interval) unless stated otherwise.
† Three IPD meta-analyses were not published in a journal with an impact factor.
‡ Denominator is 320.
§ Denominator is 219.
¶ Percentages were calculated using total number of each category as denominator.


Performing and reporting of IPD meta-analyses

All IPD meta-analyses searched English databases, whereas only 17 (5%) searched non-English databases. The most popular methods for pooling data were a two stage approach (n=144, 45%), both one and two stage approaches (n=96, 30%), and a one stage approach (n=75, 24%). Three (1%) IPD meta-analyses combined data as a “mega” trial (table 3). Supplementary appendix 7 provides details of the methods used for IPD meta-analyses. Only around half (n=174, 54%) of the IPD meta-analyses reported on harms related to interventions, with more IPD meta-analyses on drug interventions (n=123, 62%) reporting harms than IPD meta-analyses on non-drug interventions (n=41, 37%) (table 2). Among the 310 IPD meta-analyses published in or after 2000 (a year after the publication of QUOROM (Quality Of Reporting Of Meta-analyses), the first reporting guideline for systematic reviews), only 91 (29%) mentioned following any reporting guidelines (table 3).
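The two stage approach described above — estimating each trial's effect from its own IPD, then combining those estimates as in a conventional aggregate data meta-analysis — can be sketched as follows. This is a minimal illustration, not the method of any included review: the trial data are hypothetical, the outcome is assumed continuous, and a simple fixed effect inverse-variance model is used for the second stage.

```python
from math import sqrt
from statistics import mean, stdev

def stage1(treatment, control):
    """Stage 1: mean difference and its variance from one trial's IPD."""
    md = mean(treatment) - mean(control)
    var = stdev(treatment) ** 2 / len(treatment) + stdev(control) ** 2 / len(control)
    return md, var

def stage2(estimates):
    """Stage 2: inverse-variance fixed effect pooling of (effect, variance) pairs,
    as in a traditional aggregate data meta-analysis."""
    weights = [1 / v for _, v in estimates]
    pooled = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    se = sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

trials = [  # hypothetical IPD (outcome values) from three small trials
    ([5.1, 6.0, 5.5, 6.2], [4.0, 4.5, 4.2, 4.8]),
    ([5.8, 6.1, 5.9, 6.4, 6.0], [4.9, 5.2, 5.0, 5.5]),
    ([5.0, 5.6, 5.3], [4.4, 4.1, 4.6]),
]
pooled, ci = stage2([stage1(t, c) for t, c in trials])
print(f"pooled mean difference {pooled:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```

A one stage approach would instead fit a single model (eg, a mixed effects regression stratified by trial) to all participants at once; treating all participants as one "mega" trial, as three included reviews did, ignores clustering within trials entirely.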
Table 3

Details on performing and reporting of the 323 included individual participant data (IPD) meta-analyses*

Characteristics | No of IPD meta-analyses | % (95% CI)
Language of databases searched:
 English | 323 | 100
 Non-English | 17 | 5 (3 to 8)
Proportion of IPD retrieved from eligible RCTs (%):
 100 | 90 | 31 (26 to 36)†
 80-99 | 67 | 23 (18 to 28)†
 50-79 | 98 | 34 (28 to 39)†
 <50 | 37 | 13 (9 to 16)†
Proportion of IPD retrieved from eligible participants (%):
 100 | 79 | 34 (28 to 40)‡
 80-99 | 81 | 35 (29 to 41)‡
 50-79 | 50 | 22 (16 to 27)‡
 <50 | 20 | 9 (5 to 12)‡
Median No (range) of RCTs included in IPD meta-analyses | 7 (2-287)§
Median No (range) of RCTs included in systematic reviews | 11 (2-287)
Median No (range) of participants included in IPD meta-analyses | 1940 (49-212 000)¶
Median No (range) of participants included in systematic reviews | 2422 (49-212 000)**
Median proportion (range) of IPD retrieved from eligible participants (%) | 93 (8-100)††
Median proportion (range) of IPD retrieved from RCTs (%) | 81 (8-100)§
Eligibility criteria based on language of publication:
 Language criteria not reported | 138 | 43 (37 to 48)
 English and non-English | 126 | 39 (34 to 44)
 English publications only | 59 | 18 (14 to 22)
Tools used for RoB assessment of included RCTs:
 Cochrane RoB tool | 145 | 45 (39 to 50)
 Jadad scale | 12 | 4 (2 to 6)
 Other‡‡ | 13 | 4 (2 to 6)
 Not assessed | 97 | 30 (25 to 35)
 Not clear which tool was used | 56 | 17 (13 to 22)
Followed any reporting guidelines for systematic reviews or IPD meta-analyses: | 91 | 29 (24 to 34)§§
 QUOROM (1999) | 4 | 4 (0.1 to 9)¶¶
 PRISMA (2009) | 46 | 50 (40 to 61)¶¶
 PRISMA-IPD (2015) | 36 | 40 (29 to 50)¶¶
 Both PRISMA (2009) and PRISMA-IPD (2015) | 5 | 6 (1 to 10)¶¶
Methods used to combine IPD: | 318 | 98 (97 to 100)***
 Two stage approach | 144 | 45 (40 to 51)†††
 One stage approach | 75 | 24 (19 to 28)†††
 One and two stage approach | 96 | 30 (25 to 35)†††
 A “mega” trial | 3 | 1 (0 to 2)†††

RCT=randomised controlled trial; QUOROM=Quality Of Reporting Of Meta-analyses; PRISMA=Preferred Reporting Items for Systematic Reviews and Meta-analysis; PRISMA-IPD=Preferred Reporting Items for Systematic Reviews and Meta-Analyses of individual participant data; RoB=risk of bias.

* Values are numbers of IPD meta-analyses, percentage (95% confidence interval) unless stated otherwise.
† Denominator is 292.
‡ Denominator is 230.
§ 31 did not report number of randomised controlled trials included in systematic reviews.
¶ 1 did not report number of participants included in IPD meta-analysis.
** 92 did not report number of participants included in systematic reviews.
†† 93 did not report number of eligible participants.
‡‡ Included tool for the assessment of study quality and reporting in exercise (n=3, 1%), Delphi list (n=2, 1%), Chalmer scale (n=2, 1%), Effective Public Health Practice Project Quality Assessment Tool (n=1, 0.3%), Jüni (n=1, 0.3%), Pedro scale (n=1, 0.3%), Method for Evaluating Research and Guideline Evidence (MERGE) criteria (n=1, 0.3%), Consolidated Standards of Reporting Trials (CONSORT) statement (n=1, 0.3%), and Cochrane RoB plus Pedro scale (n=1, 0.3%).
§§ Denominator is 310 IPD meta-analyses published in or after 2000.
¶¶ 91 IPD meta-analyses followed one or more reporting guidelines.
*** 5 IPD meta-analyses did not report the methods used to combine IPD.
††† 318 IPD meta-analyses reported the methods used to combine IPD.


IPD retrieval rate

Systematic reviews included a median of 11 (range 2-287) randomised controlled trials, whereas IPD were obtained from a median of seven (range 2-287) trials; the median proportion of included trials providing IPD was 81% (range 8-100%). A total of 90 (31%) IPD meta-analyses obtained IPD from 100% of the included randomised controlled trials, whereas 67 (23%), 98 (34%), and 37 (13%) obtained IPD from 80-99%, 50-79%, and less than 50% of included trials, respectively. Among the 230 IPD meta-analyses providing information on IPD retrieval at the participant level, 79 (34%) obtained IPD from 100% of eligible participants, whereas 81 (35%), 50 (22%), and 20 (9%) obtained IPD from 80-99%, 50-79%, and less than 50% of eligible participants, respectively (table 3).

Methodological quality

The methodological quality of the 323 sampled IPD meta-analyses was generally unsatisfactory either on general items for systematic reviews (table 4) or on items specific to IPD meta-analyses (table 5). However, improvements were seen over time in most of the methodological items, especially in pre-establishing protocol and data analysis plan, accounting for risk of bias and publication bias (fig 2 and supplementary appendix 8).
Table 4

Results on general methodological items of the sampled 323 individual participant data (IPD) meta-analyses

Methodological item | Yes: No (%, 95% CI) | Partially: No (%, 95% CI) | No: No (%, 95% CI)
I1. Did the research questions and inclusion criteria for the review include the components of PICO? | 274 (85%, 81 to 89) | NA | 49 (15%, 11 to 19)
I2. Did the report of the review contain an explicit statement that the review methods were established before conduct of the review and did the report justify any significant deviations from the protocol?* | 99 (31%, 26 to 36) | 109 (34%, 29 to 39) | 115 (36%, 30 to 41)
I3. Did the review authors explain their selection of the study designs for inclusion in the review? | 34 (10%, 7 to 14) | NA | 289 (90%, 86 to 93)
I4. Did the review authors use a comprehensive literature search strategy?* | 61 (19%, 15 to 23) | 201 (62%, 57 to 68) | 61 (19%, 15 to 23)
I5. Did the review authors perform study selection in duplicate? | 153 (47%, 42 to 53) | NA | 170 (53%, 47 to 58)
I6. Did the review authors perform data extraction in duplicate?† | 71 (22%, 17 to 26) | NA | 167 (52%, 46 to 57)
I7. Did the review authors provide a list of excluded studies and justify the exclusions?* | 104 (32%, 27 to 37) | 2 (1%, 0 to 2) | 217 (67%, 62 to 72)
I8. Did the review authors describe the included studies in adequate detail? | 138 (43%, 37 to 48) | 181 (56%, 51 to 62) | 4 (1%, 0 to 2)
I9-1. Did the review authors use a satisfactory technique for assessing RoB in individual studies that were included in the review?* | 139 (43%, 38 to 48) | 67 (21%, 16 to 25) | 117 (36%, 31 to 42)
I10. Did the review authors report on the sources of funding for the studies included in the review? | 57 (18%, 14 to 22) | NA | 266 (82%, 78 to 86)
I12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis or other evidence synthesis? | 107 (33%, 28 to 38) | NA | 216 (67%, 62 to 72)
I13. Did the review authors account for RoB in primary studies when interpreting or discussing the results of the review?* | 128 (40%, 34 to 45) | NA | 195 (60%, 55 to 66)
I14. Did the review authors provide a satisfactory explanation for, and discussion of, any heterogeneity observed in the results of the review? | 262 (81%, 77 to 85) | NA | 61 (19%, 15 to 23)
I15. If they performed quantitative synthesis did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?* | 99 (31%, 26 to 36) | NA | 224 (69%, 64 to 74)
I16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review? | 297 (92%, 89 to 95) | NA | 26 (8%, 5 to 11)

NA=not applicable; PICO=population, intervention, comparator, and outcome; RoB=risk of bias.

*Critical item in AMSTAR-2.

†Data in this item do not add up to 323 because 85 IPD meta-analyses were not applicable to the item.

Table 5

Results on specific methodological items of the sampled 323 individual participant data (IPD) meta-analyses

Methodological item | Yes: No (%, 95% CI) | Partially: No (%, 95% CI) | No: No (%, 95% CI)
I9-2. Was the quality of time-to-event outcome data checked?* | 62 (19%, 15 to 24) | NA | 123 (38%, 33 to 43)
I11-1. Did researchers stratify or account for clustering of participants within trials using either a one or two stage approach to meta-analysis? | 315 (98%, 96 to 99) | NA | 8 (2%, 1 to 4)
I11-2. Was the choice of one or two stage analysis specified in advance or results for both approaches provided, or both? | 143 (44%, 39 to 50) | NA | 180 (56%, 50 to 61)
I17-1. Were IPD obtained from a large proportion of the eligible trials?† | 166 (51%, 46 to 57) | NA | 126 (39%, 34 to 44)
I17-2. Were the reasons for not obtaining IPD provided?* | 60‡ (48%, 39 to 56) | NA | 66‡ (52%, 44 to 61)
I17-3. Were there any strategies taken to account for unavailable IPD?* | 21‡ (17%, 10 to 23) | 51‡ (40%, 32 to 49) | 54‡ (43%, 34 to 52)
I18-1. Were the data checked for missing, invalid, out of range, or inconsistent items? | 180 (56%, 50 to 61) | NA | 143 (44%, 39 to 50)
I18-2. Did the author check any discrepancies with the trial report (if available)? | 180 (56%, 50 to 61) | NA | 143 (44%, 39 to 50)
I18-3. Were any issues queried and, if possible, resolved?* | 179 (55%, 50 to 61) | NA | 33 (10%, 7 to 14)
I19-1. Were the methods of assessing whether effects of interventions vary by participant characteristics appropriate?* | 228 (71%, 66 to 76) | NA | 57 (18%, 14 to 22)
I19-2. Was the choice of participant level characteristics and methods of assessing participant level interactions specified in advance?* | 101 (31%, 26 to 36) | 25 (8%, 5 to 11) | 159 (49%, 44 to 55)
I20. If there was no evidence of a differential effect by trial or participant characteristic, was emphasis placed on the overall results?* | 113 (35%, 30 to 40) | NA | 4 (1%, 0 to 2)
I21. Were exploratory analyses highlighted as such?§ | 154 (48%, 42 to 53) | NA | 16 (5%, 3 to 7)
I22. Does any report of the results adhere to the PRISMA-IPD?* | 41¶ (32%, 24 to 40) | NA | 86¶ (68%, 60 to 76)

NA=not applicable; PRISMA-IPD=the Preferred Reporting Items for Systematic Reviews and Meta-analysis for Individual Participants Data extension.

*Data in this item do not add up to 323 because some IPD meta-analyses were not applicable to the item.

†Data in this item do not add up to 323 because 31 (10%, 6% to 13%) IPD meta-analyses did not report the related information.

‡Denominator is 126.

§Data in this item do not add up to 323 because 153 (47%, 42% to 53%) IPD meta-analyses did not report the related information.

¶Denominator is 127.

Fig 2

The methodological quality on six selected items of the 323 sampled individual participant data meta-analyses over time. Item 2: Did the report of the review contain an explicit statement that the review methods were established before conduct of the review and did the report justify any significant deviations from the protocol? Item 9-1: Did the review authors use a satisfactory technique for assessing the risk of bias in individual studies that were included in the review? Item 11-2: Was the choice of one stage or two stage analysis specified in advance or results for both approaches provided? Item 12: If meta-analysis was performed, did the review authors assess the potential impact of risk of bias in individual studies on the results of the meta-analysis or other evidence synthesis? Item 13: Did the review authors account for risk of bias in primary studies when interpreting or discussing the results of the review? Item 15: If they performed quantitative synthesis did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?


Critical appraisal results on general items

Table 4 provides results for the critical appraisal on general items. The sampled IPD meta-analyses showed more than 80% compliance in three items: stating conflicts of interest (92%, 95% confidence interval 89% to 95%), including PICO (population, intervention, comparator, and outcome) components in the research question and inclusion criteria (85%, 81% to 89%), and explaining the observed heterogeneity (81%, 77% to 85%). None of these, however, is a critical item in AMSTAR-2. The sampled IPD meta-analyses showed unsatisfactory performance for the six critical items in AMSTAR-2 that were applicable to IPD meta-analyses. Only 43% (38% to 48%) of IPD meta-analyses used a satisfactory technique for assessing the risk of bias of included randomised controlled trials (table 4). Ninety-seven IPD meta-analyses (30%, 25% to 35%) did not perform any critical appraisal of the included randomised controlled trials, and 56 (17%, 13% to 22%) did not report the tool used for critical appraisal (table 3). The sampled IPD meta-analyses showed no more than 40% compliance for the remaining five critical items: accounting for risk of bias when interpreting results (40%, 34% to 45%), providing a list of excluded studies with justifications (32%, 27% to 37%), establishing an a priori protocol and justifying any deviations (31%, 26% to 36%), assessing and considering the potential for publication bias (31%, 26% to 36%), and conducting a comprehensive literature search (19%, 15% to 23%; table 4). The five non-critical items in AMSTAR-2 that were applicable to IPD meta-analyses all showed less than 50% compliance. Two items had no more than 20% of sampled IPD meta-analyses rated as yes: explaining the selection of the study design (10%, 7% to 14%) and reporting sources of funding for the included randomised controlled trials (18%, 14% to 22%).
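The compliance figures above are reported as proportions with 95% confidence intervals. As a hedged illustration (the paper does not state which interval method was used, so the normal approximation below is an assumption), the interval for the risk of bias item, where 139 of 323 reviews were rated yes, can be reproduced approximately:

```python
from math import sqrt

def prop_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)          # standard error of the proportion
    return p, p - z * se, p + z * se

# 139 of 323 IPD meta-analyses used a satisfactory RoB technique
p, lo, hi = prop_ci(139, 323)
print(f"{p:.0%} ({lo:.0%} to {hi:.0%})")  # 43% (38% to 48%)
```

The same calculation reproduces the other item-level intervals in tables 4 and 5 to within rounding.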

Critical appraisal results on items specific to IPD meta-analyses

Except for stratifying or accounting for clustering of participants within trials (98%, 96% to 99%) and using appropriate methods to assess whether effects of interventions varied by participant characteristics (71%, 66% to 76%), the performance of the sampled IPD meta-analyses on specific items was generally unsatisfactory, with less than 60% showing compliance with each item. A relatively low proportion of the sampled IPD meta-analyses prespecified methods either for assessing the overall effects (44%, 39% to 50%) or for assessing participant-intervention interactions (31%, 26% to 36%). Furthermore, only 48% (42% to 53%) of the sampled IPD meta-analyses labelled their analyses as prespecified or post hoc, and in 5% (3% to 7%) exploratory analyses were not labelled as such; the remaining 47% (42% to 53%) did not provide the related information (table 5). Up to 126 (39%, 34% to 44%) IPD meta-analyses failed to obtain IPD from 90% or more of eligible participants or trials, or both. Among them, only 60 (48%, 39% to 56%) provided reasons for not obtaining IPD, and 21 (17%, 10% to 23%) undertook strategies to account for the unavailable IPD. Only 56% (50% to 61%) of IPD meta-analyses checked for missing, invalid, out of range, or inconsistent items, and only 55% (50% to 61%) contacted trial authors for clarification. Among the 127 IPD meta-analyses published after 2015, only 41 (32%, 24% to 40%) reported that the PRISMA-IPD (Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Individual Participant Data extension) was followed (table 5).
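Items I11-1 and I11-2 concern whether clustering of participants within trials was handled through a one stage or a two stage approach. As a minimal sketch of the latter (the per-trial estimates below are invented for illustration, and a fixed effect model is assumed), the second stage pools trial-level effect estimates by inverse-variance weighting, so clustering is respected because each estimate comes from within a single trial:

```python
from math import sqrt

# Stage 1 (not shown): estimate a treatment effect and its standard error
# within each trial separately.
# Stage 2: pool the per-trial estimates with inverse-variance weights.
# Hypothetical per-trial (log odds ratio, standard error) pairs:
trial_effects = [(-0.42, 0.21), (-0.18, 0.15), (-0.30, 0.25)]

weights = [1 / se ** 2 for _, se in trial_effects]
pooled = sum(w * est for (est, _), w in zip(trial_effects, weights)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))
print(f"pooled effect {pooled:.3f} (SE {pooled_se:.3f})")
```

A one stage approach would instead fit a single (eg, mixed effects) model to all participant-level rows with a trial identifier, which is where the clustering and model-assumption pitfalls discussed later arise.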

Discussion

This study identified and appraised 323 IPD meta-analyses of randomised controlled trials that focused on intervention effects. This sample of IPD meta-analyses (to August 2019) covered 21 different clinical areas and comprised Cochrane and non-Cochrane reviews as well as drug and non-drug interventions. According to our criteria, the methodological quality of the sampled IPD meta-analyses was far from satisfactory. Future IPD meta-analyses need to give much more consideration both to general methodological components of systematic reviews (eg, establishing an a priori protocol, using a comprehensive literature search strategy, assessing the risk of bias of included randomised controlled trials with a satisfactory approach and accounting for it when interpreting results, and addressing potential publication bias) and to components specific to IPD meta-analyses (eg, prespecifying the methods used for data analyses and labelling exploratory analyses, providing reasons for not obtaining IPD and adopting strategies to account for unavailable IPD, checking data integrity, and clarifying uncertainties where needed). Future IPD meta-analyses should also justify the study designs selected for inclusion and report funding information for the included randomised controlled trials, as less than 20% of the sampled IPD meta-analyses complied with these two items.

A priori protocol and exploratory analyses

A predeveloped protocol helps to increase objectivity and reduce bias in systematic reviews.7 28 A prespecified protocol with a detailed plan for data analyses is especially important for an IPD meta-analysis, because the raw data collected from randomised controlled trials enable reviewers to perform many analyses, posing the risk of the data being repeatedly interrogated until desired results are obtained.1 However, only 31% of the sampled IPD meta-analyses established an a priori protocol and justified important deviations from it. Conducting exploratory analyses to identify potential effect modifiers, at either the trial or the participant level, is nevertheless a recognised advantage of IPD meta-analyses.1 Exploratory analysis is therefore not prohibited in IPD meta-analyses: it enables the collection of extra information on certain subgroups of participants who might benefit more from the intervention, and thereby contributes to better clinical decision making. Nonetheless, appropriate interpretation of the results of IPD meta-analyses requires full presentation of all the analyses, with exploratory analyses labelled as such. It has been suggested that future IPD meta-analyses should predevelop the research protocol, register it in PROSPERO or the Cochrane Library, and label exploratory analyses as such.2 26 29 30

Literature searches and publication bias

The importance of a comprehensive literature search is well established in systematic reviews.1 Theoretically, IPD meta-analyses have an advantage in comprehensively identifying the literature, especially unpublished randomised controlled trials, through collaboration with multiple research groups and consultation with trialists.1 However, we found that only 19% of our sampled IPD meta-analyses fulfilled the revised criterion in AMSTAR-2 for a comprehensive literature search. Evidence suggests that studies with positive results have a higher probability of being published in English journals,31 yet only 5% of the sampled IPD meta-analyses searched non-English databases and only 39% considered non-English publications in their eligibility criteria. IPD meta-analyses therefore might not identify a representative sample of randomised controlled trials. Although the impact of a non-representative sample of randomised controlled trials on the effect estimates of IPD meta-analyses is unpredictable and depends on the research topic, publication bias can potentially affect the results and conclusions.8 More than two thirds of the sampled IPD meta-analyses in our study did not fulfil the methodological item related to investigation of publication bias. Future researchers should focus on reducing the risk of publication bias through a comprehensive literature search and should deal with its potential impact on the results of IPD meta-analyses.
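One common way to address the publication bias item is a funnel plot asymmetry test. As a rough sketch only (the trial data are invented, and this simplified Egger-style check regresses the standardised effect on precision and inspects the intercept, without the formal significance test), such an investigation might look like:

```python
# Egger-style regression sketch: regress standardised effect (est/se) on
# precision (1/se); an intercept far from zero suggests small-study
# effects and possible publication bias. Data below are hypothetical
# (log odds ratio, standard error) pairs, with smaller trials showing
# larger effects to illustrate asymmetry.
effects = [(-0.8, 0.40), (-0.5, 0.30), (-0.35, 0.22), (-0.25, 0.15), (-0.20, 0.10)]

x = [1 / se for _, se in effects]        # precision
y = [est / se for est, se in effects]    # standardised effect
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx              # Egger intercept
print(f"Egger intercept: {intercept:.2f}")
```

In practice such tests have low power with few trials, which is one reason the item asks authors to discuss the likely impact of publication bias rather than rely on a single statistic.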

Risk of bias of included trials

One recognised advantage of IPD meta-analyses is the ability to contact trial investigators when assessing risk of bias to determine the validity of the results. Nonetheless, only 43% of the sampled IPD meta-analyses used a satisfactory technique to assess the risk of bias of included randomised controlled trials.7 The reliability of the evidence derived from an IPD meta-analysis depends on the validity of the included primary studies.32 33 Accounting for the risk of bias of included trials during data synthesis, and discussing its potential impact when interpreting the results, therefore helps evidence users to judge how much confidence to place in the findings. Furthermore, data synthesis stratified by risk of bias provides results restricted to trials at low risk of bias, which also facilitates evidence based decision making. However, the sampled IPD meta-analyses in our study performed unsatisfactorily in assessing risk of bias and in accounting for it during data synthesis and interpretation of results. Future researchers should deal with these methodological shortcomings when conducting IPD meta-analyses. The updated Cochrane risk of bias tool (RoB 2) can be an optimal choice for the critical appraisal of included randomised controlled trials.1

IPD retrieval

Maximising IPD retrieval is generally considered to have the potential to reduce selection bias and is likely to provide more reliable results.2 8 It is not always easy to obtain IPD from all eligible randomised controlled trials or participants; hence, 90% IPD retrieval, which was used as the cut-off for a large proportion of IPD retrieval in this study, has been proposed as an acceptable target.34 Nonetheless, whether unavailable IPD introduces bias might relate not only to the retrieval proportion but also to other factors, such as whether the unavailable IPD are associated with the direction of the effect estimate and whether sufficient power has been reached with the available IPD. These factors were also considered in the criteria used in this study. Although it is not always possible to obtain IPD from 90% or more of eligible participants or trials, authors can at least provide reasons and undertake strategies such as combining aggregate data with IPD in sensitivity analyses and comparing trials that provided IPD with those that did not.24 Such practices were not, however, common among the sampled IPD meta-analyses that did not obtain IPD from 90% or more of eligible participants or trials. Future studies should address these methodological flaws.

Strengths and limitations of this review

This study has several strengths. Firstly, we assessed the methodological quality of IPD meta-analyses using both general methodological items for systematic reviews and items specific to IPD meta-analyses. Secondly, we did not restrict the type of disease or intervention during the sampling process, so the sampled IPD meta-analyses covered a wide range of clinical areas and interventions. Thirdly, the performance on each individual methodological item was reported in detail to show where improvement is required in future studies. Several limitations are worth mentioning. Firstly, no critical appraisal tool specific to IPD meta-analysis exists. In this study, we combined criteria from AMSTAR-2, Tierney and colleagues' guidance, and other related publications.2 24 25 26 This enabled us to capture the general methodological components of systematic reviews as well as those specific to IPD meta-analysis.35 Secondly, the operational guideline for the criteria used in our study was developed by adopting rules from AMSTAR-2 together with group discussion and consensus. We did not collect external experts' opinions, nor did we conduct a formal validation process. However, we provided detailed assessment rules for each methodological item to facilitate judgments and strictly followed them. An IPD meta-analysis specific version of the AMSTAR tool is needed in the future. Thirdly, we only considered whether the methodological items were executed, without further inspection of how well they were achieved. For example, for the risk of bias assessment, we only assessed whether a satisfactory technique was used; using such a technique is not the same as assessing the risk of bias appropriately, which was beyond the scope of this study. Likewise, information on the statistical methods used in the IPD meta-analyses was based solely on the descriptions in the publications.
In our study we did not investigate further whether the statistical methods were applied correctly. Two studies indicated that the use of one stage methods was substandard in many IPD meta-analyses.15 36 Further assessments are warranted on whether clustering was correctly accounted for when the one stage method was used, whether within trial interactions were appropriately separated from across trial interactions to reduce ecological bias when investigating effect modifiers, and whether model assumptions (eg, choice of random or fixed effects) were properly checked.13 14 37 38 In addition, the absence of an assessment of publication bias does not necessarily mean that publication bias exists; the authors of IPD meta-analyses are nevertheless asked to provide the related information to facilitate evidence based decision making, otherwise evidence users need to reassess publication bias themselves.8 Fourthly, owing to limited resources and time, we only sampled and critically appraised IPD meta-analyses that included randomised controlled trials on intervention effects. Conclusions from this study might not apply to IPD meta-analyses including non-randomised controlled trials or to IPD meta-analyses on diagnosis, prognosis, or causes of disease. Further studies are needed to assess the methodological quality of IPD meta-analyses in these research areas. Fifthly, a common drawback of this type of study is that the critical appraisal process relies solely on what the publications report.9 Hence, some of the results might reflect reporting quality rather than methodological quality, especially for the IPD meta-analyses published early.
The release of PRISMA-IPD might help improve reporting and facilitate the critical appraisal of future IPD meta-analyses.26 Comprehensively assessing the reporting of the sampled IPD meta-analyses is beyond the scope of this study, but it has been covered previously.39 Finally, we focused on methodological quality, which is distinct from the quality of the evidence derived from the IPD meta-analyses. The latter is affected not only by the methodological quality of the IPD meta-analyses but also by features of the primary studies, such as the risk of bias and the precision of the effect estimates.20

Comparisons with similar studies

We did not identify any study that comprehensively assessed the methodological quality of IPD meta-analyses. Compared with aggregate data meta-analyses, the sampled IPD meta-analyses showed better performance in synthesising data with an appropriate method, conducting a comprehensive literature search, and stating the conflicts of interest of the review.9 10 11 12 However, they showed lower compliance in conducting literature selection and data extraction in duplicate, providing adequate details about included randomised controlled trials, assessing the risk of bias with a satisfactory technique and accounting for it during data analyses and interpretation of results, and investigating and discussing publication bias.9 10 11 12 Discrepancy in the sampling time frames (the past 10 years for aggregate data meta-analyses versus no time restriction for IPD meta-analyses) could have contributed to the observed differences. Reporting might be another reason: given the word limits of traditional journals, authors of IPD meta-analyses may prioritise presenting the IPD specific details, so some of the lower compliance might reflect a lack of reporting (eg, on literature selection and data extraction). The online appendices policy recently adopted by many journals and the release of PRISMA-IPD can improve reporting and facilitate the critical appraisal of future IPD meta-analyses. This might have contributed to the trends of improvement in several items observed in this study.
Ahmed and colleagues evaluated publication bias, selection bias, and unavailable data in 31 IPD meta-analyses of randomised controlled trials published between 2007 and 2009.8 As in our study, they found unsatisfactory performance in comprehensive literature searching (29% v 19%), unsatisfactory consideration of publication bias (32% v 31%), and a low proportion of meta-analyses collecting IPD from 90% or more of eligible participants or trials (52% v 51%).8 We did, however, find trends of improvement in these items, although room for improvement remains. Compared with the study by Ahmed and colleagues, our study used a much larger sample and comprehensively assessed the methodological quality of the included IPD meta-analyses.8

Implications for clinical practice and future research

Compared with aggregate data meta-analysis, the distinguishing features of IPD meta-analysis have made it an ideal approach for systematic reviews.3 It also has a direct impact on healthcare practice and guideline development.5 However, the results of this study, together with previous related studies, indicate that IPD meta-analyses are not necessarily free from bias.8 15 16 Clinicians and guideline developers should therefore assess the methodological quality of IPD meta-analyses before making use of the evidence. Researchers should follow the Cochrane Handbook as well as other guidelines for conducting and reporting IPD meta-analyses to ensure the quality of the resulting IPD meta-analysis.1 2 24 26 An extension of AMSTAR-2 specifically for IPD meta-analysis is needed.

Conclusions

The methodological quality of IPD meta-analyses is unsatisfactory, both on general items for systematic reviews and on items specific to IPD meta-analyses. Much effort is needed in future IPD meta-analyses to establish an a priori protocol, prespecify the methods used for data analyses, label exploratory analyses, search the literature comprehensively, use an adequate approach to assess the risk of bias of included randomised controlled trials and account for it in data analyses and interpretation of results, investigate and discuss publication bias, provide reasons for not obtaining IPD and adopt strategies to account for the unavailable IPD, check data integrity, and clarify uncertainties. The Cochrane Handbook as well as other methodological guidelines and the PRISMA-IPD statement could be used as references for future IPD meta-analyses.1 2 24 26 It is suggested that the rigour of IPD meta-analyses should be assessed before their results are considered.

What is already known on this topic

Individual participant data (IPD) meta-analysis is regarded as the ideal approach for providing evidence on intervention effect estimation but is susceptible to bias from methodological flaws

Evidence has shown that published IPD meta-analyses have been conducted to inconsistent standards

Despite the increasing number of published IPD meta-analyses, their methodological quality has not been comprehensively evaluated

What this study adds

This study found that the methodological quality of IPD meta-analyses was unsatisfactory

IPD meta-analyses showed poor performance on general methodological items, especially in assessing and accounting for the risk of bias of included trials, establishing an a priori protocol, assessing and considering the potential for publication bias, and conducting a comprehensive literature search

IPD meta-analyses showed poor performance on IPD specific methodological items, especially in prespecifying methods for data synthesis, checking data integrity, and undertaking strategies to account for unavailable IPD