Literature DB >> 35213097

Quality Reporting of Systematic Review and Meta-Analysis According to PRISMA 2020 Guidelines: Results from Recently Published Papers in the Korean Journal of Radiology.

Ho Young Park1, Chong Hyun Suh2, Sungmin Woo3, Pyeong Hwa Kim1, Kyung Won Kim1.   

Abstract

OBJECTIVE: To evaluate the completeness of the reporting of systematic reviews and meta-analyses published in a general radiology journal using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines.
MATERIALS AND METHODS: Twenty-four articles (systematic review and meta-analysis, n = 18; systematic review only, n = 6) published between August 2009 and September 2021 in the Korean Journal of Radiology were analyzed. Completeness of the reporting of the main texts and abstracts was evaluated using the PRISMA 2020 statement. For each item in the statement, the proportion of studies that met the guidelines' recommendation was calculated, and items that were satisfied by fewer than 80% of the studies were identified. The review process was conducted by two independent reviewers.
RESULTS: Of the 42 items (including sub-items) in the PRISMA 2020 statement for main text, 24 were satisfied by fewer than 80% of the included articles. The 24 items were grouped into eight domains: 1) assessment of the eligibility of potential articles, 2) assessment of the risk of bias, 3) synthesis of results, 4) additional analysis of study heterogeneity, 5) assessment of non-reporting bias, 6) assessment of the certainty of evidence, 7) provision of limitations of the study, and 8) additional information, such as protocol registration. Of the 12 items in the abstract checklists, eight were incorporated in fewer than 80% of the included publications.
CONCLUSION: Several items included in the PRISMA 2020 checklist were overlooked in systematic review and meta-analysis articles published in the Korean Journal of Radiology. Based on these results, we suggest a double-check list for improving the quality of systematic reviews and meta-analyses. Authors and reviewers should familiarize themselves with the PRISMA 2020 statement and check whether the recommended items are fully satisfied prior to publication.
Copyright © 2022 The Korean Society of Radiology.

Keywords:  Meta-analysis; PRISMA 2020; Reporting quality; Systematic review

Year:  2022        PMID: 35213097      PMCID: PMC8876652          DOI: 10.3348/kjr.2021.0808

Source DB:  PubMed          Journal:  Korean J Radiol        ISSN: 1229-6929            Impact factor:   3.500


INTRODUCTION

Systematic reviews and meta-analyses have several strengths over individual studies because they provide estimated outcomes with higher precision, address questions that cannot be asked in individual studies, and provide evidence-based guidance from conflicting results [1]. As a result, an increasing number of systematic reviews and meta-analyses are published every year in various medical fields [2]. Accordingly, the quality of reporting has been emphasized in systematic reviews and meta-analyses in order to provide clarity and transparency regarding study conduct procedures [3]. This is particularly important for systematic reviews and meta-analyses because the synthesized results are influenced by the results from individual studies and therefore can be misleading if the individual results are biased [4]. In 2009, the first Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was published with the aim of improving the quality of reporting [3]. Since then, methodological approaches, such as result synthesis and risk of bias assessment, have advanced, thereby necessitating an update of the guidelines; thus, an updated version of the PRISMA statement was published in 2020 [5]. Despite the publication of the PRISMA statement, the quality of systematic reviews and meta-analyses still varies between individual articles and journals [6]. Moreover, there has been only a modest improvement in the quality of reporting in radiology articles since the publication of the PRISMA 2009 statement, suggesting that there is still room for further improvement [6]. To the best of our knowledge, the number of studies in the field of radiology that evaluated the quality of reporting in systematic reviews and meta-analyses using the PRISMA 2020 statement has been limited. Therefore, the goal of our study was to assess the reporting quality of recent publications in the Korean Journal of Radiology using the PRISMA 2020 statement. Based on this assessment, we aimed to provide suggestions for authors on how to improve the quality of their reports.

MATERIALS AND METHODS

Search Strategy and Study Selection

Using the MEDLINE database, we identified all potentially relevant systematic reviews, with or without meta-analysis, published in a single peer-reviewed journal, the Korean Journal of Radiology, between August 2009 and September 2021. Because the first PRISMA statement was published in July 2009 [3], we did not include studies published earlier than that date. The search terms were (“Korean Journal of Radiology”[Journal]) AND ((systematic review) OR (meta-analysis)). A total of 31 records (i.e., abstracts and titles) were retrieved from the MEDLINE database and two reviewers evaluated the eligibility of each article. Among them, two records [7,8] were removed before screening because they were published before 2009. Three records [9-11] were excluded during screening because they were guidelines for systematic reviews and meta-analyses. As a result, full texts from 26 publications were retrieved and assessed for eligibility, two of which [12,13] were excluded because they were not systematic reviews. Finally, 24 publications were included in our analysis (Fig. 1) [14-37].
Fig. 1

Study selection process using the PRISMA 2020 flow diagram.

Data Extraction

Data extraction from the included studies was performed by two reviewers (with 2 and 8 years of experience in systematic review and meta-analysis studies, respectively), as shown in detail in the Supplement.

Major Changes in the PRISMA 2020 Statement

Several changes have been made in the PRISMA 2020 statement compared with the PRISMA 2009 statement. Although the number of main items in the checklists was unchanged (27 items), numerous sub-items were added (42 items in total, including sub-items) to provide more comprehensive guidelines. In addition, checklists for the abstracts (12 items) were included in the guidelines. Table 1 shows a brief summary of the major updates made in 2020 [5]. Two reviewers reviewed the checklists and reached a consensus for each item, as detailed in the Supplement.
Table 1

Summary of the Major Updates in the PRISMA 2020 Statement

Major Updates
• Inclusion of checklists for abstract (item #2)
• Requiring full search strategies for all databases (previously, a full search strategy for at least one database) (item #7)
• Emphasis on the study selection process and data extraction, requiring authors to report how many reviewers evaluated study eligibility and extracted data, and whether they worked independently (item #8). In addition, a recommendation for authors to cite studies that seemed to meet the inclusion criteria but were excluded in the final stage and to explain the reason for exclusion (item #16b)
• Detailed specification on result synthesis methods, providing subitems regarding data handling, visual data presentation (e.g., forest plot), statistical methods for pooling results and rationale for choosing one, methods to explore study heterogeneity, and sensitivity analysis used to evaluate robustness of the pooled results (items #13a-13f and #20a-20d)
• Addition of new items regarding the assessment of certainty of the evidence for an outcome (items #15 and #20)
• Emphasis on study registration and protocol (items #24a-c)

Data Analysis

We extracted the PRISMA 2020 checklist items that were satisfied by fewer than 80% of the articles and grouped them into eight relevant domains. Suggestions for better quality of systematic reviews and meta-analyses were provided based on these domains. Evaluation of the adherence of the included articles to the PRISMA 2020 statement is described in the Supplement.
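As an illustration of this threshold analysis, the sketch below (our own illustration, not code used in the study; the item labels and counts are examples drawn from Table 3) computes each item's satisfaction proportion over the articles to which it applies and flags those below 80%:

```python
# Illustrative sketch: flag PRISMA 2020 items satisfied by fewer than 80%
# of the applicable articles. Counts here are examples taken from Table 3.
THRESHOLD = 0.80

def flag_underreported(item_counts, n_applicable):
    """Return {item: proportion} for items below THRESHOLD.

    item_counts: item id -> number of articles satisfying the item
    n_applicable: item id -> number of articles the item applies to
    """
    flagged = {}
    for item, satisfied in item_counts.items():
        proportion = satisfied / n_applicable[item]
        if proportion < THRESHOLD:
            flagged[item] = round(proportion, 2)
    return flagged

counts = {"#7 search strategy": 20, "#8 selection process": 17, "#15 certainty": 2}
applicable = {"#7 search strategy": 24, "#8 selection process": 24, "#15 certainty": 22}
print(flag_underreported(counts, applicable))
# {'#8 selection process': 0.71, '#15 certainty': 0.09}
```

Note that the denominator varies per item because some items were not applicable to every article (e.g., item #15 applied to only 22 of the 24 articles).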

RESULTS

Characteristics of the Included Studies

The characteristics of the 24 included studies are summarized in Figure 2 and Supplementary Table 1. Briefly, 18 studies (14 univariate and four bivariate) were systematic reviews with meta-analyses [14,15,19-29,31-35] and six studies were systematic reviews without meta-analyses [16-18,30,36,37]. In terms of the type of data used for analyses, 13 studies used dichotomous data to measure the following outcomes: 1) efficacy or safety of an intervention (proportion of tumor response, recurrence, or treatment-related complications), 2) efficacy of a diagnostic test (proportion of technical failure and unreliable measurement), 3) imaging features in a certain disease (proportion of specific imaging findings), 4) evaluation of study quality or reporting quality (proportion of studies that met the specific criteria), and 5) diagnostic yield [14,15,18,19,22-24,28,32-36]; six studies used time-to-event data to calculate the efficacy of a new intervention or the reliability between overall survival and imaging surrogate markers [15,22,31-34]; six studies used diagnostic test data to pool the diagnostic performance of index tests [16,25-27,29,37]; two studies used continuous data to evaluate the agreement and reliability of measurements between imaging methods [20,21]; one study used descriptive data from imaging protocols in randomized controlled trials of acute ischemic stroke [30]; and one study used qualitative and quantitative data to assess the health-related quality-of-life in patients with hepatocellular carcinoma [17]. The number of included studies ranged from 4 to 516, with the majority (83%, 20 out of 24) of the articles including more than 10 studies. The statistical methods used in the included articles are summarized in Supplementary Table 2.
Fig. 2

Summary charts of the included studies according to (A) study type, (B) data type, (C) main outcome, and (D) number of included studies.

“Other” in (B) data type was descriptive data regarding imaging protocols. “Others” in (C) main outcome were study or reporting quality, HRQoL score, imaging finding, imaging protocol, and diagnostic yield. DTA = diagnostic test accuracy, HRQoL = health-related quality of life

Assessment Using PRISMA 2020 Checklists

Overall Results

Each item in the PRISMA 2020 checklist and abstract checklist was evaluated for the included articles (Tables 2, 3). Of the 12 items in the abstract checklist, eight were reported in fewer than 80% of the articles (Fig. 3). To generate abstracts with better quality, exclusion criteria, risk of bias assessment tools, statistical methods, and limitations of evidence should be included.
Table 2

PRISMA 2020 Checklist for Abstract and the Number of Reported Articles in the Korean Journal of Radiology Since 2015

Section and Topic | Item # | Checklist Item | Number of Reported Articles (n/n, %)
TITLE
Title | 1 | Identify the report as a systematic review | 21/24 (88)
BACKGROUND
Objectives | 2 | Provide an explicit statement of the main objective(s) or question(s) the review addresses | 22/24 (92)
METHODS
Eligibility criteria | 3 | Specify the inclusion and exclusion criteria for the review | 0/24 (0)
Information sources | 4 | Specify the information sources (e.g., databases, registers) used to identify studies and the date when each was last searched | 17/24 (71)
Risk of bias | 5 | Specify the methods used to assess the risk of bias in the included studies | 3/23 (13)*
Synthesis of results | 6 | Specify the methods used to present and synthesize results | 8/20 (40)†
RESULTS
Included studies | 7 | Give the total number of included studies and participants and summarise relevant characteristics of studies | 13/24 (54)
Synthesis of results | 8 | Present results for main outcomes, preferably indicating the number of included studies and participants for each. If meta-analysis was done, report the summary estimate and confidence/credible interval. If comparing groups, indicate the direction of the effect (i.e., which group is favored) | 22/24 (92)
DISCUSSION
Limitations of evidence | 9 | Provide a brief summary of the limitations of the evidence included in the review (e.g., study risk of bias, inconsistency, and imprecision) | 4/24 (17)
Interpretation | 10 | Provide a general interpretation of the results and important implications | 24/24 (100)
OTHER
Funding | 11 | Specify the primary source of funding for the review | 12/24 (50)
Registration | 12 | Provide the register name and registration number | 0/24 (0)

*One study was not applicable for item #5 [18], †Four studies were not applicable for item #6 [17,18,30,36].

Table 3

PRISMA 2020 Checklist and the Number of Reported Articles in the Korean Journal of Radiology Since 2015

Section and Topic | Item # | Checklist Item | Number of Reported Articles (n/n, %)
TITLE
Title | 1 | Identify the report as a systematic review | 22/24 (92)
ABSTRACT
Abstract | 2 | See the PRISMA 2020 for Abstracts checklist | -
INTRODUCTION
Rationale | 3 | Describe the rationale for the review in the context of existing knowledge | 24/24 (100)
Objectives | 4 | Provide an explicit statement of the objective(s) or question(s) the review addresses | 24/24 (100)
METHODS
Eligibility criteria | 5 | Specify the inclusion and exclusion criteria for the review and how studies were grouped for the syntheses | 23/24 (96)
Information sources | 6 | Specify all databases, registers, websites, organizations, reference lists, and other sources searched or consulted to identify studies. Specify the date when each source was last searched or consulted | 24/24 (100)
Search strategy | 7 | Present the full search strategies for all databases, registers, and websites, including any filters and limits used | 20/24 (83)
Selection process | 8 | Specify the methods used to decide whether a study met the inclusion criteria of the review, including how many reviewers screened each record and each report retrieved, whether they worked independently, and if applicable, details of automation tools used in the process | 17/24 (71)
Data collection process | 9 | Specify the methods used to collect data from reports, including how many reviewers collected data from each report, whether they worked independently, any processes for obtaining or confirming data from study investigators, and if applicable, details of automation tools used in the process | 21/24 (88)
Data items | 10a | List and define all outcomes for which data were sought. Specify whether all results that were compatible with each outcome domain in each study were sought (e.g., for all measures, time points, analyses), and if not, the methods used to decide which results to collect | 23/24 (96)
 | 10b | List and define all other variables for which data were sought (e.g., participant and intervention characteristics, funding sources). Describe any assumptions made about any missing or unclear information | 23/24 (96)
Study risk of bias assessment | 11 | Specify the methods used to assess the risk of bias in the included studies, including details of the tool(s) used, how many reviewers assessed each study and whether they worked independently, and if applicable, details of automation tools used in the process | 17/22 (77)*
Effect measures | 12 | Specify for each outcome the effect measure(s) (e.g., risk ratio, mean difference) used in the synthesis or presentation of results | 23/24 (96)
Synthesis methods | 13a | Describe the processes used to decide which studies were eligible for each synthesis (e.g., tabulating the study intervention characteristics and comparing against the planned groups for each synthesis [item #5]) | 17/22 (77)*
 | 13b | Describe any methods required to prepare the data for presentation or synthesis, such as handling of missing summary statistics, or data conversions | 10/20 (50)*†
 | 13c | Describe any methods used to tabulate or visually display the results of individual studies and syntheses | 8/20 (40)*†
 | 13d | Describe any methods used to synthesize results and provide a rationale for the choice(s). If meta-analysis was performed, describe the model(s), method(s) to identify the presence and extent of statistical heterogeneity, and software package(s) used | 7/20 (35)*†
 | 13e | Describe any methods used to explore possible causes of heterogeneity among study results (e.g., subgroup analysis, meta-regression) | 13/19 (68)*†‡
 | 13f | Describe any sensitivity analyses conducted to assess the robustness of the synthesized results | 5/18 (28)*†§
Reporting bias assessment | 14 | Describe any methods used to assess the risk of bias due to missing results in a synthesis (arising from reporting biases) | 13/20 (65)*†
Certainty assessment | 15 | Describe any methods used to assess certainty (or confidence) in the body of evidence for an outcome | 2/22 (9)*
RESULTS
Study selection | 16a | Describe the results of the search and selection process, from the number of records identified in the search to the number of studies included in the review, ideally using a flow diagram | 23/24 (96)
 | 16b | Cite studies that might appear to meet the inclusion criteria, but which were excluded, and explain why they were excluded | 6/24 (25)
Study characteristics | 17 | Cite each included study and present its characteristics | 21/22 (96)*
Risk of bias in studies | 18 | Present assessments of the risk of bias for each included study | 7/22 (32)*
Results of individual studies | 19 | For all outcomes, present, for each study: 1) summary statistics for each group (where appropriate) and 2) an effect estimate and its precision (e.g., confidence/credible interval), ideally using structured tables or plots | 22/22 (100)*
Results of syntheses | 20a | For each synthesis, briefly summarise the characteristics and risk of bias among contributing studies | 0/20 (0)*
 | 20b | Present results of all statistical syntheses conducted. If meta-analysis was done, present for each the summary estimate and its precision (e.g., confidence/credible interval) and measures of statistical heterogeneity. If comparing groups, describe the direction of the effect | 20/20 (100)*†
 | 20c | Present results of all investigations of possible causes of heterogeneity among study results | 14/19 (74)*†‡
 | 20d | Present results of all sensitivity analyses conducted to assess the robustness of the synthesized results | 5/18 (28)*†§
Reporting biases | 21 | Present assessments of risk of bias due to missing results (arising from reporting biases) for each synthesis assessed | 13/20 (65)*†
Certainty of evidence | 22 | Present assessments of certainty (or confidence) in the body of evidence for each outcome assessed | 2/22 (9)*
DISCUSSION
Discussion | 23a | Provide a general interpretation of the results in the context of other evidence | 24/24 (100)
 | 23b | Discuss any limitations of the evidence included in the review | 24/24 (100)
 | 23c | Discuss any limitations of the review processes used | 14/24 (58)
 | 23d | Discuss implications of the results for practice, policy, and future research | 23/24 (96)
OTHER INFORMATION
Registration and protocol | 24a | Provide registration information for the review, including register name and registration number, or state that the review was not registered | 0/24 (0)
 | 24b | Indicate where the review protocol can be accessed, or state that a protocol was not prepared | 0/24 (0)
 | 24c | Describe and explain any amendments to the information provided at registration or in the protocol | 0/24 (0)
Support | 25 | Describe sources of financial or non-financial support for the review, and the role of the funders or sponsors in the review | 13/24 (54)
Competing interests | 26 | Declare any competing interests of review authors | 15/24 (63)
Availability of data, code and other materials | 27 | Report which of the following are publicly available and where they can be found: template data collection forms; data extracted from included studies; data used for all analyses; analytic code; any other materials used in the review | 0/24 (0)

*Kim et al. [18] and Kang et al. [36] were not applicable for items #11, #13–15, and #17–22 because these studies only calculated the proportions of articles that met certain criteria and no statistical method or modeling was used, †Kang et al. [17] and Suh et al. [30] were not applicable for items #13b–13f, #14, and #20–21 because quantitative synthesis was not performed in these studies, ‡Wang et al. [33] was not applicable to items #13e and #20c because the study heterogeneity was minimal (supported by Cochran’s Q-test and the Higgins inconsistency index [I2] test), thereby not requiring an additional analysis, §Chung et al. [16] and Kim et al. [37] were not applicable for items #13f and #20d because meta-analysis was not performed in these studies and only narrative syntheses of results (i.e., providing ranges of values) were available.

Fig. 3

Bar chart of the proportion of articles that satisfied each item of the PRISMA 2020 abstract checklist.

Blue bars indicate the items that were satisfied by fewer than 80% of the studies. Gray bars indicate the items that were satisfied by greater than 80% of the studies.

Of the 42 items (including sub-items) included in the guidelines for the main text, 24 were reported in fewer than 80% of the articles. While most studies satisfied the items in the Title, Introduction, and Discussion, incomplete reports were frequently observed in the Methods and the Results, especially in result synthesis (Fig. 4). The 24 items were grouped into eight domains for further exploration: 1) assessment of the eligibility of potential articles (items #8, #16b), 2) assessment of the risk of bias (items #11, #18), 3) synthesis of results (items #13a, #13b, #13c, #13d, #20a), 4) additional analysis (items #13e, #13f, #20c, #20d), 5) assessment of the non-reporting bias (items #14, #21), 6) assessment of the certainty of evidence (items #15, #22), 7) provision of limitations of the study (item #23c), and 8) additional information (items #24–#27).
Fig. 4

Bar chart of the proportion of articles that satisfied each item of the PRISMA 2020 checklist.

Blue bars indicate the items that were satisfied by fewer than 80% of the studies. Gray bars indicate the items that were satisfied by greater than 80% of the studies.

Assessment of the Eligibility of Potential Articles (Items #8, #16b)

Seven articles [16,24,27,30-32,36] did not report how many reviewers participated in the evaluation of study eligibility or whether they worked independently (item #8). Eighteen articles [16-29,32,34,36,37] did not cite the studies that seemed to meet the inclusion criteria but were excluded in the final stage, or did not explain the reason for exclusion (item #16b). The PRISMA 2020 guidelines emphasize transparency in the study selection process. In addition, the newly added item #16b requires authors to provide the reasons for exclusion of potentially eligible studies [5].

Assessment of the Risk of Bias (Items #11, #18)

Four articles [24,29-31] did not evaluate the risk of bias in the included studies, and one article [33] did not report how many reviewers assessed the risk of bias. Various assessment tools were used in the remaining articles. For randomized controlled trials (RCTs), the Risk of Bias (RoB) tool or the revised Jadad scale was implemented [32-34]. For non-RCTs, Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 [16,19-21,25,26,28], QUADAS [37], the Risk of Bias Assessment tool for Non-randomized Studies [14,35], Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) [17], the Newcastle-Ottawa Scale [15,32,34], and the National Institutes of Health (NIH) assessment tool [22,23] were used. Among the articles that evaluated the risk of bias, only seven provided the full assessment results of the individual studies [17,19,22,23,26,33,34]. Evaluation of the risk of bias in studies is essential for authors to understand result synthesis or to search for possible heterogeneity among the included studies, as well as for readers to evaluate the transparency of pooled results [5]. We advise authors to provide a visual representation of the assessment results for each study, rather than only the overall results across studies. Among the various assessment tools, the Cochrane guidelines [1] recommend RoB, ROBINS-I, and QUADAS-2 as the preferred methods for assessing RCTs, non-RCTs on interventions, and diagnostic test accuracy (DTA) studies, respectively. Although there is no universally accepted tool for the evaluation of observational studies without interventions, the Newcastle-Ottawa Scale, the NIH assessment tool, and the Joanna Briggs Institute critical appraisal checklists may be suitable options [38].

Synthesis of Results (Items #13a, #13b, #13c, #13d, #20a)

Among the articles that reported multiple pooled results, five [17,20-22,35] did not clearly report which studies were included for each outcome synthesis (item #13a). Ten articles [14,21,24-26,28,29,32,34,37] did not report how missing data were handled or how the data were converted for result synthesis (item #13b). Twelve articles [14,15,19,22-24,27,32-35,37] did not mention the methods used for the visual representation of the results from individual studies and syntheses, although forest plots were presented in the Results in all but one article [37] (item #13c). Thirteen articles with meta-analysis [14,15,19-23,25-28,31,35] did not report the rationale for choosing a specific statistical model (e.g., fixed- vs. random-effects model) (item #13d). Three studies [32-35] selected fixed- or random-effects models based on statistical values of study heterogeneity. Two studies [19,20] used the DerSimonian-Laird random-effects model for pooling rare events (e.g., complication rate). In the Results section, none of the studies reported a brief summary of the study characteristics and the risk of bias for each synthesis (item #20a). The PRISMA 2020 checklist has elaborated the “synthesis of results” item to provide a more comprehensive evaluation regarding data preparation (item #13b), data visualization (item #13c), and statistical methods used for result synthesis (item #13d) [5]. When multiple results are pooled, authors are advised to cite the studies and report the number of included studies for each outcome analysis (item #13a). When a meta-analysis is performed, authors should explain the rationale for choosing a statistical model. The choice between fixed- and random-effects models should not be based on statistical tests for heterogeneity (i.e., Cochran’s Q-test or the Higgins inconsistency index test) [1]; rather, it depends on the authors’ judgment of whether the effect sizes are truly identical between studies [1]. Therefore, the random-effects model is recommended when there is heterogeneity in study designs, which is very common when performing meta-analyses in the field of radiology. Currently, the Cochrane guideline does not recommend a consensus method for result synthesis [1]. However, inverse-variance methods (including the DerSimonian-Laird method) should be avoided in meta-analyses of rare events [39]. Because these methods are based on the assumption of a normal distribution of effect sizes, significant bias in pooled results may occur in meta-analyses of rare events [39,40]. In such cases, other methods such as the Peto method, the Mantel-Haenszel method without zero-cell corrections, or generalized linear mixed models are preferred, although there is no generally accepted optimal method for dealing with rare events [39,41,42]. When reporting syntheses of multiple effect sizes, authors should consider within-study covariance (i.e., correlation between outcomes). However, none of the articles included in this study considered within-study covariance despite the evident risk of correlation (e.g., pooling overall survival at multiple time points) [15,20-22,31,32,34]. The potential risks of correlation in these studies are summarized in Supplementary Table 3. When multiple effect sizes are synthesized from data from the same participants, statistical dependency may occur and produce erroneous standard errors in the pooled results [43]. Suggestions for managing within-study covariance are provided in the Supplement.
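To make the fixed- vs. random-effects distinction concrete, the following sketch (our own illustration, not code from any of the reviewed articles) implements inverse-variance fixed-effect pooling and the DerSimonian-Laird random-effects estimate, together with Cochran's Q and the Higgins I2 statistic. As cautioned above, this inverse-variance approach is a poor choice for meta-analyses of rare events:

```python
import math

def dersimonian_laird(effects, variances):
    """Fixed-effect (inverse-variance) and DerSimonian-Laird random-effects
    pooling. effects are per-study effect sizes (e.g., log odds ratios) and
    variances their within-study variances.
    Returns (fixed, random, tau2, Q, I2_percent, se_random)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C)                    # between-study variance
    I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    random = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se_random = math.sqrt(1.0 / sum(w_star))
    return fixed, random, tau2, Q, I2, se_random
```

With homogeneous studies, tau2 shrinks to zero and the two models coincide; even so, per the guidance above, the choice between them should rest on whether a common true effect is plausible, not on the observed Q or I2 values.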

Additional Analysis (Items #13e, #13f, #20c, #20d)

Five articles [16,29,32,35,37] did not perform subgroup analysis or meta-regression, and one article [34] did not mention the method used to explore study heterogeneity in the Materials and Methods section, although a subgroup analysis was provided in the Results section (items #13e, #20c). Thirteen articles [14,15,20-23,25,28,29,31,32,34,35] did not perform sensitivity analysis (items #13f, #20d). The PRISMA 2020 guidelines require that authors perform subgroup analysis or meta-regression to evaluate the source of study heterogeneity, and sensitivity analysis to assess the robustness of the synthesized results.
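A leave-one-out analysis is one common form of sensitivity analysis: re-pool the results omitting each study in turn and check whether any single study drives the conclusion. A minimal sketch of our own (using simple inverse-variance fixed-effect pooling for brevity):

```python
def pool_fixed(effects, variances):
    """Inverse-variance fixed-effect pooled estimate."""
    w = [1.0 / v for v in variances]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

def leave_one_out(effects, variances):
    """Re-pool after omitting each study in turn. Large shifts in the
    pooled estimate flag influential studies. Returns a list of
    (omitted_index, pooled_estimate) pairs."""
    results = []
    for i in range(len(effects)):
        e = effects[:i] + effects[i + 1:]
        v = variances[:i] + variances[i + 1:]
        results.append((i, pool_fixed(e, v)))
    return results
```

For example, with effects [0.1, 0.1, 0.9] and equal variances, omitting the third study moves the pooled estimate markedly, flagging it as influential.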

Assessment of Non-Reporting Bias (Items #14, #21)

Seven articles [16,18,29,31,32,35,37] did not evaluate non-reporting bias. Among the 13 articles that used funnel plots, four [15,21,24,27] did not further explore the source of bias, although asymmetry was observed. Two articles [33,34] performed a statistical test for funnel plot asymmetry even though fewer than 10 studies were included. Non-reporting bias refers to the phenomenon in which the reporting of research findings is influenced by the p value and the magnitude or direction of the results [44]. Although non-reporting bias is a broad term encompassing publication bias, time-lag bias, and selective non-reporting bias, publication bias has long been the focus of interest [1]. Funnel plots and statistical tests for asymmetry are frequently performed to evaluate non-reporting bias; however, the test for asymmetry has low statistical power and thus should not be used when fewer than 10 studies are included [45,46]. Moreover, it should be noted that asymmetry in funnel plots is not always due to non-reporting bias [46]. As a result, a contour-enhanced funnel plot may be preferred, because it may indicate whether the asymmetry is due to non-reporting bias or other factors [47]. Other potential sources of asymmetry include poor methodological quality in small studies or true heterogeneity between studies [45]. For example, heterogeneity in the characteristics of the study population or in the implementation of an intervention between small and large studies may cause asymmetry in the funnel plots [46]. When asymmetry is observed, authors should search for its potential sources.
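Egger's regression is the statistical asymmetry test most often paired with a funnel plot: regress each study's standardized effect on its precision and test whether the intercept departs from zero. A self-contained sketch of our own (as cautioned above, such tests should not be applied when fewer than 10 studies are available, so the small inputs below are purely illustrative):

```python
import math

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses the standardized effect (effect/SE) on precision (1/SE);
    an intercept far from zero suggests small-study effects.
    Returns (intercept, intercept_se, t_statistic)."""
    n = len(effects)
    if n < 3:
        raise ValueError("need at least 3 studies to fit the regression")
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)   # residual variance
    se_intercept = math.sqrt(s2 * (1.0 / n + xbar ** 2 / sxx))
    t = intercept / se_intercept if se_intercept > 0 else float("inf")
    return intercept, se_intercept, t
```

When every study estimates the same effect regardless of its standard error, the intercept is zero; small-study effects pull it away from zero.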

Assessment of the Certainty of Evidence (Items #15, #22)

Only two studies [24,33] used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to evaluate the quality of evidence. The PRISMA 2020 guidelines include new items regarding the certainty of evidence for pooled results [5]. To evaluate the certainty of evidence, Cochrane has adopted the GRADE approach [1], which is composed of five domains: risk of bias, inconsistency, indirectness, imprecision, and publication bias [48]. By incorporating the evaluation of each domain, the final assessment of the pooled results is classified into one of four categories: high, moderate, low, or very low quality. The certainty of evidence should be evaluated for each outcome, because the level of certainty often varies between outcomes [49]. Lastly, a “summary of findings” table should be presented, including the outcome of interest and its pooled result as well as the quality of evidence. Although the GRADE approach was first developed to evaluate studies on therapeutic interventions, it can be applied to DTA studies as well [50,51]. Supplementary Table 4 is an example of a “summary of findings” table, which may be produced using the GRADEpro GDT software (www.gradepro.org).
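The GRADE logic of starting high and stepping down one level per serious concern can be sketched as follows (a deliberately simplified illustration of our own; real GRADE judgments are qualitative, and evidence can also be rated up, which is omitted here):

```python
# Simplified GRADE-style downgrading sketch (illustrative only).
LEVELS = ["very low", "low", "moderate", "high"]
DOMAINS = ("risk_of_bias", "inconsistency", "indirectness",
           "imprecision", "publication_bias")

def grade_certainty(concerns):
    """concerns: dict mapping a GRADE domain to 0 (no concern),
    1 (serious), or 2 (very serious) downgrades. Evidence from RCTs
    starts at 'high'; each downgrade moves one level lower."""
    downgrades = sum(concerns.get(d, 0) for d in DOMAINS)
    index = max(0, len(LEVELS) - 1 - downgrades)
    return LEVELS[index]
```

For example, serious concerns in both risk of bias and imprecision would downgrade RCT evidence from "high" to "low" for that outcome.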

Provision of Limitations of the Study (Item #23c)

The PRISMA 2020 guidelines require that authors describe the limitations of not only the evidence but also the review process. Of all the included articles, 58% (14 out of 24) [16,17,19,20,22,24,26,27,30,32-34,36,37] described the limitations of the review process in the Discussion section, which included: 1) limitation of search terms: “because we found HRQoL studies using the search term ‘quality of life,’ we might have missed studies using other terminology” [17], 2) limitation in the study selection process: “we included studies that were available only in the abstract form, and the reported data may not be as accurate and complete as those reported in the corresponding full text publication” [33], 3) limitations in data extraction: “there were limitations in extracting the exact survival data from the study regarding censored subjects and how these might affect the results” [22], and 4) inability to perform a planned analysis due to lack of data: “because of the lack of sufficient data, we were unable to perform subgroup analyses to compare the effect of TACE plus RFA and surgical resection” [32].

Additional Information (Items #24–#27)

None of the studies reported the registration information (item #24) or which data in the review were publicly available to the readers (item #27). Eleven studies [14,16,17,23,24,28,29,32,33,34,35] did not report any financial or non-financial support (item #25), and nine studies [15,18,22,27,32,33,34,36,37] did not declare any competing interests of the authors (item #26). The PRISMA 2020 guidelines require that authors provide registration information for the review (item #24a), a statement regarding the accessibility of the registered protocol (item #24b), and any amendments made to the protocol (item #24c) [5]. PROSPERO is a database that authors can use to register their protocols [52]. Registering the protocol before conducting the systematic review enables readers to evaluate whether the article properly followed the protocol and to identify any differences between the pre-specified and finally reported information [5]. If the protocol was not registered, this should be stated, and we suggest that authors discuss the potential limitations of not doing so. In addition, authors should report any financial or non-financial support received during the study, as well as their competing interests. Public sharing of the data used in the review is encouraged but not widely performed in medical research [53]. Currently, there are several public data sharing platforms, such as the Open Science Framework (https://osf.io) and the Systematic Review Data Repository (https://www.ahrq.gov/cpi/about/otherwebsites/srdr.ahrq.gov/index.html).

DISCUSSION

Our study demonstrated that a substantial number of published systematic reviews, with or without meta-analysis, required further improvements to satisfy the PRISMA 2020 guidelines. These areas for improvement could be divided into eight domains for which thorough explanations and suggestions can be made: 1) assessment of the eligibility of potential articles, 2) assessment of the risk of bias, 3) synthesis of results, 4) additional analysis to explain study heterogeneity, 5) assessment of non-reporting bias, 6) assessment of the certainty of evidence, 7) provision of the limitations of the study, and 8) additional information such as protocol registration. In addition, for better quality abstracts, authors should report the exclusion criteria, the assessment tool for the risk of bias, the statistical methods, and the limitations of the evidence. Based on our results, we developed a double-check list consisting of the items in the PRISMA 2020 guidelines that had been frequently missed in published articles (Table 4). In the checklist, we made specific suggestions for each domain and provided further comments regarding the errors in statistical analyses identified in some published articles (e.g., determining the fixed- vs. random-effects model based on statistical values of study heterogeneity). To help authors properly utilize the statistical models and assessment tools, we summarized the recommended methods in Table 5. These recommended methods are mainly based on the Cochrane Handbook for Systematic Reviews of Interventions and previous guideline articles for DTA studies [1,9,11,54,55,56].
Table 4

Checklist for Writing Good Quality Systematic Review and Meta-Analysis

For each checklist item below, mark ✓ or X.

Domain I. Assessing the eligibility of potential articles (Items #8, #16b)
1. Report how many reviewers participated in article screening and whether they worked independently.
2. Cite studies that seemed to meet the inclusion criteria but were excluded at the final stage, and explain the reasons for exclusion.

Domain II. Assessing risk of bias in the studies (Items #11, #18)
3. Report which method was used for evaluating the risk of bias.*
4. Present results of the risk-of-bias assessment for each included study.

Domain III. Synthesis of results (Items #13a, #13b, #13c, #13d, #20a)
5. When performing syntheses of multiple outcomes, cite and report the number of studies included for each outcome synthesis.
6. Consider within-study covariance when reporting syntheses of multiple outcomes (i.e., correlation between outcomes).†
7. Report how missing data were handled or how the data were converted for synthesis.
8. Report methods used for visual display of results of individual studies and syntheses (e.g., forest plot).
9. Report the rationale for choosing the statistical model.
- Choosing between fixed- and random-effects models should not be based on statistical values of study heterogeneity (i.e., Cochran’s Q test or the Higgins inconsistency index test).
- Check whether the reported outcome is a rare event; in the case of rare events, the DerSimonian-Laird method should be avoided.‡
10. For each synthesis, present a brief summary of the characteristics and risk of bias of the contributing studies.

Domain IV. Additional analysis (Items #13e, #13f, #20c, #20d)
11. Present methods and results of additional analyses (i.e., subgroup analysis, meta-regression) to explain study heterogeneity.
12. Present a sensitivity analysis to assess the robustness of the synthesized results.

Domain V. Assessing non-reporting bias, i.e., publication bias (Items #14, #21)
13. Report the methods used for evaluation of non-reporting bias (e.g., funnel plot).§
- At least 10 studies are needed to interpret a funnel plot.

Domain VI. Assessing the certainty of evidence (Items #15, #22)
14. Report the certainty of evidence using the GRADE approach.

Domain VII. Providing the limitations of the study (Item #23c)
15. Provide the limitations of the review process as well as the limitations of the evidence.

Domain VIII. Additional information (Items #24–#27)
16. Provide registration information for the review.
- Registry name (PROSPERO) and registration number.
- If unregistered, state that the review was not registered and indicate this as a potential limitation.
17. Report funding sources and any competing interests.

*Based on the Cochrane guidelines, RoB 2, ROBINS-I, and QUADAS-2 are the preferred tools for assessing RCTs, non-randomized studies of interventions, and DTA studies, respectively. †If one assumes a significant correlation between outcomes, a multivariate meta-analysis may be preferred over multiple independent univariate meta-analyses. There are a few other ways to deal with correlation between outcomes; however, these are beyond the scope of this paper. Refer to López-López et al. for more information [43]. ‡The Cochrane Handbook for Systematic Reviews of Interventions does not suggest universally preferred methods for result synthesis. However, inverse-variance methods (including the DerSimonian-Laird method) should be avoided in meta-analyses of rare events (e.g., complication rates, technical failure rates) [39]. Regarding the meta-analysis of DTA, the bivariate model and the HSROC model are the recommended methods [10]. §It should be noted that asymmetry in a funnel plot is not always due to non-reporting bias. As a result, a contour-enhanced funnel plot may be preferred because it can suggest whether the asymmetry is due to non-reporting bias or other factors. DTA = diagnostic test accuracy, HSROC = hierarchical summary receiver operating characteristic, QUADAS = Quality Assessment of Diagnostic Accuracy Studies, RCT = randomized controlled trial, RoB = Risk of Bias in randomized trials, ROBINS-I = Risk of Bias in Non-randomized Studies of Interventions
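Checklist item 9 can be made concrete with a short computation. The sketch below, using hypothetical per-study data and plain Python, pools effect estimates with the DerSimonian-Laird random-effects method and reports Cochran's Q and the Higgins I² statistic; as the checklist warns, these heterogeneity statistics describe between-study variability but should not by themselves dictate the choice of model, and this method is unsuitable for rare-event outcomes.

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird method.
    `effects` are hypothetical per-study effect estimates (e.g., log
    odds ratios) and `variances` their within-study variances."""
    k = len(effects)
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    pooled_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect pool
    q = sum(wi * (yi - pooled_fe) ** 2 for wi, yi in zip(w, effects))
    # Higgins I^2 (%): share of variability beyond what chance would explain
    i2 = max(0.0, (q - (k - 1)) / q) * 100.0 if q > 0 else 0.0
    # Method-of-moments estimate of the between-study variance tau^2
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight each study by its total (within + between) variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled_re = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled_re, se, tau2, i2
```

In practice, authors would use an established meta-analysis package rather than hand-rolled code; the sketch only shows why reporting the rationale for the model matters, since tau² and I² fall directly out of the same arithmetic.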

Table 5

Recommended Statistical Methods for Meta-Analysis

| Method | Meta-Analysis of Usual Proportion | Meta-Analysis of Diagnostic Test Accuracy |
| --- | --- | --- |
| Result synthesis: fixed-effects model | Inverse variance method; Mantel-Haenszel method; Peto method | Not recommended |
| Result synthesis: random-effects model | DerSimonian-Laird method; REML method; Paule-Mandel method | Bivariate model; HSROC model |
| Non-reporting/publication bias assessment tool | Funnel plot; Begg’s test or Egger’s test | Deeks’ funnel plot; Deeks’ asymmetry test |
| Risk of bias assessment tool* | RCT: RoB 2 tool; non-RCT: ROBINS-I tool | QUADAS-2 tool |
| Evaluation of study heterogeneity | Chi-squared test (Cochran’s Q statistic); Higgins I² statistic | Chi-squared test (Cochran’s Q statistic); Higgins I² statistic; analysis of threshold effect (visual evaluation of the coupled forest plot; Spearman correlation analysis between sensitivity and specificity) |
| Additional analysis for study heterogeneity | Subgroup analysis or meta-regression; sensitivity analysis | Subgroup analysis or meta-regression; sensitivity analysis |
| Certainty of evidence evaluation | GRADE approach | GRADE approach |

*The Cochrane Handbook for Systematic Reviews of Interventions recommends the RoB 2 tool and the ROBINS-I tool as bias assessment tools for RCTs and non-RCTs, respectively. GRADE = Grading of Recommendations, Assessment, Development and Evaluations, HSROC = hierarchical summary receiver operating characteristic, QUADAS = Quality Assessment of Diagnostic Accuracy Studies, REML = restricted maximum likelihood, RCT = randomized controlled trial, RoB = Risk of Bias in randomized trials, ROBINS-I = Risk of Bias in Non-randomized Studies of Interventions
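As an illustration of the regression-based asymmetry tests listed in Table 5, Egger's test regresses each study's standardized effect on its precision and examines whether the intercept departs from zero. The sketch below uses hypothetical data and plain Python; a real analysis would rely on an established package and would compare the intercept's t statistic against a t distribution with k − 2 degrees of freedom.

```python
import math

def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect (y/se) on precision (1/se) by ordinary least
    squares and return the intercept with its t statistic. An intercept
    far from zero suggests small-study effects."""
    z = [y / s for y, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(z)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxz = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    slope = sxz / sxx
    intercept = mz - slope * mx
    # Residual variance and the standard error of the intercept
    resid = [zi - (intercept + slope * xi) for xi, zi in zip(x, z)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx * mx / sxx))
    return intercept, intercept / se_int

# Hypothetical effect sizes and standard errors from four studies, where
# smaller (less precise) studies show larger effects -> positive intercept.
intercept, t_stat = egger_intercept([0.2, 0.3, 0.5, 0.8],
                                    [0.10, 0.15, 0.20, 0.30])
```

Note that, as footnote § of Table 4 cautions, a significant asymmetry is not proof of non-reporting bias; other small-study effects can produce the same pattern.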

A substantial proportion of meta-analyses in the field of radiology are DTA research. In 2018, an extension of the PRISMA 2009 statement was developed for systematic reviews of DTA studies (the PRISMA-DTA statement) [57]. Compared with the PRISMA 2020 statement, the PRISMA-DTA statement requires specific information regarding the index test, including its clinical role and the 2 × 2 data (true positive, false positive, false negative, and true negative) for each study. However, the PRISMA 2020 statement provides more comprehensive checklists for the remaining areas, such as data extraction, data handling, statistical analysis, and result presentation. Thus, authors who conduct systematic reviews of DTA studies should follow the PRISMA 2020 statement in general and refer to the PRISMA-DTA statement for DTA-specific requirements [5].

There are several limitations to our study. First, the articles used in our study were from a single journal, which may limit the generalizability of the results. However, the Korean Journal of Radiology may serve as a representative sample given its reputation in the field of radiology, nuclear medicine, and imaging (rank: 36 of 452 journals in Scopus) and its wide coverage of topics as a general radiology journal. Second, a detailed statistical background for each method in meta-analysis was not provided. In addition, we did not cover advanced techniques in meta-analysis, such as individual participant data meta-analysis or network meta-analysis [58,59]. Third, while our study focused on the reporting quality of systematic reviews and meta-analyses, reporting quality does not necessarily indicate the quality of the study itself. Proper research questions based on the population, intervention, comparison, outcome (PICO) framework and the purpose of conducting the systematic review or meta-analysis should be well established beforehand [1].
Despite these limitations, our study clearly identified which items should be improved for high-quality systematic review articles. Authors and reviewers who are interested in systematic reviews or meta-analyses should be familiar with the PRISMA 2020 statement. Our checklists may help authors to identify which items of the PRISMA 2020 statement should be reinforced prior to submission.