Literature DB >> 33326429

Reporting and methodological quality of systematic reviews and meta-analysis with protocols in Diabetes Mellitus Type II: A systematic review.

Daniel Christopher Rainkie, Zeinab Salman Abedini, Nada Nabil Abdelkader.

Abstract

BACKGROUND: Systematic reviews with or without meta-analyses (SR/MAs) are strongly encouraged to work from a protocol to facilitate high quality, transparent methodology. The completeness of reporting of a protocol (PRISMA-P) and manuscript (PRISMA) is essential to the quality appraisal (AMSTAR-2) and appropriate use of SR/MAs in making treatment decisions.
OBJECTIVES: The objectives of this study were to describe the completeness of reporting and quality of SR/MAs, assess the correlations between PRISMA-P, PRISMA, and AMSTAR-2, and to identify reporting characteristics between similar items of PRISMA-P and PRISMA.
METHODS: We performed a systematic review of Type 2 Diabetes Mellitus SR/MAs of hypoglycemic agents with publicly available protocols. Cochrane reviews, guidelines, and specific types of MA were excluded. Two reviewers independently, (i) searched PubMed and Embase between 1/1/2015 to 20/3/2019; (ii) identified protocols of included studies by searching the manuscript bibliography, supplementary material, PROSPERO, and Google; (iii) completed PRISMA-P, PRISMA, and AMSTAR-2 tools. Data analysis included descriptive statistics, Pearson correlation, and multivariable linear regression.
RESULTS: Of 357 relevant SR/MAs, 51 had available protocols and were included. The average score for PRISMA-P was 15.8±3.3 (66%; maximum 24) and 25.2±1.1 (93%; maximum 27) for PRISMA. The quality of SR/MAs assessed using the AMSTAR-2 tool identified an overall poor quality (63% critically low, 18% low, 8% moderate, 12% high). The correlation between the PRISMA-P and PRISMA was not significant (r = 0.264; p = 0.06). Correlation was significant between PRISMA-P and AMSTAR-2 (r = 0.333; p = 0.02) and PRISMA and AMSTAR-2 (r = 0.555; p<0.01). Discrepancies in reporting were common between similar PRISMA-P and PRISMA items.
CONCLUSION: Adherence to protocol reporting guidance was poor while manuscript reporting was comprehensive. Protocol completeness is not associated with a completely reported manuscript. Independently, PRISMA-P and PRISMA scores were weakly associated with higher quality assessments but insufficient as a surrogate for quality. Critical areas for quality improvement include protocol description, investigating causes of heterogeneity, and the impact of risk of bias on the evidence synthesis.

Entities:  

Year:  2020        PMID: 33326429      PMCID: PMC7743973          DOI: 10.1371/journal.pone.0243091

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Systematic reviews are considered the highest form of evidence for all types of clinical questions to inform evidence-based practice [1]. When possible and relevant, various meta-analytic methods can be used to aggregate the results of included studies into a single point estimate and 95% confidence interval to reach a conclusion of benefit, no difference, or harm. Ultimately, clinicians use the information from systematic reviews with or without meta-analyses (SR/MAs) to inform their treatment decisions. Making treatment decisions based on limited or unreliable information puts patients at risk of harm. Therefore, as part of the evidence-based process, clinicians should critically review their SR of interest to determine the accuracy and reliability of its conclusions [2].

Several checklists and tools have been developed to help clinicians better understand the internal and external validity of their article of interest and to aid authors in producing valid, reproducible results. The 2017 “A MeaSurement Tool to Assess systematic Reviews” (AMSTAR-2) tool, an update to the original AMSTAR tool, provides readers a systematic process to evaluate the quality of SR/MAs that include randomized controlled trials or non-randomized studies of interventions [3]. Without adequate reporting within a published manuscript, readers of SR/MAs are unable to make accurate quality judgements. The Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) statement, formerly the QUality Of Reporting Of Meta-analyses (QUOROM) statement, instructs authors to transparently detail their methodology and approach to coalescing the available data [4, 5]. Many journals require the PRISMA checklist when a SR/MA is submitted for publication [6]. A cross-sectional study of therapeutic non-Cochrane SR/MAs in 2016 found that approximately 2 of every 3 published SR/MAs did not use a protocol or did not use PRISMA as a reporting guideline [7].
It is recommended that a pre-specified protocol be used as the foundation for completing a SR/MA, both for transparent science and to identify the risk of selection and reporting bias. In 2015, the PRISMA group published several extensions to aid authors in the transparent reporting of their manuscripts. One of these extension checklists, the PRISMA extension for protocols (PRISMA-P), is intended to enhance the completeness of information presented in a protocol of a SR/MA [8, 9]. Pre-registered systematic reviews have been associated with higher revised-AMSTAR scores compared to non-registered reviews [10]. Although protocols for SR/MAs are rarely available, and the presence of a protocol is associated with higher-quality studies, it remains unknown how well protocols are reported in the literature and whether more complete protocols are associated with higher-quality studies as assessed by newer, more comprehensive quality tools.

The primary outcomes of this study were to 1) assess the completeness of reporting of available protocols according to the PRISMA-P checklist, 2) assess the completeness of reporting of SR/MAs according to the PRISMA checklist, and 3) assess the quality of the SR/MAs according to the AMSTAR-2 tool. The secondary outcomes were to 1) determine the correlations between PRISMA-P, PRISMA, and the overall SR/MA quality conclusions and 2) describe the discrepancies in reporting between PRISMA-P and PRISMA.

Methods

We conducted and reported this systematic review in accordance with the PRISMA statement (S1 Checklist) [5]. A pre-specified protocol was submitted for publishing and can be found in the supplementary material in accordance with the PRISMA-P statement (S1 File) [11].

Study population

The focus of this review was on SR/MAs of pharmacological interventions used in the management of Type 2 Diabetes Mellitus (T2DM). We chose this study population due to the widespread availability of SR/MAs assessing the plethora of medications and classes of medications on the market.

Search strategy

We systematically searched the MEDLINE (via PubMed) and Embase databases for all systematic reviews or meta-analyses investigating pharmacological interventions used in the management of T2DM for any outcome. The MeSH terms used for the PubMed search were: ("Diabetes Mellitus, Type 2"[Mesh] AND "Hypoglycemic Agents"[Mesh] AND ((Meta-Analysis[ptyp] OR systematic[sb]) AND ("2015/01/01"[PDAT]: "2019/03/20"[PDAT]) AND "humans"[MeSH Terms] AND English[lang])). In Embase, the Emtree terms and limits used were: ('non insulin dependent diabetes mellitus'/exp/mj AND 'antidiabetic agent'/exp/mj AND ([systematic review]/lim OR [meta analysis]/lim) AND [2015-2019]/py). The filters applied in both databases were human-only studies and studies published in English. The English-only restriction was applied because no translation services were available and English systematic reviews comprise the majority of the available literature. We limited our search to articles published between 1 January 2015 and 20 March 2019 because the PRISMA-P checklist was published in 2015.

Eligibility criteria

The pre-specified eligibility criteria comprised SR/MAs with accessible protocols. We identified protocols through a search of the manuscript; the bibliography; the journal website for supplementary materials or appendices; the PROSPERO database; and a Google search. We did not include SR/MAs that stated a protocol was available only by contacting the authors; the rationale is that it would be rare for clinicians to access this information to completely appraise an article before interpreting and applying its results. Cochrane reviews were excluded since a peer-reviewed protocol is mandatory for their publication. Other exclusions were published guidelines, select-article meta-analyses (i.e., where no systematic review was conducted), network meta-analyses, and individual patient data meta-analyses, because the reporting and quality indicators specific to each of these designs are not addressed in the PRISMA-P, PRISMA, or AMSTAR-2 tools.

Study selection

The systematic search of the databases and screening of articles by title and abstract were completed independently by all three reviewers NA, ZA, and DR. After removing duplicates, if two or more reviewers identified an article for inclusion on screening, the full text was retrieved. If only a single reviewer identified an article for inclusion on screening, a final decision to retrieve the full text was based on a discussion between all the reviewers. Full texts of the screened articles were collected, and the complete pre-specified inclusion (e.g. protocol availability) and exclusion criteria were applied. We initially intended to use a random sample of 100 eligible articles to be included in our analysis; however, given the limited availability of SR/MAs with available published protocols, we included all relevant articles.

Data management and extraction

All authors read and studied the explanation and elaboration documents associated with each of the PRISMA-P, PRISMA, and AMSTAR-2 checklists/tools to ensure the original intents were kept [3–5, 8, 9, 11–14]. The PRISMA-P checklist has 17 items for assessing protocols, while the PRISMA checklist has 27 items for assessing the manuscript itself. The AMSTAR-2 checklist has 16 items, including 7 critical domains, used to assess the quality of SR/MAs; strict guidance is provided on how to summarize the critical and non-critical domains to determine overall study quality. A standardization activity was performed by three members of the research team: six articles (10% of the included studies) were evaluated independently by each reviewer using the three checklists, and the three reviewers then met to review and discuss their scores of 1–2 articles at a time before assessing the remaining included articles. The remaining articles were assigned two reviewers (DR and ZA or NA) for independent review and data extraction according to the PRISMA-P, PRISMA, and AMSTAR-2 checklists. Discrepancies between assessors were managed by discussion with a third reviewer to reach consensus. Each checklist item was given 1 mark if it was completed or 0 marks if it was not. The AMSTAR-2 tool allows a “partial yes” assessment for some items; for analysis purposes we counted these as a full yes if there was consensus that the limitation was not major, or a full no if there were major concerns about how it could impact the interpretation of the study. We did this because the guidance document does not consider partial yes assessments in the final judgement of the article (critically low, low, moderate, or high confidence).
For the purpose of generalizability, items which were not relevant to the study being assessed were given a full mark (for example in the PRISMA checklist: if the article was a systematic review; item 21 “present results of each meta-analysis done; including confidence intervals and measures of consistency” would be deemed not a relevant criterion, therefore, it would be given a full mark).
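The per-item scoring rules described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function names and data layout are ours, not the study's extraction forms, and "partial yes" is assumed to have already been resolved by reviewer consensus.

```python
# Hypothetical sketch of the per-item scoring scheme described above.
# Item labels and data structures are illustrative, not from the study.

def score_item(assessment: str, applicable: bool = True) -> int:
    """Score one checklist item: 1 if completed (or not applicable), else 0.

    A 'partial yes' is assumed to already be resolved by consensus into
    'yes' (not a major limitation) or 'no' (major concern).
    """
    if not applicable:  # non-relevant items receive a full mark
        return 1
    return 1 if assessment == "yes" else 0

def checklist_score(assessments: list[tuple[str, bool]]) -> int:
    """Total score for a checklist, e.g. PRISMA (27 items)."""
    return sum(score_item(a, applicable) for a, applicable in assessments)

# Example: a systematic review without meta-analysis, where item 21
# (meta-analysis results) is not applicable and still earns a full mark.
items = [("yes", True)] * 20 + [("no", True)] + [("yes", False)] * 6
print(checklist_score(items))  # 26 of a possible 27
```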

Outcomes

The primary outcomes were to 1) assess the completeness of reporting of available protocols according to the PRISMA-P checklist, 2) assess the completeness of reporting of SR/MAs according to the PRISMA checklist, and 3) assess the quality of the SR/MAs according to the AMSTAR-2 tool. The secondary outcomes were to 1) determine the correlations between completeness of reporting of the protocol (PRISMA-P), the manuscript (PRISMA), and the overall SR/MA quality (defined by AMSTAR-2 as critically low, low, moderate, or high) and 2) describe the frequency of discrepancies in reporting between PRISMA-P and PRISMA. A post-hoc analysis was completed after it became apparent that the overall quality assessment was “low” or “critically low” for most articles and that the reporting of protocols was poor. Item 2 of the AMSTAR-2 tool assesses the reporting and content of protocols and is defined as a critical item for high-quality SR/MAs. We hypothesized that, regardless of the presence or absence of a protocol, the results and interpretation of SR/MAs could be deemed appropriate if the other critical criteria of the AMSTAR-2 tool were met; that is, despite what the authors had planned, what was performed could be reasonable and adequate. We therefore performed a sensitivity analysis assuming item 2 was given a “yes” to determine the change in the summary assessment.
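The summary assessment this sensitivity analysis manipulates follows the published AMSTAR-2 decision rule, which, as we understand it, maps counts of critical and non-critical weaknesses onto the four confidence categories. A minimal sketch (our own code, not the study's):

```python
# Sketch of the AMSTAR-2 summary rating rule as published in its guidance:
# confidence depends on how many critical vs. non-critical weaknesses remain.
def amstar2_rating(critical_weaknesses: int, noncritical_weaknesses: int) -> str:
    """Map weakness counts to the four AMSTAR-2 confidence categories."""
    if critical_weaknesses == 0:
        # No critical flaws: at most one non-critical weakness keeps "high".
        return "high" if noncritical_weaknesses <= 1 else "moderate"
    # One critical flaw drops to "low"; more than one to "critically low".
    return "low" if critical_weaknesses == 1 else "critically low"

# A review whose only critical weakness was the missing protocol (item 2)
# would move up a category once item 2 is assumed to be a "yes".
print(amstar2_rating(1, 2))  # low
print(amstar2_rating(0, 2))  # moderate
```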

Data synthesis

Descriptive statistics were used to summarize the results of each of the three checklists. Normality was tested using the Kolmogorov–Smirnov test. The associations between PRISMA-P and AMSTAR-2 and between PRISMA and AMSTAR-2 were assessed using linear regression. Multiple linear regression of all three checklists was completed using the PRISMA and PRISMA-P scores as independent variables. Interrater reliability, using Cohen's kappa coefficient and the percent agreement between raters, was used to describe agreement between assessors. Statistical significance was defined as a two-tailed p-value of less than 0.05. All data were analyzed using SPSS version 25.0; tables and figures were configured using Microsoft Excel 2016.
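The study's analysis was run in SPSS, but the two statistics central to its conclusions, Pearson's r and Cohen's kappa, are simple enough to sketch in pure Python for readers who want to check the arithmetic. This is an illustrative reimplementation, not the study's code:

```python
# Illustrative pure-Python versions of the two key statistics used above
# (the study itself used SPSS v25 for all analyses).
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary (0/1) item judgements."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    p_yes = (sum(r1) / n) * (sum(r2) / n)          # chance both rate 1
    p_no = (1 - sum(r1) / n) * (1 - sum(r2) / n)   # chance both rate 0
    pe = p_yes + p_no                              # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected by chance from the raw percent agreement, which is why the two can diverge when ratings are heavily skewed.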

Results

Search results

A total of 1,023 articles were identified from the initial search of the 2 databases. Upon screening 357 unique articles, 51 (14.3%) met the inclusion criteria and were included in the analysis. The most common reason for exclusion was that protocols were not available (n = 210, 68.6% of excluded articles). Fig 1 illustrates the process.
Fig 1

Flow diagram of included studies.

It was observed that the availability of protocols has increased steadily since 2015 (Table 1). Protocols were overwhelmingly located through registration in the PROSPERO database. PROSPERO citations were often clearly stated in the manuscript. Less common locations for protocols included the supplementary material or journal website and the bibliography as a reference to a previously published protocol. Seven articles stated that they worked from a pre-specified protocol that was only available through contacting the authors. All excluded articles, with reasons, are referenced in S2 File.
Table 1

Characteristics of included studies (n = 51).

Year Published | 2015 | 2016 | 2017 | 2018 | 2019
Protocol Available (n) | 3 | 10 | 17 | 17 | 4
Met Inclusion Criteria and Protocol Not Available (n, %) | 40 (8%) | 63 (16%) | 54 (33%) | 46 (37%) | 7 (57%)
Protocol Access*
    Journal Website / Supplementary Material | 1 | 2 | 4 | 4 | 0
    Bibliography | 0 | 0 | 0 | 1 | 0
    PROSPERO | 3 | 8 | 16 | 15 | 4

*Some manuscripts had protocols available from multiple sources.


Reporting and quality results

The average PRISMA-P score for the completeness of the included protocols was 15.7 ± 3.3 (65.6%), with a minimum of 7 and a maximum of 23 out of a possible 24 points. Items not reported in more than 50% of protocols were 3b, 10, 11b, 15a, 15d, and 17. Details for each individual item of the PRISMA-P results are found in Fig 2. The summary results from each included SR/MA are described in Table 2.
Fig 2

Completeness of reporting of protocols as assessed by the PRISMA-P checklist.

Table 2

PRISMA-P, PRISMA and AMSTAR-2 quality for each included article, organized alphabetically.

Article | PRISMA-P Checklist Score (max = 24) | PRISMA Checklist Score (max = 27) | AMSTAR-2 Critical Weaknesses† (max = 9) | AMSTAR-2 Minor Weaknesses‡ (max = 7) | AMSTAR-2 Quality
Adil M Clinical Epidemiology and Global Health 2018 [15] | 17 | 25 | 2 | 3 | CL
Andreadis P Diabetes Obes Metab 2018 [16] | 16 | 25 | 1 | 2 | L
Anyanwagu U Diabetes Res Clin Pract 2016 [17] | 14 | 26 | 2 | 0 | M
Black CD Diabetes Therapy 2017 [18] | 23 | 27 | 0 | 0 | H
Cai X Diabetes Investig 2017 [19] | 18 | 24 | 3 | 2 | CL
Cai X Diabetes Technol Ther 2016 [20] | 18 | 23 | 2 | 6 | CL
Cai X Expert Opin Pharmacother 2016 [21] | 18 | 24 | 2 | 3 | CL
Cai X Expert Opin Pharmacother 2017 [22] | 19 | 24 | 4 | 1 | CL
Cai X J Diabetes Investig 2018 [23] | 18 | 25 | 1 | 1 | L
Cai X Obesity 2018 [24] | 17 | 23 | 2 | 1 | CL
Cai X PLoS ONE 2016 [25] | 17 | 25 | 3 | 2 | CL
Campbell JM Ageing Res Rev 2017 [26] | 18 | 27 | 0 | 4 | M
Castellana M Diabetes/Metabolism Research and Reviews 2019 [27] | 20 | 25 | 2 | 1 | CL
Crowley MJ Ann Intern Med 2017 [28] | 21 | 27 | 0 | 0 | H
de Wit HM Br J Clin Pharmacol 2016 [29] | 20 | 26 | 2 | 2 | CL
Dicembrini I Acta Diabetol 2017 [30] | 14 | 26 | 2 | 3 | CL
Elgebaly A Experimental and Clinical Endocrinol 2018 [31] | 16 | 26 | 1 | 1 | L
Elgendy IY Am J Cardiovasc Drugs 2017 [32] | 12 | 24 | 2 | 4 | CL
Farah D Diabetes Research and Clin Pract 2019 [33] | 14 | 22 | 4 | 4 | CL
Giugliano D Endocrine 2016 [34] | 10 | 24 | 3 | 3 | CL
Glechner A Diabetologia 2015 [35] | 14 | 26 | 2 | 2 | CL
Gray LJ Diabetes Obes Metab 2015 [36] | 13 | 25 | 4 | 3 | CL
Hansen M Diabetes Complications 2017 [37] | 18 | 26 | 0 | 2 | H
Khunti K Diabetes Care 2017 [38] | 11 | 26 | 2 | 4 | CL
Khunti K Diabetes Obes Metab 2018 [39] | 17 | 23 | 4 | 6 | CL
Li X Frontiers in Pharmacology 2018 [40] | 15 | 23 | 3 | 1 | CL
Li X Endocrine 2018 [41] | 14 | 24 | 3 | 4 | CL
Liao HW Endocrinology Diabetes and Metabolism 2018 [42] | 13 | 26 | 2 | 2 | CL
Liu X Lipids Health Dis 2016 [43] | 12 | 26 | 2 | 2 | CL
Maiorino MI Diabetes Care 2017 [44] | 8 | 26 | 2 | 2 | CL
Maiorino MI Diabetes Obes Metab 2018 [45] | 7 | 22 | 2 | 4 | CL
Mazidi M J Am Heart Assoc 2017 [46] | 15 | 26 | 1 | 3 | L
Mazidi M J Diabetes Complications 2017 [47] | 13 | 25 | 1 | 2 | L
McGovern A Diabetes Obes Metab 2018 [48] | 23 | 26 | 2 | 2 | CL
Meng Q J Diabetes Investig 2016 [49] | 14 | 25 | 1 | 1 | L
Min SH J Diabetes Investig 2018 [50] | 15 | 24 | 2 | 3 | CL
Mishriky BM Diabetes and Metabolism 2018 [51] | 16 | 24 | 3 | 3 | CL
Monami M Acta Diabetol 2017 [52] | 14 | 25 | 2 | 2 | CL
Monami M Diabetes Obes Metab 2018 [53] | 15 | 25 | 2 | 3 | CL
Monami M Diabetes Res Clin Pract 2017 [54] | 15 | 25 | 3 | 3 | CL
Ostawal A Diabetes Therapy 2016 [55] | 15 | 27 | 1 | 3 | L
Pang B Diabetes Ther 2017 [55] | 18 | 26 | 3 | 4 | CL
Peter EL Ethnopharmacol 2019 [56] | 16 | 25 | 0 | 2 | M
Price HI BMJ Open 2015 [57] | 16 | 27 | 0 | 1 | H
Saad M Int J Cardiol 2017 [58] | 14 | 24 | 0 | 2 | M
Sharma M BMJ Open 2017 [59] | 17 | 24 | 2 | 1 | CL
Shi F Frontiers in Pharmacology 2018 [60] | 15 | 27 | 0 | 0 | H
Storgaard H PLoS One 2016 [61] | 21 | 27 | 0 | 1 | H
Tang GH Cancer Epidemiology Biomarkers and Prevention 2018 [62] | 18 | 25 | 1 | 1 | L
Wang C Diabet Obes Metab 2018 [63] | 17 | 27 | 1 | 1 | L
Wang X Medicine (Baltimore) 2018 [64] | 14 | 26 | 2 | 2 | CL

†AMSTAR-2 items 1, 3, 5, 6, 8, 10, 12, 14, 16;

‡AMSTAR-2 items 2, 4, 7, 9, 11, 13, 15;

CL = critically low, L = low, M = moderate, H = high quality.

Regarding the published manuscripts, adherence to the PRISMA reporting guidelines was high, with an average score of 25.1 ± 1.1 out of a possible 27 points (93.2%; minimum 23, maximum 27). All items were reported in more than 50% of the published manuscripts. Further details of each individual item can be found in Fig 3.
Fig 3

Completeness of reporting of manuscripts as assessed by the PRISMA checklist.

The overall quality of published manuscripts evaluated using the AMSTAR-2 tool was poor. Using the AMSTAR-2 guidance to assess the quality of the manuscripts, including supplementary material, 63% were classified as critically low (n = 32), 18% as low (n = 9), 8% as moderate (n = 4), and 12% as high quality (n = 6). Further details of the critical appraisal can be found in Figs 4 and 5.
Fig 4

Quality assessment according to AMSTAR-2 tool.

Fig 5

Summary quality assessment according to AMSTAR-2 tool.

We performed a post-hoc sensitivity analysis where item number 2 (authors provided a detailed protocol) on the AMSTAR-2 tool was given a yes. The manuscripts rated as critically low decreased to 17 (33%), 20 (39%) were rated as low, 7 (14%) as moderate, and 7 (14%) as high.

Correlations

There was no statistically significant correlation between PRISMA-P and PRISMA (r = 0.264; p = 0.06). The association was statistically significant between PRISMA-P and AMSTAR-2 (r = 0.333; r2 = 0.11; p = 0.02) and between PRISMA and AMSTAR-2 (r = 0.555; r2 = 0.31; p<0.01). While the bivariate models of PRISMA-P or PRISMA scores with AMSTAR-2 quality were statistically significant, the correlations were weak, highlighting that factors beyond the completeness of reporting, and not measured in this study, contribute to these observations. The scatterplot diagrams imply a linear relationship and can be viewed in S1 Diagrams; however, given the small sample size, the clustering of PRISMA scores at the higher end alongside overwhelmingly critically low AMSTAR-2 quality assessments may threaten the linearity assumption. When the scores were combined in a multiple linear regression on AMSTAR-2 quality category, PRISMA remained significant (p<0.01) while PRISMA-P did not (p = 0.10), indicating a lack of collinearity between the independent variables. A review of the scatterplot and the histogram of residuals (S1 Diagrams) suggests the linear regression model is reasonable.

Agreement

Interrater agreement for individual items was significant for PRISMA-P (κ = 0.823, p<0.001; percent agreement 91.9%), PRISMA (κ = 0.427, p<0.001; percent agreement 91.2%) and AMSTAR-2 (κ = 0.623, p<0.001; percent agreement 80.3%).

Discrepancies in PRISMA and PRISMA-P reporting

The results, shown in Table 3, indicate that items reported in the manuscript according to the PRISMA checklist were often not reported in the protocol according to the PRISMA-P checklist, or vice versa, demonstrating discrepancies in reporting.
Table 3

Comparison of similar items reported in manuscripts and protocols according to PRISMA and PRISMA-P checklists.

Item | Described in Only One Checklist (PRISMA or PRISMA-P) | Described in Both Checklists
Rationale | 22 (43%) | 29 (57%)
Objectives | 13 (25%) | 38 (75%)
Eligibility | 0 | 51 (100%)
Information sources | 3 (6%) | 48 (94%)
Search strategy | 25 (49%) | 26 (51%)
Study selection | 26 (51%) | 25 (49%)
Data collection process | 19 (37%) | 32 (63%)
Data items | 26 (51%) | 25 (49%)
Risk of bias of included studies | 10 (20%) | 41 (80%)
Meta-analysis methods | 20 (39%) | 31 (61%)
Additional analyses | 31 (61%) | 20 (39%)
Risk of bias across studies | 25 (49%) | 26 (51%)

Discussion

The results of our study show that while there is good adherence in the reporting of SR/MA manuscripts, adherence to the reporting of protocols is poor. The statistically insignificant correlation between PRISMA-P and PRISMA suggests that a well-reported protocol does not mean the manuscript will also be reported well. Our results highlight the overall low quality of the included studies, as assessed by the AMSTAR-2 tool, and emphasize that we must be selective when using this literature as part of evidence-based practice. Individually, higher PRISMA-P and PRISMA checklist scores were associated with a higher quality categorization; however, this correlation is weak. The existence of this correlation is intuitive, as authors must provide adequate detail for the quality of a SR/MA to be assessed accurately. However, when adjusted, only the PRISMA score was associated with higher-quality SR/MAs. This lack of collinearity indicates that other factors, separate from reporting, are responsible for increasing the quality of the evidence. These results implore users of the SR/MA literature to be thorough in their critical appraisal, as the completeness of reporting of a protocol or manuscript is an insufficient surrogate marker of quality. Our results indicate that manuscripts were generally reported well according to the PRISMA scores. Many journals require authors to submit the PRISMA checklist along with the manuscript to demonstrate the completeness of reporting; however, some discrepancies were present when comparing the provided checklists with our own assessment.
The lack of correlation between the PRISMA and PRISMA-P scores in our sample may be due to two possibilities. The first is how new the PRISMA-P statement is, published in 2015, relative to the PRISMA statement, published in 2009 (and updated from QUOROM, originally published in 1999), and the resulting unfamiliarity with what constitutes a high-quality protocol. We observed that protocols are being provided and made freely accessible more frequently over time, exemplifying the importance of transparent research; these results are consistent with the citation rates for each of these tools [65]. Given that there are several similar items between PRISMA-P and PRISMA, the second possibility is that authors are not adequately detailing their plans prior to embarking on their data collection and analyses, which would result in either deliberate or inadvertent selective reporting bias. Our results are similar to those of Tunis and coworkers, who reviewed 130 articles from 11 of the top radiology journals and found high adherence rates to the PRISMA checklist and a strong association (r = 0.86) between PRISMA and quality assessment according to the AMSTAR tool [66]. Work by Zhang and colleagues also found a strong association between PRISMA and AMSTAR ratings (r2 = 0.793) in 197 surgical SR/MAs [67]. Previously, the AMSTAR tool did not prescribe a standard methodology to assess the overall quality of a SR/MA, so common practice was to gauge quality using a total score (out of 11). The AMSTAR-2 tool highlights that not all items should be equally weighted: each item is defined as a major or a minor criterion, and clear guidance is provided on how to assign papers to one of the four categories [3, 5].
This fundamentally different approach to defining study quality could be why our results found a weak association between PRISMA score and AMSTAR-2-assessed study quality, whereas previous work demonstrated a strong association between PRISMA score and AMSTAR score. The methodology section of protocols contained the most common deficiencies. More than 50% of protocols did not report adequate details of the search strategy, the selection process, how and whether quantitative synthesis was appropriate, meta-biases such as publication bias or selective reporting, and an overall summary of the evidence (e.g., using the GRADE criteria). Other deficiencies were present but less common. Contrary to the reporting of protocols, authors closely followed the PRISMA reporting criteria, with every item reported by at least 50% of the manuscripts. The most common missing information was an explicit PICOS statement (population, intervention, comparison, outcomes, and study design), a detailed search strategy for at least one database, and the results of the risk of bias analysis across studies (e.g., publication bias or selective reporting bias). There are several similar reporting items between the PRISMA and PRISMA-P checklists, each found in its corresponding document. We compared similar items to determine whether the reporting was present in each document. We did not assess the details of what was provided, only whether it was present; for example, if the authors reported their search strategy in both the protocol and the manuscript, we did not check for discrepancies between the two information sources. The most common discrepancies between similar items were in the methodology of the search strategy, study selection, data items, additional analyses, and meta-biases across studies.
These items are considered critical items in the AMSTAR-2 tool and directly impact the trustworthiness of the results. The AMSTAR-2 assessment identified that more than 50% of studies did not fulfill major criterion 2 (adequately described protocol) and minor criterion 10 (sources of funding). Our AMSTAR-2 results are consistent with several other published studies indicating overall critically low or low quality SR/MAs: a review of 64 SR/MAs of pharmacological and non-pharmacological interventions in insomnia found that 40 were rated as low or critically low [68], and a review of 5 SR/MAs of acupuncture in primary dysmenorrhea found all 5 rated as critically low [69]. As of October 2019, the PROSPERO database no longer accepts protocols for which data extraction has already started, so users of the literature should expect to see new SR/MA protocols published as supplementary material. This is a possible methodological concern, as authors could change the protocol before publishing without a clear audit trail unless protocols are meticulously documented. Our study did not differentiate between protocols registered or updated in the PROSPERO database, those available in supplementary material, and those previously published. Our study is not without limitations. First, our search strategy used type 2 diabetes as a MeSH term; using it as a keyword may have produced more results to evaluate, potentially limiting the generalizability of the study. We also limited our results to English, which may have decreased our available sample. Second, we did not contact study authors for their protocols, which may have increased the sample size; however, this follows the practical approach that most users of the literature would take. Third, our assessment of overall SR/MA study quality may be an overestimate, as all items counted as “partial yes” were considered a full yes.
Our only deviation from the AMSTAR-2 tool was on item 7: if a clear, complete PRISMA flow diagram was included, we gave the authors a partial yes; only 3 articles would have met the criteria for a full yes. Finally, despite high interrater agreement ranging from 80% to over 90%, lower kappa values were found. Lower kappa values are common for binary ratings with an uneven distribution (e.g., 95% yes, 5% no); in this scenario the correction for agreement due to chance alone is unreliable, so kappa should not be interpreted in isolation.
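This "kappa paradox" can be made concrete with a small worked example. The counts below are invented for illustration only (they are not the study's data): when nearly all items are rated "yes", high percent agreement can coexist with a modest kappa.

```python
# Hypothetical illustration of the kappa paradox noted above: with a
# skewed binary distribution, high percent agreement can still yield a
# modest kappa. All counts are invented for illustration only.
def kappa_from_counts(both_yes, both_no, disagree):
    """Return (percent agreement, Cohen's kappa) from agreement counts.

    Disagreements are split evenly between the raters when forming the
    marginal 'yes' proportions (a simplifying assumption).
    """
    n = both_yes + both_no + disagree
    po = (both_yes + both_no) / n                 # observed agreement
    y1 = (both_yes + disagree / 2) / n            # rater 1 'yes' proportion
    y2 = (both_yes + disagree / 2) / n            # rater 2 'yes' proportion
    pe = y1 * y2 + (1 - y1) * (1 - y2)            # chance agreement
    return po, (po - pe) / (1 - pe)

# 100 items: 90 agreed 'yes', 2 agreed 'no', 8 disagreements.
po, kappa = kappa_from_counts(90, 2, 8)
print(f"agreement = {po:.0%}, kappa = {kappa:.2f}")  # agreement = 92%, kappa = 0.29
```

With 92% raw agreement but 94% of ratings being "yes" for each rater, chance agreement is already about 89%, leaving kappa near 0.29, which mirrors why the study's percent agreement and kappa diverge.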

Conclusion

Since the inception of PRISMA-P in 2015, there has been a steady increase in the number of available SR/MA protocols. However, still fewer than 1 in 3 SR/MAs have available protocols, and even when protocols are available, the average rate of completion is 65.6% according to reporting guidelines. The quality of recently published SR/MAs is surprisingly poor, even when disregarding the quality of protocols. Journals should encourage authors to follow PRISMA-P guidance as closely as authors currently follow PRISMA reporting guidelines. The most critical areas for improvement are the details of the provided protocol, investigating the causes of heterogeneity, and the impact of risk of bias on the evidence synthesis. This study should serve as an alert to both authors and users of the medical literature. The risk of using low or critically low quality SR/MAs in the decision-making process for patients with T2DM is surprisingly high. Users should take care to critically appraise articles to assess the reliability and accuracy of SR/MAs before applying their results to clinical practice. Authors should be aware of how their research will be assessed and prepare accordingly by incorporating clear and complete protocols alongside their typically well-reported manuscripts. There are clearly still many areas for improvement in the currently available literature; authors should start with the development of a protocol prior to embarking on a systematic review.

PRISMA 2009 checklist.

10 Aug 2020

PONE-D-20-16992
Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review
PLOS ONE

Dear Dr. Rainkie,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Sep 24 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future.
For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Ahmed Negida, MD
Academic Editor
PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: N/A
Reviewer #3: Yes

3. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes
Reviewer #3: Yes

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No
Reviewer #2: Yes
Reviewer #3: No

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: In this review, the authors assessed the quality of published SRs in the domain of type 2 diabetes treatments. They did this by assessing fulfillment of the PRISMA, PRISMA-P and AMSTAR2 checklists. Given the importance and prevalence of diabetes as well as the emerging treatment options for this condition, and the fact that SRs are generally placed at the top of the evidence hierarchy, an evaluation of the quality of published reviews is an excellent addition to the existing body of literature, especially given the relative recency of the AMSTAR2 checklist.
Overall, the manuscript is quite well-written and the authors seem to have done a thorough job with the review; nevertheless, there are a few recommendations I've attached below that I believe would further improve the quality of what is an already impressive manuscript. All comments were attached in the Word file.

Reviewer #2: This is a systematic review in which the authors investigate the reporting and methodological quality of the different systematic reviews, meta-analyses, and protocols related to Type 2 DM. Although the study is well-written, well-structured, and scientifically sound, many other studies have investigated the same topic with similar results. Despite the good quality of the manuscript, it lacks the novelty to be published in PLOS ONE.

Reviewer #3: The authors performed this study to evaluate (1) the completeness of reporting of available protocols according to the PRISMA-P checklist, (2) the completeness of reporting of SR/MAs according to the PRISMA checklist, and (3) the quality of the SR/MAs according to the AMSTAR-2 tool. Although the overall approach to the review is proper, I have a few comments:

1. General - There is an improper use of abbreviations in both the Abstract and Manuscript file. Abbreviations should be defined at first mention and used consistently thereafter.

2. Abstract - Objectives are not clearly outlined; please modify.

3. Methods - Search Strategy: using PubMed filters, such as Human, is not preferred, because some relevant articles may have been missed.

4. Results - Search Results are inconsistent with the numbers in Figure 1. For example, Lines 178-180: 52 studies met the inclusion criteria; however, it is "51" in Figure 1. Also, protocols were not available in 209 studies; however, it is 203 in Figure 1. Please recheck!

5. Language: The entire manuscript needs extensive professional revision for grammatical errors and stylistic editing to improve the quality of English.
For example:
- Lines 20-22: Consider using a comma instead of a semicolon.
- Lines 28, 33, etc.: When the sentence contains a series of three or more words, phrases, or clauses, consider inserting a comma to separate the elements.
- Line 55: replace "to produce" with "in producing".
- Line 57: replace "to evaluating" with "to evaluate".
- Lines 67-68: "for completing a SR/MA for transparent science and to minimize" - faulty parallelism.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #2: No
Reviewer #3: No

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

Submitted filename: Type 2 Diabetes Review of SRs.docx
1 Sep 2020

Dear Dr Negida, peer reviewers, and the editorial team,

On behalf of my research team, we would like to thank you all for the time and effort you have taken to review our manuscript titled "Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review". We would like to offer special thanks to those who provided detailed feedback on how to further improve our manuscript. Below you will find our responses to each comment and their associated actions, written in blue text. We hope that this will meet your expectations.

Review Comments to the Author

Reviewer 1

We thank the reviewer for their attention to detail and the feedback provided to us. We appreciate the time that was spent on their detailed report, which provides great model feedback for the students involved in this project. We thank you for your kind words regarding the quality of this study. We spent many hours reviewing and revising this manuscript prior to submission.

• General advice:
  o Periods should be placed after (not before) citations.
    - Thank you for highlighting this. We have revised the entire manuscript and double-checked that all citations are provided before the end of each sentence. These updates have not been highlighted in the track changes of the manuscript.
  o Although generally well-written, the manuscript would probably benefit from a revision of the grammar throughout. I have pointed out some examples of linguistic errors below, but there are a few others I haven't included below.
    - We thank the reviewer for their attention to detail regarding this matter. In addition to the comments provided below, we have also applied the Microsoft Editor, Grammarly, and an independent third party to review the grammar of the manuscript.
Major revisions:

• Intro
  o Page 5, line 77: Linguistic error "The secondary outcomes for this study was to"
    - "Was" was updated to "were".
• Methods:
  o Page 6, lines 92 and 95: PubMed is not a database; it is a search engine designed to browse MEDLINE, which itself is the database.
    - Thank you for this correction. We have updated the sentence to read the following: "A systematic search was performed across MEDLINE (using PubMed) and Embase databases…"
  o Page 6, lines 96 to 100: The search strategy does not seem very comprehensive (only two terms for diabetes were used), which may be a concerning limitation as it may limit the representativeness of the retrieved SRs. This is addressed in the limitations section of the discussion, and I commend the authors for acknowledging this point.
    - We thank you for this comment and agree that this is indeed a limitation and is important to note. In the development of our protocol we debated the use of keywords or the MeSH term for type 2 diabetes. We finalized our choice on the MeSH term to make the search results more specific. On review, using keywords for type 2 diabetes instead of the MeSH term, we found that the number of results does not increase substantially. Repeating the search on Aug 24, 2020, we found that the search results increased from 428 to 469. If we extrapolate our inclusion/exclusion criteria results to these 41 articles, we may have included an additional 5-6 articles.
  o Eligibility criteria:
    - Page 7, lines 113-114: "meta-analysis" should be changed to "analyses" (plural).
      Thank you; this has been corrected. The sentence now reads, "Other types of excluded studies included guidelines, select article meta-analyses (i.e. no systematic review), network meta-analyses, and individual patient meta-analyses because the issues specific to each of these designs are not addressed in the PRISMA-P, PRISMA or AMSTAR2 tools".
  o Study selection:
    - Page 7, line 117.
Linguistic error: "By independently by"
      Thank you; we have removed the first "by". The sentence now reads, "The selection of the studies was completed independently by all three reviewers NA, ZA, and DR".
    - I believe clarifying two points in the screening process would benefit the manuscript. First, were all records identified screened by all 3 reviewers (i.e. triple screening)?
      This is correct; there was triple screening. We have updated the sentence as noted above.
    - Second, how were conflicts settled? (Discussion? Referral to an independent individual? Obviously not a fourth author, since the manuscript only has three authors.)
      Thank you for identifying this. We have added two sentences to clarify the screening process as follows: "If two or more reviewers identified an article for inclusion on screening, the full text was retrieved. If only a single reviewer identified an article for inclusion on screening, a final decision to retrieve the full text was based on discussion between all the reviewers."
  o Data management and extraction:
    - Page 7, line 131: Linguistic error: "Checklist to the 1-2 articles at a time"
      This has been updated to read the following: "All three reviewers independently applied the checklists to 1-2 articles at a time."
    - Page 8, line 145: Linguistic error: "items which were not relevant to the study being assessed were given we gave the authors a full mark"
      This has been updated to read the following: "For the purpose of generalizability, items which were not relevant to the study being assessed were given a full mark."
    - Page 8, lines 147-148: Linguistic error: "Given full mark" to "Given a full mark"
      This has been updated in the text.
  o Outcomes:
    - Lines 154-155: Linguistic error: "Correlation between….compared" to "Correlation between….and"
      Thank you for noting this.
We have updated the sentence to read: "The secondary outcomes were to determine the correlation between completeness of reporting and quality of the protocol (PRISMA-P), study (PRISMA), and the overall SR/MA quality (defined by AMSTAR-2 as critically low, low, moderate or high) and describe the frequency of discrepancies in reporting between PRISMA-P and PRISMA."
    - Line 157: Linguistic error: "After it was it was"
      We have removed the duplication.
    - Lines 159-160: Change to "Hypothesized that regardless of the absence or presence" (rather than "Despite the absence or presence").
      Thank you for this; we have updated the sentence to read: "On review, we hypothesized that regardless of the presence or absence of a protocol, the results and interpretation of SR/MAs could be deemed appropriate if the other critical criteria of the AMSTAR 2 tool were met."
  o Data synthesis:
    - Line 170: I believe it would benefit the clarity of the manuscript if the authors were to clarify what variables were entered into the multiple linear regression model (presumably they entered PRISMA and PRISMA-P as the independent variables, but this should be made sufficiently clear to the readers).
      We thank you for this comment, and you are correct. We have added an additional sentence to clarify this point: "PRISMA and PRISMA-P scores were used as independent variables."
  o Results
    - Search results
      • Page 10, line 191: Change "7" to "seven".
        This has been updated in the text.
      • The stated number of included studies is 52; however, the PRISMA diagram shows 51 included studies. It is probably a slight error, but it should be corrected both for transparency's sake and as it affects the calculation of proportions/percentages (see later).
      • The percentage of included studies given is 14.6% (which corresponds to 51 in the PRISMA diagram, but does not correspond to 52 (14.9%) as written in the Results section).
        Thank you for your detailed review and for catching this typo.
The results text has been updated to 51 and now matches the PRISMA flow diagram shown in Figure 1.
    - Reporting and quality results
      • Page 11, line 198: "Items in which" to "Items which"
        This has been updated in the text.
      • Lines 214 to 217: I believe the authors may have made a slight mistake when calculating the percentages. The stated total number of included articles is 52. The number of articles of critically low quality is 32. The stated percentage is 63%, while the actual percentage (32/52) is 61.5%. I believe the same error is present in the sensitivity analysis. This may be due to the aforementioned error in stating the number of included studies.
        Thank you for your detailed review of our calculations. With the result number corrected to 51 studies, the percentages now match.
    - Correlations:
      • Page 17, lines 230-231: I believe the phrase "Can be used to independently predict" should be removed, as it implies a stronger correlation than there really is.
        Thank you for this comment. We agree that reading this part of the sentence by itself is misleading. We have updated the sentence to read the following: "While the independent scores of PRISMA-P and PRISMA could both be used to predict AMSTAR2 quality category, the correlation was weak."
      • Line 233: I believe the manuscript would benefit if the authors clarified what variables were entered into the multiple linear regression model. Was it a bivariate model where PRISMA and PRISMA-P were the independent variables? If so, the authors should clarify that the correlation score between PRISMA and PRISMA-P suggested a lack of collinearity and therefore justified entering both variables into the same model, as most readers would (wrongly, as it turns out) assume a high degree of collinearity between these two scores.
        Thank you for this comment. We agree that this should be detailed in the text.
We have updated the sentence to read the following: "However, when combined in multiple linear regression, there was a lack of collinearity between AMSTAR2 quality category and the independent variables PRISMA (p<0.01) and PRISMA-P (p=0.10)."
      • I believe the reliability of the manuscript would be improved if the authors clarified how the linearity assumption was verified (e.g. by providing a scatter plot), as well as, in the case of the multiple linear model, the assumptions of homoscedasticity and the normal distribution of the residuals.
        We thank the reviewer for this comment. We have included the scatter plots for each of the bivariate and multivariate analyses in Supplementary Material 4. We have included the following modifications to further satisfy the linearity assumption: While the bivariate models of PRISMA-P or PRISMA scores with AMSTAR2 quality were statistically significant, the correlation was weak. This highlights that other factors, rather than completeness of reporting, are responsible. Visualization of the scatterplot diagrams implies a possible linear relationship and can be viewed in Supplementary File 4. However, given the smaller sample size, the clustering of PRISMA scores at the higher end with the AMSTAR2 quality assessment being overwhelmingly critically low may threaten the linearity assumptions. When combined in multiple linear regression, there was a lack of collinearity between AMSTAR2 quality category and the independent variables PRISMA (p<0.01) and PRISMA-P (p=0.10). A review of the scatterplot and the histogram of residuals (Supplementary File 4) suggests the linear regression model is reasonable.
    - Discrepancies in PRISMA and PRISMA-P reporting
      • Table 3: "Only 1" to "Only one"
        This has been updated in the text.
      • Table 3: Numbers add up to 51 (as stated in the PRISMA diagram), not 52 (as stated in the Methods section).
        This has been addressed in our previous updates.
      • Table 3: I believe the quality of the manuscript would be improved by including the percentages next to the raw numbers, as that would facilitate the reading and interpretation of the table.
        We thank you for this comment. We agree that this would make the table easier for readers to understand. All numbers in this table have been updated using the total number of included studies as 51.
  o Discussion
    - Lines 250-251: "Lack of correlation between PRISMA and PRISMA-P"
      • I believe the authors should not emphasize the "lack of correlation", as a p-value of greater than 0.05 does not definitively prove the lack of a correlation, merely the absence of sufficient evidence for said correlation. Instead, the authors should emphasize a tone of statistical insignificance rather than a definitive lack. They should also emphasize the weakness of the correlation (even if it were statistically significant) to better drive the point home.
        Thank you for your comment. We agree with this statement and that it is important to drive home that even if this were a type 2 error, it is possible that the correlation would remain weak. We have updated these sentences to read the following: "The statistically insignificant correlation between PRISMA-P and PRISMA would suggest that if a protocol is reported well, it does not mean that the manuscript will also be reported well. This result, along with the weakness of the correlation, implies that users of the literature must look at these aspects independently."
      • I believe exploring potential reasons for the lack of correlation would improve the quality of the paper. For instance, the authors state that the dates of publication of the PRISMA and PRISMA-P may cause the lack of said correlation. Could other factors, such as a difference in points covered by either checklist, account for the lack of correlation?
Another implication may be that authors simply do not invest equivalent amounts of time and/or effort into fulfilling both checklists.
        We thank you for this addition to the discussion. Given that there are many similarities between the PRISMA-P and PRISMA checklists (Table 3), our hypothesis is either that authors do not spend adequate time detailing their protocol, potentially leading to selective reporting bias, or that authors are unfamiliar with the expectations of a high-quality protocol. We have updated the first and second paragraphs of our discussion to read the following: The results of our study show that while there is good adherence in the reporting of SR/MA manuscripts, adherence to the reporting of protocols is poor. The statistically insignificant correlation between PRISMA-P and PRISMA would suggest that if a protocol is reported well, it does not mean that the manuscript will also be reported well. This result, along with the weakness of the correlation, implies that users of the literature must look at each of these aspects independently. Our results highlight the overall low quality of the included studies, as assessed by the AMSTAR-2 tool, emphasizing that we must be selective when using this literature as part of evidence-based practice. Individually, higher scores on the PRISMA-P and PRISMA checklists result in an increased quality categorization; however, this correlation is weak. The existence of this correlation is intuitive, as authors must provide adequate details in order to accurately assess the quality of the SR/MA. However, when adjusted, only the PRISMA score was associated with higher quality SR/MAs. This lack of collinearity indicates that other factors, separate from reporting, are responsible for increasing the quality of the evidence.
        Our results indicate that manuscripts were generally reported well according to the PRISMA scores.
Many journals require authors to submit the PRISMA checklist along with the manuscript to demonstrate the completeness of reporting. However, some discrepancies were present when reviewing the provided checklists and performing our own assessment. Insight into the lack of correlation between the PRISMA and PRISMA-P scores in our sample of studies may come down to two possibilities. The first is how new the PRISMA-P statement is, published in 2015, relative to the PRISMA statement, published in 2009 (and updated from QUOROM, originally published in 1999), and the resulting unfamiliarity with what constitutes a high-quality protocol. We observed that protocols are being more frequently provided and freely accessible over time, exemplifying the importance of transparent research. These results are consistent with the citation rates for each of these tools [65]. Given that there are several similar items between the PRISMA-P and PRISMA, the second possibility may be that authors are not adequately detailing their plans prior to embarking on their data collection and analyses. This would result in either deliberate or unintentional selective reporting bias.
    - Lines 258-259: Linguistic error: "The higher the score of PRISMA-P and PRISMA tools results in a higher quality categorization according to the AMSTAR-2 tool"
      Thank you. We have updated the first paragraph to hopefully be clearer.
    - Line 260: Should be changed to "The presence of a correlation is intuitively correct", as it is not intuitive that the correlation would be relatively weak as found by the authors (but the existence of a statistically significant correlation would be expected).
      We thank you for this comment and certainly agree with this.
We have updated the sentence to read: "The existence of this correlation is intuitive as authors must provide adequate details in order to accurately assess the quality of the SR/MA."
    - Starting from line 289: the point about limiting discrepancies to the mere presence/absence of details in the protocol/manuscript, rather than actually checking for differences in the details provided, is a major limitation. This is because many areas, such as statistical analysis, may be significantly modified between the protocol and the original study. The mere fact that they're present both in the manuscript and the protocol would not warrant a "No discrepancy" judgement. Nevertheless, I commend the authors for acknowledging this limitation.
      We thank the reviewer for their comment. We agree that this is important for readers to understand. While this was outside the intended scope of this project, we are conducting additional research on these discrepancies and the risk of selective reporting bias.
    - Lines 320-321: Regarding the point about Kappa coefficients, it would be more accurate to state that low Kappa ratings are common when responses to a binary variable fall mostly within a single category (e.g. 95% yes, 5% no); had the binary variable answers been more evenly distributed (e.g. 50% yes, 50% no), the Kappa coefficient would have been more reasonable. It is not simply that binary variables give a low Kappa coefficient, as implied in lines 320-321.
      We thank the reviewer for this comment. We have clarified the last sentence of the limitations to read the following: "Finally, despite high interrater agreement ranging from 80% to over 90%, lower Kappa values were found. Lower Kappa values are common for ratings which are binary and have an uneven distribution (e.g. 95% yes, 5% no).
The risk of agreement due to chance alone is not accurate in this scenario and should not be interpreted in isolation."

Reviewer 2

This is a systematic review in which the authors investigate the reporting and methodological quality of the different systematic reviews, meta-analyses, and protocols related to Type 2 DM. Although the study is well-written, well-structured, and scientifically sound, many other studies have investigated the same topic with similar results. Despite the good quality of the manuscript, it lacks the novelty to be published in PLOS ONE.

We would like to thank the reviewer for taking the time to review our manuscript and for their positive comments. We believe that this research will provide readers a better understanding of the interplay between reporting checklists and the quality of articles. There are some key disparities between AMSTAR and AMSTAR-2 which are highlighted here. Additionally, there are few studies which advance the understanding of the PRISMA extensions. We believe that our study promotes a better understanding of how PRISMA-P is used or not used, and where this science can help improve the quality of the literature being produced.

Reviewer 3

The authors performed this study to evaluate (1) the completeness of reporting of available protocols according to the PRISMA-P checklist, (2) the completeness of reporting of SR/MAs according to the PRISMA checklist, and (3) the quality of the SR/MAs according to the AMSTAR-2 tool. Although the overall approach to the review is proper, I have a few comments:

1. General - There is an improper use of abbreviations in both the Abstract and Manuscript file. Abbreviations should be defined at first mention and used consistently thereafter.

We thank the reviewer for their comments. We have reviewed the text in full, defined each abbreviation at its first use, and used that abbreviation consistently in the remainder of the manuscript. Changes have been made throughout.

2.
Abstract - Objectives are not clearly outlined; please modify.

We thank the reviewer for their comment. Due to the word limit, we have attempted to summarize the objectives of our work in as few words as possible. We have updated the objectives to read the following: "The objectives of this study were to describe the completeness of reporting and quality of SR/MAs, assess the correlations between PRISMA-P, PRISMA and AMSTAR-2 and identify reporting characteristics between similar items of PRISMA-P and PRISMA."

3. Methods - Search Strategy: using PubMed filters, such as Human, is not preferred, because some relevant articles may have been missed.

We thank the reviewer for this comment. On review, excluding Human as a search limit increased the number of articles by 2.

4. Results - Search Results are inconsistent with the numbers in Figure 1. For example, Lines 178-180: 52 studies met the inclusion criteria; however, it is "51" in Figure 1. Also, protocols were not available in 209 studies; however, it is 203 in Figure 1. Please recheck!

We thank the reviewer for taking a detailed look at our manuscript. We have fixed the typo in the number of included studies, which is now correct at 51 included articles. On detailed review of Figure 1 and our database, we also identified three other unintentional errors: the number of excluded articles due to no protocol being freely available is 210; the total number of full-text articles missed the addition of "available as abstract" (n=7); and protocols available by contacting authors should be 7, not 8, bringing the total number of excluded articles to 306. The entirety of Figure 1 has been updated, and the first paragraph of the results section and the abstract have been updated to reflect this change. We have double-checked these results against our database and the supplementary material provided. We can confidently say that these numbers are now accurate.

5.
Language: The entire manuscript needs extensive professional revision for grammatical errors and stylistic editing to improve the quality of English. For example:

- Lines 20-22: Consider using a comma instead of a semicolon.
o Thank you for your comment; we have updated the Objectives section of the abstract to be clearer.
- Lines 28, 33, etc.: When the sentence contains a series of three or more words, phrases, or clauses, consider inserting a comma to separate the elements.
o Thank you for your comment. We have updated the grammar of the Methods section of the abstract to be clearer.
- Line 55: replace “to produce” with “in producing”.
o This has been updated in the text.
- Line 57: replace “to evaluating” with “to evaluate”.
o This has been updated in the text.
- Lines 67-68: “for completing a SR/MA for transparent science and to minimize”, faulty parallelism.
o Thank you for your comment. We have changed the word “minimize” to “identify” to clarify the intended purpose of this sentence. It now reads as follows: “It is recommended that a pre-specified protocol should be used as the foundation for completing a SR/MA for transparent science and identify the risk of selection and reporting bias.”

Once again, we would like to take this opportunity to thank the reviewers for their detailed remarks. We hope that this manuscript now meets the high quality of science that is expected in PLOS ONE.

Sincerely,
Dr Daniel Rainkie, on behalf of the investigators

Submitted filename: 1 Response to Reviewers.docx

30 Sep 2020

PONE-D-20-16992R1
Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review
PLOS ONE

Dear Dr. Rainkie,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands.
Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Nov 14 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.
A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.
An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.
If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,
Hazel Bautista
Academic Editor
PLOS ONE

Journal Requirements:

Additional Editor Comments (if provided):
Your submission requires substantial editing for English grammar and usage. We ask that you please have the manuscript copyedited by either a colleague or a professional copy-editing service. While you may approach any qualified individual or any professional scientific editing service of your choice, PLOS has partnered with American Journal Experts (AJE) to provide discounted services to PLOS authors. AJE has extensive experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. If the PLOS editorial team finds any language issues in text that AJE has edited, AJE will re-edit the text for free. To take advantage of this special partnership, use the following link: https://www.aje.com/go/plos/.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #3: All comments have been addressed

**********

2.
Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g. participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #3: Yes

**********

6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors addressed all my comments, and this version of the manuscript is well-written, well-presented, and supports their conclusion. I recommend the acceptance of this manuscript.

Reviewer #3: The authors have addressed all my comments/suggestions. I found their responses quite satisfactory, and the revised version has been much improved.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No
Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, log in and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
29 Oct 2020

Dear Dr Negida, Peer Reviewers, and the editorial team,

On behalf of my research team, we would like to thank you all for the time and effort you have made to once again review our manuscript titled “Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review”. We are very pleased to have addressed all of your comments regarding the scientific merit of our manuscript in our first revision. We thank you for the opportunity to continue to improve our work with a thorough review of grammar and English usage.

We have spent considerable time and effort revising the manuscript for grammatical errors and wording. In our efforts, we have had two colleagues review this manuscript. The first was Dr Zachariah Nazar, an assistant professor with a PhD in pharmacy and a native English speaker. He identified many areas for improvement in our use of English throughout the manuscript. The second was Ms Kara Schultz, an English teacher with a Master of Arts in Language Studies and a native English speaker. She further identified stylistic and grammatical errors. The entire manuscript has been reviewed once again in full by the research team: one native English speaker from Canada and two members who completed their training in Qatar.

We hope that this revised manuscript will meet the high standards of PLOS ONE, and we thank you once again for the opportunity to improve our manuscript.

Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors addressed all my comments, and this version of the manuscript is well-written, well-presented, and supports their conclusion. I recommend the acceptance of this manuscript.
Reviewer #3: The authors have addressed all my comments/suggestions. I found their responses quite satisfactory and the revised version has been much improved.

• We would like to thank the reviewers for their time and effort in reviewing our first revised manuscript. We are pleased to hear that our revisions have met their high standards.
• An adjustment was made to Table 1, the characteristics of the included studies. It was identified that the line for protocols not available did not match the PRISMA flow diagram in Figure 1 (n=210). The numbers in Table 1 have been updated and accurately represent the studies that met the inclusion criteria but did not have a protocol available (n=210).

Sincerely,
Dr Daniel Rainkie, on behalf of the investigators

Submitted filename: 1 Response to Reviewers - Oct 29.docx

16 Nov 2020

Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review
PONE-D-20-16992R2

Dear Dr. Rainkie,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.
If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Ahmed Negida, MD
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

23 Nov 2020

PONE-D-20-16992R2
Reporting and Methodological Quality of Systematic Reviews and Meta-Analysis with Protocols in Diabetes Mellitus Type II: A Systematic Review

Dear Dr. Rainkie:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Ahmed Negida
Academic Editor
PLOS ONE
References (61 in total)

1.  Reporting and methodological qualities of published surgical meta-analyses.

Authors:  Han Zhang; Jun Han; Ying-Bo Zhu; Wan-Yee Lau; Myron E Schwartz; Guo-Qiang Xie; Shu-Yang Dai; Yi-Nan Shen; Meng-Chao Wu; Feng Shen; Tian Yang
Journal:  J Clin Epidemiol       Date:  2015-06-24       Impact factor: 6.437

2.  A safety and tolerability profile comparison between dipeptidyl peptidase-4 inhibitors and sulfonylureas in diabetic patients: A systematic review and meta-analysis.

Authors:  Daniela Farah; Graziella Malzoni Leme; Freddy Goldberg Eliaschewitz; Marcelo Cunio Machado Fonseca
Journal:  Diabetes Res Clin Pract       Date:  2019-01-30       Impact factor: 5.602

3.  Cardiovascular Safety of Dipeptidyl-Peptidase IV Inhibitors: A Meta-Analysis of Placebo-Controlled Randomized Trials.

Authors:  Islam Y Elgendy; Ahmed N Mahmoud; Amr F Barakat; Akram Y Elgendy; Marwan Saad; Ahmed Abuzaid; Siddarth A Wayangankar; Anthony A Bavry
Journal:  Am J Cardiovasc Drugs       Date:  2017-04       Impact factor: 3.571

4.  Predictors of response to glucagon-like peptide-1 receptor agonists: a meta-analysis and systematic review of randomized controlled trials.

Authors:  Matteo Monami; Ilaria Dicembrini; Besmir Nreu; Francesco Andreozzi; Giorgio Sesti; Edoardo Mannucci
Journal:  Acta Diabetol       Date:  2017-09-20       Impact factor: 4.280

5.  Cardiovascular outcomes with sodium-glucose cotransporter-2 inhibitors in patients with type II diabetes mellitus: A meta-analysis of placebo-controlled randomized trials.

Authors:  Marwan Saad; Ahmed N Mahmoud; Islam Y Elgendy; Ahmed Abuzaid; Amr F Barakat; Akram Y Elgendy; Mohammad Al-Ani; Amgad Mentias; Ramez Nairooz; Anthony A Bavry; Debabrata Mukherjee
Journal:  Int J Cardiol       Date:  2016-11-09       Impact factor: 4.164

6.  Semaglutide for type 2 diabetes mellitus: A systematic review and meta-analysis.

Authors:  Panagiotis Andreadis; Thomas Karagiannis; Konstantinos Malandris; Ioannis Avgerinos; Aris Liakos; Apostolos Manolopoulos; Eleni Bekiari; David R Matthews; Apostolos Tsapas
Journal:  Diabetes Obes Metab       Date:  2018-06-10       Impact factor: 6.577

7.  Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation.

Authors:  Larissa Shamseer; David Moher; Mike Clarke; Davina Ghersi; Alessandro Liberati; Mark Petticrew; Paul Shekelle; Lesley A Stewart
Journal:  BMJ       Date:  2015-01-02

8.  Effectiveness of sitagliptin compared to sulfonylureas for type 2 diabetes mellitus inadequately controlled on metformin: a systematic review and meta-analysis.

Authors:  Manuj Sharma; Nicholas Beckley; Irwin Nazareth; Irene Petersen
Journal:  BMJ Open       Date:  2017-10-30       Impact factor: 2.692

9.  Comparative effectiveness and safety of pharmacological and non-pharmacological interventions for insomnia: an overview of reviews.

Authors:  Patricia Rios; Roberta Cardoso; Deanna Morra; Vera Nincic; Zahra Goodarzi; Bechara Farah; Sharada Harricharan; Charles M Morin; Judith Leech; Sharon E Straus; Andrea C Tricco
Journal:  Syst Rev       Date:  2019-11-15

10.  Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study.

Authors:  Matthew J Page; Larissa Shamseer; Douglas G Altman; Jennifer Tetzlaff; Margaret Sampson; Andrea C Tricco; Ferrán Catalá-López; Lun Li; Emma K Reid; Rafael Sarkis-Onofre; David Moher
Journal:  PLoS Med       Date:  2016-05-24       Impact factor: 11.069

Cited by (1 in total)

1.  An Evaluation of Evidence Underpinning Management Recommendations in Tobacco Use Disorder Clinical Practice Guidelines.

Authors:  Sam Streck; Ryan McIntire; Lawrence Canale; J Michael Anderson; Micah Hartwell; Trevor Torgerson; Kelly Dunn; Matt Vassar
Journal:  Nicotine Tob Res       Date:  2022-04-28       Impact factor: 5.825

