
PRISMA-DTA for Abstracts: a new addition to the toolbox for test accuracy research.

Daniël A Korevaar1, Patrick M Bossuyt2, Matthew D F McInnes3,4, Jérémie F Cohen5,6.   


Year: 2021 | PMID: 33795016 | PMCID: PMC8017829 | DOI: 10.1186/s41512-021-00097-4
Journal: Diagn Progn Res | ISSN: 2397-7523



Introduction: reporting guidelines

Complete reporting of biomedical research is essential to ensure that readers can reproduce the study methodology, are informed about quality concerns such as potential sources of bias, and understand to which patients the results apply. There are ongoing concerns about the quality of study reports in many fields of biomedical research [1, 2]. Test accuracy research, which evaluates the ability of signs and symptoms, biomarkers, or medical tests to identify a target disease, is not exempt from this problem. Numerous evaluations have shown that reports of test accuracy studies, and of systematic reviews thereof, often lack crucial information, mostly about the methods applied and the results found [3]. This leads to research waste and threatens research integrity. Currently, several hundred reporting guidelines are available in the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network’s library, where researchers can identify which guideline is most suitable for their specific study design [4]. The first and best known is CONSORT (Consolidated Standards of Reporting Trials) for reports of clinical trials, first published in 1996 and updated several times since [5]. Reporting guidelines have since been developed for many kinds of study design, including TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) for prediction model studies [6], STARD (Standards for Reporting of Diagnostic Accuracy Studies) for test accuracy studies [7-9], and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) for systematic reviews [10]. These reporting guidelines consist of a list of essential items (sometimes referred to as a “checklist”) that should be reported to ensure optimal informativeness and transparency. Complete reporting of these items allows for easy identification of the study in online libraries and databases and for adequate assessment of study methodology, applicability, and results. Many reporting guidelines are accompanied by an “Explanation and Elaboration” document, which provides specific and detailed guidance on how to report each essential item, along with examples of good reporting practice. The recent publication of PRISMA-DTA for Abstracts and its accompanying Explanation and Elaboration document adds a new tool to the toolbox for reporting test accuracy research [11-13]. Here, we set out the steps that led to the development of PRISMA-DTA for Abstracts and the tools currently available to improve and assess the reporting and methodological quality of test accuracy research.

Reporting guidelines for abstracts

Earlier versions of reporting guidelines primarily provided guidance for full-text articles, but attention to the reporting of journal and conference abstracts has grown over the years. This started with CONSORT for Abstracts, published in 2008 as an extension of CONSORT, which provided guidance for reporting abstracts of clinical trials [14]. Since then, extensions of other reporting guidelines specifically focusing on the reporting of abstracts have been developed. Reporting guidelines for abstracts are currently available for at least six types of study design: clinical trials, observational studies, systematic reviews, test accuracy studies, overviews of systematic reviews, and multivariable prediction models [15, 16]. More are likely to follow. The abstract has become a fundamental part of a study report and may have a considerable impact on how the average reader interprets a study. Many users of the biomedical literature read only the abstract, either due to time constraints or because they do not have access to the full text. In addition, systematic reviewers and guideline developers rely on accurate information in the abstract because they often need to screen large numbers of abstracts for potential eligibility. Moreover, if a study is presented at a scientific conference, the abstract is often the only information available about the study, and many studies reported as conference abstracts are never published in full [17]. It has been shown repeatedly that reporting in abstracts, including in test accuracy research, is frequently incomplete, which could lead to misinterpretation and overinterpretation of the study findings [18-20]. This may be the case if crucial design elements resulting in potential sources of bias or generalizability concerns are not evident, or if the authors “spin” their findings, which has been shown to be more frequent in abstracts than in full texts [21-23].

STARD 2015 and STARD for Abstracts

Test accuracy studies evaluate the performance of medical tests by comparing their results with those of a reference standard, with performance expressed in estimates of diagnostic accuracy such as sensitivity and specificity. In 2003, the STARD reporting guideline was published for these studies, and an updated version was launched in 2015 [7-9]. STARD 2015 contains a list of 30 essential items. As with most reporting guidelines, some of these items are “general,” applying to any biomedical study involving patients. However, test accuracy studies have a number of design features and outcomes that are typical of this type of research, and research has shown that these studies are sensitive to several sources of bias and variation [24]. Items on STARD 2015 that are specific to test accuracy studies include, for example, instructions to report the intended use and clinical role of the index test (item 3), which reference standard was used and how it was applied (item 10b), the definition of test positivity cut-offs or result categories (item 12), whether test readers were masked (item 13), how missing test data were handled (item 16), and estimates of diagnostic accuracy with confidence intervals (item 24). Evaluations have shown that completeness of reporting improved in the years after the dissemination of STARD [25]. In response to empirical evidence of incomplete reporting in abstracts of test accuracy studies, STARD for Abstracts was subsequently published in 2017, providing specific guidance for writing journal and conference abstracts [18, 19, 26].
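As an illustration of the estimates that STARD 2015 item 24 asks authors to report, the following minimal Python sketch computes sensitivity and specificity with confidence intervals from a 2x2 cross-tabulation of index test results against the reference standard. The counts and the choice of the Wilson score interval are our assumptions for illustration, not taken from the article.

```python
# Minimal sketch: sensitivity and specificity with 95% Wilson score CIs
# from a hypothetical 2x2 table of index test vs. reference standard.
from math import sqrt

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion."""
    if total == 0:
        return (0.0, 1.0)
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - half, centre + half)

# Invented counts: true/false positives and negatives
tp, fp, fn, tn = 90, 15, 10, 85

sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
print(f"Sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"Specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```

The same 2x2 tabulation underlies other commonly reported accuracy measures, such as predictive values and likelihood ratios.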

PRISMA-DTA and PRISMA-DTA for Abstracts

The PRISMA guideline was first published in 2009 as a guiding tool for authors writing reports of systematic reviews [10]. In 2013, an extension for abstracts of systematic reviews followed: PRISMA for Abstracts [27]. Although PRISMA can be used as a basis for reporting systematic reviews of any type of research, it mainly focuses on reviews of randomized trials of interventions. With the number of systematic reviews of test accuracy studies growing rapidly in recent years, an extension explicitly focusing on this study design was deemed useful. As with primary test accuracy research, systematic reviews of test accuracy studies have typical design and results characteristics that are, to some extent, unique to this type of research [28]. This resulted in the PRISMA-DTA reporting guideline, published in 2018 [11, 12]. PRISMA-DTA also provides guidance for reporting abstracts (Table 1). A baseline assessment of adherence to PRISMA-DTA for Abstracts in 100 systematic reviews of test accuracy studies showed that, on average, only 5.5 of 11 guideline items had been reported. Crucial items such as the study characteristics used as criteria for eligibility (item 3, reported by 57%), the literature search dates (item 4, 42%), the methods of assessing risk of bias (item 5, 38%), the characteristics of included studies including the reference standard (item 6, 13%), and the study registration number (item 12, 5%) were often not reported in the abstract [29]; a sketch of how such an assessment can be tallied follows Table 1.
Table 1

PRISMA-DTA for Abstracts checklist

Section and topic | Item no. | Description
Title and purpose
 Title | 1 | Identify the report as a systematic review (+/− meta-analysis) of diagnostic test accuracy studies.
 Objectives | 2 | Indicate the research question, including components such as participants, index test, and target conditions.
Methods
 Eligibility criteria | 3 | Include study characteristics used as criteria for eligibility.
 Information sources | 4 | List the key databases searched and the search dates.
 Risk of bias and applicability | 5 | Indicate the methods of assessing risk of bias and applicability.
 Synthesis of results | A1 | Indicate the methods for the data synthesis.
Results
 Included studies | 6 | Indicate the number and type of included studies and the participants and relevant characteristics of the studies (including the reference standard).
 Synthesis of results | 7 | Include the results for the analysis of diagnostic accuracy, preferably indicating the number of studies and participants. Describe test accuracy including variability; if meta-analysis was done, include summary results and confidence intervals.
Discussion
 Strengths and limitations | 9 | Provide a brief summary of the strengths and limitations of the evidence.
 Interpretation | 10 | Provide a general interpretation of the results and the important implications.
Other
 Funding | 11 | Indicate the primary source of funding for the review.
 Registration | 12 | Provide the registration number and the registry name.

The PRISMA-DTA for Abstracts list is also available in the EQUATOR network’s library (https://www.equator-network.org/)
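To make the adherence assessment described above concrete, the following sketch (our illustration, not the instrument used in the cited study [29]) scores hypothetical abstracts against the Table 1 item identifiers and tallies mean completeness and per-item reporting rates.

```python
# Minimal sketch: tallying PRISMA-DTA for Abstracts adherence.
# Item identifiers follow Table 1; the per-abstract data are invented.
CHECKLIST = ["1", "2", "3", "4", "5", "A1", "6", "7", "9", "10", "11", "12"]

# Which checklist items each hypothetical abstract reports
abstracts = [
    {"1", "2", "4", "6", "7", "10"},
    {"1", "2", "3", "7", "9", "10", "11"},
    {"1", "2", "7", "10"},
]

checklist_set = set(CHECKLIST)
mean_reported = sum(len(a & checklist_set) for a in abstracts) / len(abstracts)
item_rates = {i: sum(i in a for a in abstracts) / len(abstracts) for i in CHECKLIST}

print(f"Mean items reported per abstract: {mean_reported:.1f} of {len(CHECKLIST)}")
print(f"Item 12 (registration) reported by {item_rates['12']:.0%} of abstracts")
```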

The original length of the PRISMA for Abstracts guideline was maintained in PRISMA-DTA for Abstracts: it also consists of 12 items. Some items apply to any type of systematic review and were left unchanged, such as the key databases searched and the search dates (item 4), the number and type of included studies (item 6), and the primary source of funding (item 11). Ultimately, one item (item 8, calling for a description of effect sizes) was removed because it does not apply to test accuracy studies, one item (item A1, calling for reporting of the statistical methods used for data synthesis) was added, and the phrasing of six additional items was updated to reflect language and methods more typical of test accuracy research. The PRISMA-DTA group has now published an extensive Explanation and Elaboration document, with detailed guidance and examples on how to report each item in an abstract [13].
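Item A1 asks reviewers to name their data synthesis method in the abstract. As one hedged illustration of such a method, the sketch below pools logit-transformed sensitivities with a DerSimonian-Laird random-effects model; the study counts are invented, and real DTA meta-analyses more often use bivariate or hierarchical SROC models that pool sensitivity and specificity jointly.

```python
# Minimal sketch: random-effects pooling of logit sensitivities
# (DerSimonian-Laird) from hypothetical per-study (TP, FN) counts.
from math import log, exp, sqrt

studies = [(45, 5), (80, 20), (30, 10), (60, 6)]  # (true positives, false negatives)

logits, variances = [], []
for tp, fn in studies:
    p = tp / (tp + fn)
    logits.append(log(p / (1 - p)))
    variances.append(1 / tp + 1 / fn)  # delta-method variance of the logit

# Fixed-effect (inverse-variance) estimate and heterogeneity statistic Q
w = [1 / v for v in variances]
fixed = sum(wi * y for wi, y in zip(w, logits)) / sum(w)
q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, logits))

# DerSimonian-Laird between-study variance, then random-effects pooling
df = len(studies) - 1
tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * y for wi, y in zip(w_re, logits)) / sum(w_re)
se = sqrt(1 / sum(w_re))

def inv_logit(x: float) -> float:
    return 1 / (1 + exp(-x))

print(f"Pooled sensitivity {inv_logit(pooled):.2f} "
      f"(95% CI {inv_logit(pooled - 1.96 * se):.2f}-{inv_logit(pooled + 1.96 * se):.2f})")
```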

Other initiatives to improve test accuracy research

With the publication of PRISMA-DTA for Abstracts, the “toolbox” available to the field of test accuracy research expands further. The abovementioned reporting guidelines can be used for primary test accuracy studies and systematic reviews thereof (Table 2). These may be evaluations of diagnostic tests, but the guidelines are equally relevant for medical tests used for screening, staging, prognosis, and monitoring. In addition to these reporting guidelines, multiple other tools have been developed in recent years to improve the quality of this type of research.
Table 2

Available reporting guidelines for diagnostic test accuracy research

Report type | Primary diagnostic test accuracy studies | Systematic reviews of diagnostic test accuracy studies
Full-text articles | STARD 2015 [7, 8] | PRISMA-DTA [11, 12]
Journal and conference abstracts | STARD for Abstracts [26] | PRISMA-DTA for Abstracts [11, 13]
Prospective registration | STARD for Registration [33] | Not yet available
Prospective registration of biomedical studies is increasingly encouraged to reduce unnecessary duplication of research efforts, increase transparency, and prevent selective reporting [30]. Whereas registration of clinical trials of interventions has become commonplace and a requirement of many institutions and journals, researchers evaluating medical tests register their study protocols less often [31, 32]. To improve this, STARD for Registration was developed, providing guidance for informative registration of primary test accuracy studies in trial registries such as clinicaltrials.gov [33]. Systematic reviews should also be prospectively registered before data extraction starts, which can be done in PROSPERO [34]; alternatively, full protocols and other research materials can be uploaded to online platforms such as the Open Science Framework (available at https://osf.io/). For systematic reviewers, QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) provides a tool for assessing potential sources of bias and applicability concerns within four domains of primary test accuracy studies [35]. These domains were previously identified as the main sources of quality concerns in test accuracy studies and cover (1) patient selection, (2) the index test under evaluation, (3) the reference standard used, and (4) the flow of patients and the timing of testing. The Cochrane Handbook for Diagnostic Test Accuracy Reviews provides specific guidance for each step in the review process, such as developing criteria for including studies, searching for studies, and assessing methodological quality (by applying QUADAS-2) [36]. Further along in the process of adopting medical tests in clinical practice, developers of clinical guidelines on diagnostic tests and strategies may need to grade the quality of evidence and the strength of recommendations, for example by using GRADE [37].
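To make the structure of a QUADAS-2 assessment concrete, the sketch below records risk-of-bias judgments for the four domains described above, with applicability judgments for the first three domains (flow and timing carries no applicability rating in the tool). It is our illustration, not an official QUADAS-2 implementation; the study identifier and judgments are hypothetical.

```python
# Minimal sketch: recording QUADAS-2 judgments for one primary study.
from dataclasses import dataclass, field

DOMAINS = ("patient selection", "index test", "reference standard", "flow and timing")

@dataclass
class Quadas2Assessment:
    study_id: str
    risk_of_bias: dict[str, str] = field(default_factory=dict)   # low / high / unclear
    applicability: dict[str, str] = field(default_factory=dict)  # first three domains only

    def summary(self) -> str:
        flagged = [d for d, j in self.risk_of_bias.items() if j != "low"]
        label = ", ".join(flagged) if flagged else "no domains"
        return f"{self.study_id}: high/unclear risk of bias in {label}"

a = Quadas2Assessment(
    study_id="Smith 2020",  # hypothetical study
    risk_of_bias={d: "low" for d in DOMAINS} | {"flow and timing": "high"},
    applicability={d: "low" for d in DOMAINS[:3]},
)
print(a.summary())
```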

Discussion: Diagnostic and Prognostic Research and reporting guidelines

Incomplete reporting is a significant and avoidable source of research waste [1, 2]. To improve this situation for test accuracy research, reporting guidelines such as STARD 2015, STARD for Abstracts, PRISMA-DTA, and PRISMA-DTA for Abstracts are available. These guidelines are particularly relevant for Diagnostic and Prognostic Research, because the journal aims to publish high-quality diagnostic research addressing studies of medical tests and markers, including systematic reviews thereof. Diagnostic and Prognostic Research advocates complete and transparent reporting of research and explicitly highlights in its submission guidelines that “using these guidelines to write the report, completing the checklist, and constructing a flow diagram are likely to optimize the quality of reporting and make the peer review process more efficient.” Authors are therefore required to upload a completed checklist from the applicable reporting guideline during the submission process, and editors are instructed to ensure that this is done. There is evidence that such editorial policies improve adherence to reporting guidelines [38], and we hence encourage journals to consider implementing them if not already in place.
References (10 of 37 shown)

1. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA. 2010.

2. Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PMM, Korevaar DA, Graham ID, Ravaud P, Boutron I. Increasing value and reducing waste in biomedical research: who's listening? Lancet. 2015.

3. McGrath TA, McInnes MDF, van Es N, Leeflang MMG, Korevaar DA, Bossuyt PMM. Overinterpretation of research findings: evidence of "spin" in systematic reviews of diagnostic accuracy studies. Clin Chem. 2017.

4. Ochodo EA, de Haan MC, Reitsma JB, Hooft L, Bossuyt PM, Leeflang MMG. Overinterpretation and misreporting of diagnostic accuracy studies: evidence of "spin". Radiology. 2013.

5. Cohen JF, Korevaar DA, Boutron I, Gatsonis CA, Hopewell S, McInnes MDF, Moher D, von Elm E, Bossuyt PM. Reporting guidelines for journal and conference abstracts. J Clin Epidemiol. 2020.

6. Altman DG, Simera I. A history of the evolution of guidelines for reporting medical research: the long road to the EQUATOR Network. J R Soc Med. 2016.

7. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, Michie S, Moher D, Wager E. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014.

8. Whiting PF, Rutjes AWS, Westwood ME, Mallett S. A systematic review classifies sources of bias and variation in diagnostic test accuracy studies. J Clin Epidemiol. 2013.

9. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009.

10. Cohen JF, Deeks JJ, Hooft L, Salameh JP, Korevaar DA, Gatsonis C, Hopewell S, Hunt HA, Hyde CJ, Leeflang MM, Macaskill P, McGrath TA, Moher D, Reitsma JB, Rutjes AWS, Takwoingi Y, Tonelli M, Whiting P, Willis BH, Thombs B, Bossuyt PM, McInnes MDF. Preferred reporting items for journal and conference abstracts of systematic reviews and meta-analyses of diagnostic test accuracy studies (PRISMA-DTA for Abstracts): checklist, explanation, and elaboration. BMJ. 2021.
