Literature DB >> 32641085

Outcome reporting recommendations for clinical trial protocols and reports: a scoping review.

Nancy J Butcher1, Emma J Mew2, Andrea Monsour2, An-Wen Chan3, David Moher4,5, Martin Offringa2,6,7.   

Abstract

BACKGROUND: Clinicians, patients, and policy-makers rely on published evidence from clinical trials to help inform decision-making. A lack of complete and transparent reporting of the investigated trial outcomes limits reproducibility of results and knowledge synthesis efforts, and contributes to outcome switching and other reporting biases. Outcome-specific extensions for the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT-Outcomes) and Consolidated Standards of Reporting Trials (CONSORT-Outcomes) reporting guidelines are under development to facilitate harmonized reporting of outcomes in trial protocols and reports. The aim of this review was to identify and synthesize existing guidance for trial outcome reporting to inform extension development.
METHODS: We searched for documents published in the last 10 years that provided guidance on trial outcome reporting using: an electronic bibliographic database search (MEDLINE and the Cochrane Methodology Register); a grey literature search; and solicitation of colleagues using a snowballing approach. Two reviewers completed title and abstract screening, full-text screening, and data charting after training. Extracted trial outcome reporting guidance was compared with candidate reporting items to support, refute, or refine the items and to assess the need for the development of additional items.
RESULTS: In total, 1758 trial outcome reporting recommendations were identified within 244 eligible documents. The majority of documents were published by academic journals (72%). Comparison of each recommendation with the initial list of 70 candidate items led to the development of an additional 62 items, producing 132 candidate items. The items encompassed outcome selection, definition, measurement, analysis, interpretation, and reporting of modifications between trial documents. The total number of documents supporting each candidate item ranged widely (median 5, range 0-84 documents per item), illustrating heterogeneity in the recommendations currently available for outcome reporting across a large and diverse sample of sources.
CONCLUSIONS: Outcome reporting guidance for clinical trial protocols and reports lacks consistency and is spread across a large number of sources that may be challenging to access and implement in practice. Evidence and consensus-based guidance, currently in development (SPIRIT-Outcomes and CONSORT-Outcomes), may help authors adequately describe trial outcomes in protocols and reports transparently and completely to help reduce avoidable research waste.

Entities:  

Keywords:  CONSORT; Endpoint; Outcome; Reporting guideline; SPIRIT; Trial; Trial protocols

Mesh:

Year:  2020        PMID: 32641085      PMCID: PMC7341657          DOI: 10.1186/s13063-020-04440-w

Source DB:  PubMed          Journal:  Trials        ISSN: 1745-6215            Impact factor:   2.279


Background

Clinical trials, when appropriately designed, conducted, and reported, are a gold-standard study design for generating primary evidence on treatment efficacy, effectiveness, and safety. In clinical trials, outcomes (sometimes referred to as endpoints or outcome measures) are measured to examine the effect of the intervention on trial participants. The findings of the trial thus rest critically on the trial outcomes. As data accumulate across different clinical trials for specific interventions and outcomes, the outcome data published in clinical trial reports are ideally synthesized through systematic reviews and meta-analyses into a single estimate of effect that can inform clinical and policy-making decisions. This evidence generation and knowledge synthesis process enables the practice of evidence-based medicine and is facilitated by the complete and prospective definition of trial outcomes. Appropriate outcome selection and description are important for obtaining ethical and regulatory approvals, for ensuring the trial team conducts the trial consistently, and, ultimately, for providing transparency of methods and facilitating the interpretation of the trial results. Despite the importance of trial outcomes, it is well established in the biomedical literature that key information about how trial outcomes were selected, defined, measured, and analysed is often missing or poorly reported across trial documents and information sources [1-8]. A lack of complete and transparent reporting of trial outcomes limits critical appraisal, reproducibility of results, and knowledge synthesis efforts, and enables the introduction of bias into the published literature by leaving room for outcome switching and selective reporting. There is evidence that up to 60% of trials change, omit, or introduce a new primary outcome between the planned trial protocol and the published trial report [3, 9–12]. 
Secondary outcomes have been less studied, but may be even more prone to bias and inadequate reporting [12, 13]. Deficient outcome reporting, either through selective reporting of the measured outcomes or incompletely pre-specifying and defining essential components of the reported outcome, facilitates undetectable data “cherry-picking” in the primary reports and has the potential to impact the conclusions of systematic reviews and meta-analyses [14, 15]. Although there is an established need among the scientific community to improve the reporting of trial outcomes [5, 16–19], it remains unknown what actually constitutes useful, complete reporting of trial outcomes to knowledge users. The well-established Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) [20] and Consolidated Standards of Reporting Trials (CONSORT) [21] reporting guidelines provide guidance on what to include in clinical trial protocols and reports, respectively. Yet although SPIRIT and CONSORT provide general guidance on how to report trial outcomes [20, 21], and have been extended to cover patient-reported outcomes [22, 23] and harms [24], there remains no standard evidence-based guidance that is applicable to all outcome types, disease areas, and populations for trial protocols and published reports. An international group of experts and knowledge users [25] has therefore convened to develop outcome-specific reporting extensions for the SPIRIT and CONSORT reporting guidelines. Originally referred to as the SPIRIT-InsPECT and CONSORT-InsPECT (Instrument for reporting Planned Endpoints in Clinical Trials) reporting extensions, the final products will be referred to as the SPIRIT-Outcomes and CONSORT-Outcomes extensions in response to stakeholder and end-user input. 
These extensions will be complementary to the work of the Core Outcome Measures in Effectiveness Trials (COMET) Initiative and core outcome sets; core outcome sets standardize which outcomes should be measured for particular health conditions, whereas SPIRIT-Outcomes and CONSORT-Outcomes will provide standard harmonized guidance on how outcomes should be reported [26]. The SPIRIT-Outcomes and CONSORT-Outcomes extensions are being developed in accordance with the methodological framework created by members of the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network for reporting guideline development, including a literature review to identify and synthesize existing reporting guidance [27]. The protocol to develop these guidelines has been published previously [28]. An initial list of 70 candidate trial outcome reporting items was first developed through an environmental scan of academic and regulatory publications, and consultations with methodologists and knowledge users including clinicians, guideline developers, and trialists [28-30]. These 70 items were organized into ten descriptive categories: What: description of the outcome; Why: rationale for selecting the outcome; How: the way the outcome is measured; Who: source of information of the outcome; Where: assessment location and setting of the outcome; When: timing of measurement of the outcome; Outcome data management and analyses; Missing outcome data; Interpretation; and Modifications. The purpose of this scoping review was to identify and synthesize existing guidance for outcome reporting in clinical trials and protocols to inform the development of the SPIRIT-Outcomes and CONSORT-Outcomes extensions. The results of this scoping review were presented during the web-based Delphi study and the in-person consensus meeting. 
A scoping review approach, which is a form of knowledge synthesis used to map concepts, sources, and evidence underpinning a research area [31, 32], was selected given the purpose of this review. The specific research questions that this review sought to address were: what published guidance exists on the reporting of outcomes for clinical trial protocols and reports; does the identified guidance support or refute each candidate item as a reporting item for clinical trial protocols or reports; and does any identified guidance support the creation of additional candidate items or the refinement of existing candidate items?

Methods

This review was prepared in accordance with the PRISMA extension for Scoping Reviews reporting guideline (see Additional File 1: eTable 1) [33]. The protocol for this review has been published elsewhere [30, 34]. This scoping review did not require ethics approval from our institution.

Eligibility criteria

Documents that provided guidance (advice or formal recommendation) or a checklist describing outcome-specific information that should be included in a clinical trial protocol or report were eligible if published in the last 10 years in a language that our team could read (English, French, or Dutch). Dates were restricted to the last 10 years from the time of review commencement to focus the review to inform the update and extension of existing guidance provided by CONSORT (published in 2010) and SPIRIT (published in 2013) on outcome reporting and to increase feasibility related to the large number of documents identified in our preliminary searches. There were no restrictions on population, trial design, or outcome type. We only included documents that provided explicit guidance (“stated clearly and in detail, leaving no room for confusion or doubt” [35], such that the guidance must specifically state that the information should be included in a clinical trial protocol or report) [36]. An example of included guidance follows from the CONSORT-PRO extension: “Evidence of patient-reported outcome instrument validity and reliability should be provided or cited, if available” [36].

Information sources

Documents were searched for using: an electronic bibliographic database search (MEDLINE and the Cochrane Methodology Register; see eTable 2 in Additional file 2 for search strategy), developed in close consultation with an experienced research librarian, and searched from inception to 19 March 2018; a grey literature search; solicitation of colleagues; and reference list searching. Eligible document types included review articles, reporting guidelines, recommendation/guidance documents, commentary/opinion pieces/letters, regulatory documents, government reports, ethics review board documents, websites, funder documents, and other trial-related documents such as trial protocol templates. The grey literature search methods included a systematic search of Google (www.google.com) using 40 combinations of key words (e.g., “trial outcome guidance”, “trial protocol outcome recommendations”; see eTable 3 in Additional file 3 for a complete list). The first five pages of the search results for each key term were reviewed (10 hits per page, leading to 2000 Google hits screened in total). Documents were also searched for using a targeted website search of 41 relevant websites (e.g., the EQUATOR Network, Health Canada, the Agency for Healthcare Research and Quality; see eTable 3 in Additional file 3) identified by the review team, solicitation of colleagues, and use of a tool for searching health-related grey literature [37]. Website searching included screening of the homepage and relevant subpages of each website. When applicable, the term “outcome” and its synonyms were searched for using the internal search feature of the website. We searched online for forms and guidelines from an international sample of ethics review boards, as ethics boards are responsible for evaluating proposed trials including the selection, measurement, and analyses of trial outcomes. 
We restricted the ethics review board search to five major research universities and five major research hospitals (considered likely to be experienced in reviewing and providing guidance on clinical trials) in four English-speaking countries: United States, United Kingdom, Canada, and Australia (see eTable 3 in Additional file 3). This approach helped to limit the search to a manageable sample of international ethics review board guidance. To ensure diverse geographic representation of documents from ethics review boards, as some countries yielded substantially more documents than others, documents were randomly selected from each of the four selected countries (i.e., 25% of documents were from each country), amounting to approximately half of the number of the total ethics review board documents initially identified. Additional documents and sources from experts were obtained by contacting all founding members of the “InsPECT Group” [25]. This included 18 trialists, methodologists, knowledge synthesis experts, clinicians, and reporting guideline developers from around the world [28]. We asked each expert to identify documents, relevant websites, ethics review boards, and additional experts who may have further information. All recommended experts were contacted with the same request. Given the comprehensiveness of our search strategies and the large number of documents identified as eligible for inclusion, we performed reference list searching only for included documents identified via Google searching, as this document set encompassed the diversity of sources and document types eligible for inclusion (e.g., academic publications, websites).

Selection of sources of evidence

A trained team member (L. Saeed) performed the final electronic bibliographic database searches and exported the search results into EndNote version X8 [38] to remove all duplicates. All other data sources were first de-duplicated within each source manually, and then de-duplicated between already screened sources, leaving only new documents to move forward for “charting” (in scoping reviews, the data extraction process is referred to as charting the results) [32, 33].
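The de-duplication logic described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' actual tooling (they used EndNote X8 plus manual checks), and the title-normalization key is an assumption chosen for the example:

```python
# Sketch of the review's de-duplication workflow: drop duplicates within an
# incoming source and against already-screened sources, so only new
# documents move forward to charting. Hypothetical matching key.
def normalize(title: str) -> str:
    """Crude matching key: lowercase alphanumeric characters of the title."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def new_documents(screened: list[str], incoming: list[str]) -> list[str]:
    """Return incoming titles not already present among screened sources."""
    seen = {normalize(t) for t in screened}
    unique = []
    for title in incoming:
        key = normalize(title)
        if key not in seen:  # drops within- and between-source duplicates
            seen.add(key)
            unique.append(title)
    return unique

screened = ["CONSORT 2010 Statement"]
incoming = ["consort 2010 statement!", "SPIRIT 2013 Statement",
            "SPIRIT 2013 Statement"]
print(new_documents(screened, incoming))  # ['SPIRIT 2013 Statement']
```

In practice, reference managers match on richer metadata (DOI, authors, year) than a normalized title, but the sequencing (within-source first, then between sources) is the same.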

Initial screening

All screening and data charting forms are available on the Open Science Framework [39]. Titles and abstracts of documents retrieved from the electronic bibliographic database search were screened for potential eligibility by one of two reviewers with graduate-level epidemiological training (AM, EJM) before full texts were thoroughly examined. The two reviewers assessed 90 citations as a practice set and reviewed the results with a senior team member (NJB). The reviewers then screened a randomly selected training set of 100 documents from the electronic bibliographic database search and achieved 93% observed agreement and 71% chance agreement, yielding a Cohen’s κ score of 0.76 (substantial agreement [40]). The remaining search results were then divided and each independently screened by one of the two reviewers, with periodic verification checks performed by NJB. One reviewer (AM) screened and charted all website search results. Documents gathered from the ethics review board searches (by L. Saeed) and from the solicitation of experts moved directly to full-text review and charting by EJM.
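The reported training-set statistics follow from the standard two-rater formula κ = (p_o − p_e)/(1 − p_e). A minimal check (in Python, not part of the original review):

```python
def cohens_kappa(p_observed: float, p_chance: float) -> float:
    """Cohen's kappa: agreement beyond chance, scaled by its maximum."""
    return (p_observed - p_chance) / (1 - p_chance)

# Training set of 100 documents: 93% observed, 71% chance agreement.
print(round(cohens_kappa(0.93, 0.71), 2))  # 0.76, substantial agreement
```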

Full-text screening

The reviewers (AM, EJM) performed full-text screening for eligibility using a similar process as for title and abstract screening. A sample of 35 documents identified from title and abstract screening were assessed for eligibility. The observed agreement rate was 94% (33 of 35 documents). The included documents (n = 14) were charted in duplicate, and the reviewers examined their charting results and resolved any discrepancies through discussion. Following review of the agreement results by a senior team member (NJB), the remaining search results were divided and independently screened and charted by one of the two reviewers, with periodic verification checks performed by NJB. Full-text screening and reasons for exclusion were logged using a standardized form [39] developed using Research Electronic Data Capture (REDCap) software [41].

Data charting process

The included documents proceeded to undergo data charting using a standardized charting form [39] developed using REDCap software [41]. Prior to data charting, 11 documents were piloted through the full-text screening form and the charting form by EJM and AM (AM was not involved in developing the forms), and the forms were modified as necessary following review of the form testing with NJB and MO. The reviewers (AM, EJM) charted data that included information such as characteristics of the document (e.g., publication type, article title, last name of first author, publication year, publisher) as well as the scope and characteristics for each of the specific recommendations extracted from each included document (e.g., whether the recommendation was specific to clinical trial protocols or reports, or specific to type of outcomes, trial design, or population). Given the nature of this review, a risk of bias assessment or formal quality appraisal of included documents was not performed. To help gauge the credibility of recommendations gathered, we categorized the type(s) of recommendation as made with supporting empirical evidence provided within the source document (e.g., based on findings from a literature review or expert consensus methods) and/or citation(s) provided to other documents (e.g., citation provided to an existing reporting guideline), or neither.

Synthesis of results

Recommendations identified within the included documents were compared with the candidate outcome reporting items to support, refute, or refine item content and to assess the need for the development of additional candidate items. To achieve these aims, the reviewers (AM and EJM) mapped each recommendation gathered to existing candidate items or one of the ten descriptive categories, supported by full-text extraction captured in free text boxes within the charting form. Recommendations that did not fall within the scope of any existing candidate items or categories were captured in free text boxes. Eight in-person meetings were held by members of the “InsPECT Operations Team” [25, 28] over a 2-month period to review these recommendations and to develop any new candidate reporting items or refine existing candidate items to better reflect the concepts/wording in the literature. Attendance was required by the review lead author (NJB), the senior author (MO), and at least three other members of the Operations Team (EJM, AM, L. Saeed, A. Chee-a-tow). After completion of data collection, the mapping results of recommendations to each candidate item were reviewed by NJB in their entirety and finalized by consensus with the two reviewers (EJM, AM). The wording of the candidate items was then clarified as necessary and finalized by the Operations Team. Data analysis included descriptive quantitative measures (counts and frequencies) to characterize the guidance document characteristics and their recommendations.
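The mapping step described above boils down to counting the distinct documents that support each candidate item, which yields the per-item counts, median, and range reported in the Results. A hedged illustration (the item names and document identifiers here are hypothetical; the real charting data are on the Open Science Framework):

```python
from collections import defaultdict
from statistics import median

# Hypothetical (document_id, candidate_item) pairs produced during charting;
# each pair records that a document contains a recommendation supporting an item.
mappings = [
    (1, "State the outcome"),
    (2, "State the outcome"),
    (3, "State the outcome"),
    (1, "Specify the outcome domain"),
    (2, "Specify the outcome domain"),
    (2, "State the outcome"),  # repeat within a document counts once
]

support = defaultdict(set)
for doc_id, item in mappings:
    support[item].add(doc_id)  # distinct supporting documents per item

counts = {item: len(docs) for item, docs in support.items()}
print(counts)
# {'State the outcome': 3, 'Specify the outcome domain': 2}
print(median(counts.values()), min(counts.values()), max(counts.values()))
# 2.5 2 3
```

Using a set per item means a document that makes several recommendations mapping to the same item is counted only once for that item.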

Results

The full dataset is available on the Open Science Framework [39]. The electronic database literature search yielded 2769 unique references, of which 153 documents were found to be eligible and included (Fig. 1). The Google searches (2000 hits assessed in total) led to the inclusion of 62 documents. An additional seven documents were identified and included from the targeted website search (41 websites assessed). There were five documents included from 12 experts (33 were contacted in total), 15 documents from 40 ethics review board websites, and two from reference list screening. In total, 244 unique documents were included (Fig. 1).
Fig. 1

Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram for the scoping review of documents containing trial outcome reporting recommendations. aThese exclusion criteria counts are not mutually exclusive

The majority of the included documents were published by academic journals (72%; Table 1). Other publishers included hospitals, universities, and research organizations, as well as governments and non-governmental organizations. All but one document were published in English. The types of documents included varied but were primarily literature reviews (30%), recommendation/guidance documents (24%), reporting guidelines (14%), or commentary/opinion pieces/letters (12%; Table 1).
Table 1

General characteristics of the included documents (n = 244)

Values are N (%).
Document publisher
  Academic journal: 176 (72.1)
  Hospital/university/research organization: 31 (12.7)
  Government: 21 (8.6)
  Non-governmental organization: 16 (6.6)
Document type
  Literature review: 74 (30.3)
    Assessment of reporting completeness^a: 39 (16.0)
    Systematic/scoping review: 28 (11.5)
    Other type of review: 7 (2.9)
  Recommendation/guidance document: 59 (24.2)
  Commentary/opinion piece/letter: 30 (12.3)
  Reporting guideline: 34 (13.9)
  Trial protocol template: 16 (6.6)
  Research ethics board document: 11 (4.5)
  Regulatory document: 6 (2.5)
  Website: 5 (2.0)
  Government report: 3 (1.2)
  Other^b: 6 (2.5)
Publication year
  2008–2010: 35 (14.3)
  2011–2013: 69 (28.2)
  2014–2016: 70 (28.7)
  2017–2018^c: 56 (23.0)
  Not reported: 14 (5.7)
Language
  English: 243 (99.6)
  French: 1 (0.4)
  Dutch: 0 (0)

^a Includes any type of literature review that aimed to assess the completeness of reporting in the included articles, from either an original review or a secondary analysis of documents included in a prior review.

^b Includes reporting guideline development protocols (n = 2) and a reporting guideline pilot study, checklist for peer reviewers, statistical analysis plan template, and article evaluating an outcome measurement instrument (n = 1 each).

^c Until time of search (19 March 2018).

Of the included documents, 45 (18%) had a primary focus on trial outcome reporting (e.g., the SPIRIT-PRO reporting guideline [22], a journal commentary on selective outcome reporting [42]). Approximately 40% of the documents focused on specific age group(s) and/or clinical area(s). Of the 18 documents with a focus on a specific age group, most (n = 12) focused on paediatric populations (Table 2). The clinical areas ranged widely (Table 2), with the highest numbers of documents in oncology (n = 15), mental health (n = 10), and oral and gastroenterology (n = 10). Approximately one-third of all included documents (n = 85) came from such discipline-specific documents (Table 2).
Table 2

Subject focus of the included documents (n = 244)

Values are N (%).
Scope
  Primary focus on trial outcome reporting: 45 (18.4)
  Primary focus not on trial outcome reporting: 199 (81.6)
Demographic focus^a
  None stated: 148 (60.7)
  Age focus explicitly stated^a: 18 (7.4)
    Paediatric (birth–18 years old): 12 (4.9)
      Neonates and/or infants: 10 (4.1)
      Children: 11 (4.5)
      Adolescents: 11 (4.5)
    Adulthood (19–65 years old): 5 (2.0)
    Geriatric (> 65 years old): 2 (0.8)
  Clinical area focus explicitly stated^a: 85 (34.8)
    Oncology: 15 (6.1)
    Mental health: 10 (4.1)
    Oral and gastroenterology: 10 (4.1)
    Obstetrics and gynaecology: 8 (3.3)
    Rheumatology: 8 (3.3)
    Surgery: 6 (2.5)
    Neurology: 5 (2.0)
    Pain management: 4 (1.6)
    Haematology: 3 (1.2)
    Respiratory: 3 (1.2)
    Alternative medicine: 2 (0.8)
    Critical care: 2 (0.8)
    Dermatology: 2 (0.8)
    Diabetes: 2 (0.8)
    Infectious diseases: 2 (0.8)
    Nutrition: 2 (0.8)
    Other^b: 10 (4.1)

^a Not mutually exclusive.

^b Anaesthesiology, cardiovascular and metabolism, endocrinology, nephrology, obesity, ophthalmology, palliative care, physical rehabilitation, radiology, and urology (n = 1 each).

There were 1758 trial outcome reporting recommendations identified in total within the 244 eligible documents. The median number of unique outcome reporting recommendations per guidance document was 4 (range 1–46). Assessment of the focus of each recommendation (Table 3) showed that most recommendations were specifically focused on clinical trial protocols (43%) and/or reports (44%). Others focused on outcome reporting in trial documents generally, ethics board submissions, and clinical trial proposals in grant applications (Table 3). Only 15% of recommendations focused on a specific trial phase and/or design (Table 3), although nearly half (n = 836, 47%) focused on a specific outcome classification (e.g., primary, secondary) or type (e.g., patient-reported outcomes or harms; Table 3 and Additional file 4: eTable 4).
Table 3

Focus of the outcome reporting recommendations (n = 1758) identified within 244 eligible documents

Values are N (%).
Trial document type^a
  Trial reports: 781 (44.4)
  Trial protocols: 758 (43.1)
  General trial reporting: 229 (13.0)
  Ethics board documents for trial submissions: 21 (1.2)
  Study proposal for a trial in grant application(s): 1 (0.06)
Trial type
  No specific focus explicitly stated: 1369 (77.9)
  All trials: 124 (7.1)
  Specific trial focus^a: 265 (15.1)
    Phase^a: 102 (5.8)
      Pilot/feasibility: 33 (1.9)
      II: 45 (2.6)
      III: 64 (3.6)
    Design^a: 102 (5.8)
      N-of-1: 44 (2.5)
      Cluster: 16 (0.9)
      Non-inferiority: 14 (0.8)
      Equivalence: 11 (0.6)
      Within person: 10 (0.6)
      Parallel: 9 (0.5)
      Crossover: 6 (0.3)
      Adaptive: 6 (0.3)
      Superiority: 5 (0.3)
      Pragmatic: 3 (0.2)
Outcomes
  No specific focus explicitly stated: 911 (51.8)
  All outcomes: 11 (0.6)
  Specific outcome focus^a: 836 (47.6)
    Outcome classification^a: 469 (26.7)
      Primary: 458 (26.1)
      Secondary: 326 (18.5)
      “Important”: 7 (0.4)
      Tertiary/exploratory: 6 (0.3)
    Outcome type^a: 474 (27.0)
      Patient-reported outcome: 288 (16.4)
      Harm/adverse event: 116 (6.6)
      Biological marker: 41 (2.3)
      Efficacy outcome: 33 (1.9)
      Composite outcome: 13 (0.7)
      Survival/time-to-event outcome: 11 (0.6)
      Surrogate outcome: 8 (0.5)
      Clinician-reported outcome: 7 (0.4)
      Continuous outcome: 4 (0.2)
      Binary outcome: 3 (0.2)
      “Unintended” outcome: 1 (0.06)

^a Not mutually exclusive.

Of all the recommendations identified, approximately 40% were not supported by any empirical evidence or citations; the remaining 60% were supported by empirical evidence provided within the document and/or citations to other documents (Table 4). The empirical evidence provided was most often generated from literature reviews and/or expert consensus methods (Table 4). Supporting citations to other documents were provided for about one-third of all recommendations (Table 4); the cited documents spanned a wide range of sources, but were often existing reporting guidelines or guidance documents such as SPIRIT, CONSORT, and their associated extensions.
Table 4

Source of evidence provided to support each outcome reporting recommendation (n = 1758) identified within 244 eligible documents

Values are N (%).
Empirical evidence provided within source document and/or citations provided: 1027 (58.4)
No empirical evidence or citations provided: 731 (41.6)
Empirical evidence provided within source document^a: 704 (40.0)
  Literature review: 513 (29.2)
    Systematic and/or scoping review: 290 (16.5)
    Assessment of reporting completeness^b: 170 (9.7)
    Other type of review: 68 (3.9)
  Expert consensus: 373 (21.2)
  Interview: 12 (0.7)
  Case study: 2 (0.1)
  Survey: 1 (0.06)
Citation(s) provided to other document(s)^a: 582 (33.1)
  Citations to existing reporting guidelines
    SPIRIT-PRO: 253 (14.4)
    CONSORT-PRO: 241 (13.7)
    CONSORT: 141 (8.0)
    SPIRIT: 42 (2.4)
    Other CONSORT extensions: 26 (1.5)
  Citations to selected key guidance documents^c
    ICH E6 Good Clinical Practice Guideline: 71 (4.0)
    International Society for Quality of Life Research (ISOQOL)-recommended PRO reporting standards: 14 (0.8)
    ICH E9 Statistical Principles for Clinical Trials: 8 (0.5)
    International Committee of Medical Journal Editors (ICMJE): 7 (0.4)
    ICH E3 Structure and Content of Clinical Study Reports: 5 (0.3)
    Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials (IMMPACT) publication: 4 (0.2)
    ClinicalTrials.gov guidelines: 3 (0.2)
    Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) publications: 2 (0.1)

CONSORT = Consolidated Standards of Reporting Trials; ICH = International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use; PRO = Patient-Reported Outcomes; SPIRIT = Standard Protocol Items: Recommendations for Interventional Trials

^a Empirical evidence within the source document and citation(s) provided to other documents were not mutually exclusive categorizations, nor are the subcategories within each.

^b Includes any type of literature review that aimed to assess the completeness of reporting in the included articles, from either an original review or a secondary analysis of documents included in a prior review.

^c The complete list of citations provided to other documents can be found in the online dataset.

Source of evidence provided to support each outcome reporting recommendation (n = 1758) identified within 244 eligible documents CONSORT Consolidated Standards of Reporting Trials, ICH International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use, PRO Patient Reported Outcomes, SPIRIT Standard Protocol Items: Recommendations for Interventional Trials aEmpirical evidence within the source document and citation provided to other document categorizations were not mutually exclusive, nor are the subcategories within each bIncludes any type of literature review that aimed to assess the completeness of reporting in the included articles from either an original review or a secondary analyses of documents included in a prior review cThe complete list of citations provided to other documents can be found in the online dataset Comparison of each of the 1758 recommendations with the initial list of 70 candidate items led to the development of an additional 61 unique candidate reporting items (Table 5). Team discussions produced two additional candidate reporting items, producing a list of 133 candidate reporting items categorized within the ten descriptive categories. One item was excluded by consensus by the Operations Team as the recommendation was consistent with recognized poor methodological practice, yielding 132 candidate reporting items in total (Table 5). The number of candidate items that mapped to each of the ten descriptive categories was variable (range 2–41 items per category), with the largest number of candidate items mapped to the “Outcome data management and analyses” (n = 41 items) and the “How: the way the outcome is measured” (n = 26 items) categories (Table 5). Most of the recommendations made (n = 1611, 91%) could be mapped to a specific candidate reporting item; 153 (9%) were general in nature and were mapped generally to the appropriate category. 
For example, the recommendation “state how outcome was measured” is too general to map to a specific candidate item and was instead mapped to the overall “How: the way the outcome is measured” category. No documents provided an explicit recommendation that refuted or advised against reporting any of the 132 candidate items.
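The candidate-item counts described above reconcile as:

```latex
\underbrace{70}_{\text{initial items}}
+ \underbrace{61}_{\text{from review}}
+ \underbrace{2}_{\text{team discussions}}
- \underbrace{1}_{\text{excluded}}
= 132 \text{ candidate items}
```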
Table 5

Total number of documents identified containing a trial outcome reporting recommendation supporting each of the 132 candidate outcome reporting items in total, and number of documents with recommendations made that were specific to protocols, reports, both protocols and reports, and generally for trial documents

Candidate outcome reporting items within ten descriptive categories | Total number of documents (N, %) | Number for protocols | Number for reports | Number for protocols and reports | Number generally
What: description of the outcome (n = 15 items)
Category of “What” in general (recommendation not specific to any candidate item)3615101709
State the outcome84343826317
Specify the outcome as primary (or secondary)82343035314
If a primary outcome, provide a rationale for classifying the outcome as primary733310
Report if outcome is planned or unplanneda10N/A1N/A0
If a composite outcome, describe all individual components210101
If a composite outcome, provide citation to methodological paper(s), if applicablea101000
Specify the outcome domainb1355602
Provide a rationale for the selected outcome domaina,b210101
Classify the outcome and the outcome domain according to a standard outcome classification system or taxonomya,b422101
Specify if the outcome is an efficacy or harm outcome (adverse event). If a harm, see CONSORT for harms for specific guidance for trial reports12510200
If outcome is patient-reported, refer to CONSORT-PRO or SPIRIT-PRO for specific guidance, as appropriatec000000
Specify cut-off value for the outcome, if the outcome is continuous but defined and analysed as categorical, and justify cut-off valuea522102
Define clinical significance/meaningful change in terms of the outcome (e.g., minimal important difference, responder definition), including what would constitute a good or poor outcome291216715
Justify the criteria used for defining meaningful change including what would constitute a good or poor outcome, such as from an outcome measurement interpretation guideline522201
Describe underlying basis for determining the criteria used for defining meaningful changea212000
Why: rationale for selecting the outcome (n = 10 items)
Category of “Why” in general (recommendation not specific to any candidate item)15610401
Explain how the outcome relates to the hypothesis of the study22971122
Explain how the outcome addresses the objective/research question of the study16710402
Explain the mechanism (e.g., pathophysiological, pharmacological, etc.) or theoretical framework/model by which the experimental intervention is expected to cause change in the outcome in the target population943402
Specify if a relevant core outcome set is publicly available (e.g., via www.comet-initiative.org/), and if so, if the outcome is part of a core outcome set. If applicable, specify which core outcome set the outcome is part of311200
If a completely new outcome, justify why other outcomes are not appropriate or relevant for use in this trial100010
If there are other published definitions of the outcome beside the one that was used, explain why the chosen definition was used000000
Describe why the outcome is relevant to stakeholder groups (e.g., patients, clinicians, funders, etc.)210200
Report which stakeholders (e.g., patients, clinicians, funders, etc.) are actively involved in outcome selection, as per available guidance for the reporting of patient and public involvement211100
If applicable, describe discrepancies between the selected outcome and outcomes shown to be of interest to relevant stakeholder groups (e.g., through a core outcome set), and ways to reconcile discrepanciesa310201
Provide rationale for the choice of the specific type of outcome (e.g., why a patient-reported outcome instead of a clinician-reported outcome)733310
How: the way the outcome is measured (n = 26 items)
Category of “How” in general (recommendation not specific to any candidate item)5924232817
Describe the outcome measurement instrument (e.g., questionnaire, laboratory test). If applicable, include instrument scaling and scoring details (e.g., range and direction of scores)53222319011
Justify the selection of the outcome measurement instrumenta1569600
If applicable, specify where outcome measurement instrument materials can be accessed. For trial protocols only: if materials are not publicly available, provide a copya423100
Specify if more than one language version of the outcome measurement instrument is used, and if yes, state how the translated versions were developed313000
If applicable, specify use of outcome measurement instrument in accordance with any user manual, and specify and justify deviations from user manual1049100
If a new or untested outcome measurement instrument, describe an explicit framework (e.g., pathophysiological rationale) and/or supporting clinimetrics to support its usea210200
If assessing multiple outcomes, specify any standardization of order of administration of the outcome measurement instrument(s)101000
If applicable, specify which outcome measurement instrument(s) is used at each assessment time pointa212000
Describe level at which the outcome is measured (i.e., cluster or individual)a210200
Describe any additional resources/materials or processes necessary to perform outcome assessment, when relevant (e.g., language interpreter)624101
If applicable, specify the recall period for outcome assessment313000
Describe mode of outcome assessment (e.g., face to face, telephone, electronically)1568511
Justify mode of outcome assessment (e.g., equivalence between different modes of administration)101000
Describe or provide reference to an empirical study that established validity of the outcome measure instrument for the mode of assessment used in this studya100100
Describe or provide reference to an empirical study that establishes the validity of the outcome measurement instrument in individuals similar to the study sample3916151914
If outcome measurement instrument is known to have poor validity in individuals similar to the study sample, describe how this discrepancy is accounted fora100100
Describe or provide reference to an empirical study that established validity of the outcome measure instrument in the study setting3514131912
Describe or provide reference to an empirical study that established reliability of the outcome measure instrument in individuals similar to the study sample271181414
Describe or provide reference to an empirical study that established reliability of the outcome measure instrument in individuals similar to the study setting281181415
Describe or provide reference to an empirical study that establishes the responsiveness of the outcome measurement instrument in the study sample313000
Describe level of imprecision of outcome measurement instrumenta100100
Describe the feasibility of the outcome measurement instrument in the study sample000000
Describe the acceptability and burden of the outcome measurement instrument in the study sample313000
Describe any health risk(s) of the outcome assessment procedureb000000
If applicable, describe any mathematical manipulation of the data necessary to perform during outcome assessmenta100100
Specify any monitoring of outcome data during the trial for the purpose of informing the clinical care of individual trial participants, and if applicable, describe how monitoring is managed in a standardized way212000
Who: source of information of the outcome (n = 12 items)
Category of “Who” in general (recommendation not specific to any candidate item)000000
Describe who assesses the outcome (e.g., nurse, parent) in each study group, and if applicable, how many assessors there are2410121002
Justify the choice of outcome assessor(s) (e.g., proxy versus healthcare provider)212000
Describe if there is an endpoint adjudication committee and if so, when the committee will perform the adjudicationa424000
Describe any processes to maximize outcome data quality (e.g., duplicate measurements)1778900
Describe any trial-specific training required for outcome assessors to apply the outcome measurement instrument1679502
Describe masking procedure(s) for outcome assessors, outcome data entry personnel, and/or outcome data analysts20861013
Describe if outcome assessor(s) are masked to the intervention assignment321382103
Specify any masking of members of the endpoint adjudication committee to the participant’s intervention group assignmenta211100
If applicable, justify why masking was not done, or explain why it was not possible, for outcome assessors, data entry personnel, and/or data analystsa421300
State any strategies undertaken to reduce the potential for unmasking of outcome assessors, data entry personnel, and/or data analystsa424000
If measured, describe success of masking of outcome assessors, outcome data entry personnel, and/or outcome data analysts to intervention assignmenta62N/A5N/A1
Specify the name, affiliation, and contact details for the individual(s) responsible for the outcome content to identify the appropriate point of contact for resolution of any outcome-specific inquiries423001
Where: assessment location and setting of the outcome (n = 3 items)
Category of “Where” in general (recommendation not specific to any candidate item)000000
Describe setting of outcome assessment for each study group (e.g., community clinic, academic hospital)1566900
Specify geographic location of outcome assessment for each study group (e.g., list of countries)1044600
Justify suitability of the outcome assessment setting(s) for the study sample (e.g., measuring blood pressure in clinic vs. home)000000
When: timing of measurement of the outcome (n = 2 items)
Category of “When” in general (recommendation not specific to any candidate item)100100
Specify timing and frequency of outcome assessment(s) (e.g., time point for each outcome, time schedule of assessments)74303228311
Provide justification of timing and frequency of outcome assessment(s) (e.g., related to pathophysiological evidence for treatment response or complications occurrence and/or pragmatic justification)945400
Outcome data management and analyses (n = 41 items)
Category of “Outcome data management and analyses” in general (recommendation not specific to any candidate item)241014316
Data management and processes
Describe outcome data entry, coding, security and storage, including any related processes to promote outcome data quality (e.g., double entry, range checks from outcome data values). Reference to where details of data management procedures can be found, if not included19816300
If applicable, specify who designs the electronic case report form, the name of the data management system, and if it is compliant with jurisdictional regulationsa424000
Analyses
Describe analysis metric for the outcome (e.g., change from baseline, final value, time to event)19814302
Describe method of aggregation for the outcome data (e.g., mean, median, proportion)17710502
Describe the relevant level of precision (e.g., standard deviation) of the outcome dataa832402
Describe unit of analysis of the outcome (i.e., cluster or individual)1042800
If applicable, describe any transformations of the outcome dataa422200
Provide definition of analysis population relating to protocol non-adherence (e.g., as randomized analysis)3916231213
Justify definition of analysis population relating to protocol non-adherence (e.g., as randomized analysis)a101000
Describe specific plans on how to present outcome data (including harms) (e.g., tables, graphs, etc.)a523N/AN/A2
Describe time period(s) for which the outcome is analysed291224302
If the outcome is assessed at several time points after randomization, state the pre-specified time point of primary interesta735200
Describe statistical/analytical methods and significance test(s) for analysing the outcome data. This should include any analyses undertaken to address risk of type I error, particularly for trials with multiple outcomes and/or measurement time points. Reference to where other details of the statistical analysis plan can be found, if applicable7531412527
Justify statistical method(s) for the outcome analysesa522300
State if outcome is part of any interim analysesa422200
If interim analyses of the outcome are performed, describe the method to adjust for this in the final analysisa423001
If applicable, describe methods for additional analyses, such as subgroup analyses and adjusted analyses2711131103
Identify statistical software for outcome analysis (e.g., SAS, R)a100100
Describe how the outcome data are assessed for meeting assumptions for the statistical tests selected (e.g., normality, homogeneity of variance, etc.)a735101
Specify alternative statistical methods to be used if the underlying assumptions (e.g., normality) do not holda212000
Describe how the statistical methods planned to evaluate the outcome are evaluated before implementation (e.g., through the use of simulations)a100100
If applicable, describe any covariates/factors in the statistical model (e.g., adjusted analyses) used for analysing the outcome data23911714
If applicable, justify inclusion and choice of covariates/factors522300
State and justify the criteria used to exclude any outcome data from the outcome analysis and reporting (e.g., unused data, spurious data)a14612200
If applicable, discuss the available power for secondary hypothesis testing for outcomes considered secondarya101000
If intending to report the results of underpowered analyses, state an explicit strategy for their interpretationa211N/AN/A1
Describe how any unplanned repeat measurements are handled when analysing the outcome data210200
Specify who analyses the outcome data (e.g., name and affiliation)1048101
Results
Report the number of participants assessed for the outcomea94N/A5N/A4
For each group, specify the number of participants analysed for the outcomea208N/A19N/A1
Describe results for each group, and estimated effect size and its precision (such as 95% confidence interval). For binary outcomes, presentation of both absolute and relative effect sizes is recommended4518N/A36N/A9
Provide the results of planned outcome analyses (regardless of statistical significance)2711N/A23N/A4
Describe results of outcome data at each pre-specified time pointa31N/A3N/A0
If a composite outcome, report results of its individual componentsa31N/A2N/A1
If applicable, separate pre-specified statistical analyses from post-hoc analyses that were not pre-specifieda73N/A6N/A1
Report aggregated values of all outcome data (e.g., a table with mean, proportion, etc.) for each groupa94N/A6N/A3
Describe results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from exploratorya62N/A6N/A0
If the outcome is used to make clinical decisions, provide an effect measure to quantify treatment effects (e.g., number needed to treat)a52N/A4N/A1
If the outcome data is part of a statistical analysis, state where the raw data are accessible (or will be accessible)a832600
Report the statistical code used to complete each outcomes analyses (or where it is/will be accessible)a211001
If someone other than a member in the study group interprets the outcome data, describe the person’s affiliationsa313000
Missing outcome data (n = 9 items)
Category of “Missing outcome data” in general (recommendation not specific to any candidate item)312100
Describe any plans to minimize missing outcome data1159N/AN/A2
Describe plans on how reasons for missing outcome data will be recordeda313N/AN/A0
Describe outcome data collection, assessment process, and analysis for participants who discontinue or deviate from the assigned intervention protocol14613100
Describe methods to calculate missing outcome data rates and assess patterns of missing outcome dataa101000
For each group, describe how much outcome data are missing229N/A14N/A8
For each group, describe any reason(s) for missing outcome data (e.g., missing study visits, lost to follow-up)177N/A10N/A7
Describe statistical methods to handle missing outcome items or entire assessments (e.g., multiple imputation)5322291725
If applicable, describe any analyses conducted to assess the risk of bias posed by missing outcome data (e.g., comparison of baseline characteristics of participants with and without missing outcome data)420301
Provide justification for methods to handle missing outcome data. This should include: assumptions underlying the missing outcome data mechanism with justification (including analyses performed to support assumptions about the missingness mechanism); and how the assumed missingness mechanism and any relevant features of the outcome data would influence the choice of statistical method(s) to handle missing outcome data including sensitivity analyses1986904
Interpretation (n = 11 items)
Category of “Interpretation” in general (recommendation not specific to any candidate item)731501
If there are elements in the clinical trial that would be different in a routine application setting (e.g., patient prompts/reminders, training sessions), discuss what impact the omission of these elements could have on outcomes if the intervention is applied outside the study settinga420400
Report how the outcome results address the trial hypothesis, including the definition of clinically meaningful change, if applicablea21N/A1N/A1
Report how the outcome results addresses the research objectivea10N/A1N/A0
Interpret outcome data in relation to clinical outcomes including survival data, where relevant135N/A11N/A2
Discuss the possibility that the results are caused by type I or type II errors (e.g., multiple outcomes assessed, small sample size)a42N/A4N/A0
Describe other considerations or procedures that could affect the ability to interpret the outcome results187N/A13N/A5
If applicable, discuss impact of missing outcome data on the interpretation of findings73N/A4N/A3
If applicable, discuss limitations related to the lack of blinding of outcome assessors, outcome entry personnel, and/or outcome data analystsa31N/A3N/A0
If applicable, discuss any problems with statistical assumptions and/or data distributions for the outcome that could affect the validity of trial resultsa10N/A1N/A0
If a multi-centre trial, discuss any sources of variability in outcome assessment and the potential impact on trial result(s)a10N/A1N/A0
Interpret potential impact of imprecision on outcome resultsa31N/A3N/A0
Modifications (n = 3 items)
Category of “Modifications” in general (recommendation not specific to any candidate item)100001
Describe any changes to trial outcomes after the trial commenced (e.g., status of primary, definition), with reasons271112312
Describe any changes to trial outcomes since the trial was registered, with reasonsa210200
Describe any changes made to the planned analysis of outcomes (including omissions) after the trial commenced, with reasons1563552

CONSORT Consolidated Standards of Reporting Trials, PRO Patient Reported Outcomes, SPIRIT Standard Protocol Items: Recommendations for Interventional Trials. N/A indicates item content was not applicable to trial protocols (e.g., pertained specifically to known trial results) or trial reports (e.g., pertained to trial planning only). Items without footnote a or c are those from the initial list of 70 candidate items

aA new item identified from this scoping review (n = 61 unique candidate items added from this review in total)

bOutcome domain in this context refers to a relatively broad aspect of the effect of illness within which an improvement may occur in response to an intervention; domains may not be directly measurable themselves, so outcomes are selected to assess change within them [43]

cA new item generated through Operations Team discussions when the scoping review findings were reviewed for new items

The number of documents containing an outcome reporting recommendation supporting the description of each of the 132 candidate items ranged widely (median 5, range 0–84 documents per item, from a total possible sample of 244 documents; Table 5 and Additional file 5: eFigure 1). Of the 132 candidate reporting items, 104 were applicable to both trial protocols and reports, 24 were not applicable to trial protocols (e.g., pertained specifically to known trial results), and 4 were not applicable to trial reports (e.g., pertained to trial planning only).
Comparison with the items and concepts covered in SPIRIT 2013 showed that 78 of the 108 (72%) candidate items relevant to protocols are not currently covered either completely or in part by items in the existing SPIRIT checklist. Comparison with items covered in CONSORT 2010 showed that 106 of the 128 (83%) candidate items relevant to trial reports are not currently covered either completely or in part in the existing CONSORT checklist.
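Table 5 recommends presenting both absolute and relative effect sizes, each with a precision estimate, for binary outcomes. The sketch below illustrates one way this can be computed (an illustrative implementation, not from the review; it uses simple Wald-type 95% confidence intervals, whereas real trial analyses may require exact or adjusted methods):

```python
import math

def binary_outcome_effects(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Absolute and relative effect sizes for a binary outcome,
    with Wald-type 95% confidence intervals (illustrative sketch)."""
    p1, p0 = events_tx / n_tx, events_ctl / n_ctl
    rd = p1 - p0                       # absolute effect: risk difference
    rr = p1 / p0                       # relative effect: risk ratio
    se_rd = math.sqrt(p1 * (1 - p1) / n_tx + p0 * (1 - p0) / n_ctl)
    se_log_rr = math.sqrt((1 - p1) / events_tx + (1 - p0) / events_ctl)
    return {
        "risk_difference": (rd, rd - z * se_rd, rd + z * se_rd),
        "risk_ratio": (rr,
                       math.exp(math.log(rr) - z * se_log_rr),
                       math.exp(math.log(rr) + z * se_log_rr)),
        # number needed to treat, derived from the absolute effect
        "nnt": math.inf if rd == 0 else 1 / abs(rd),
    }

# Hypothetical example: 30/100 events under treatment vs 50/100 under control.
results = binary_outcome_effects(30, 100, 50, 100)
print(results["risk_difference"])  # point estimate -0.2 with its 95% CI
print(results["nnt"])              # 5: treat five patients to prevent one event
```

Reporting both metrics matters because a large relative effect can correspond to a tiny absolute benefit when the baseline risk is low.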

Discussion

We performed a review of clinical trial outcome-reporting guidance that encompassed all outcome types, disease areas, and populations from a diverse and comprehensive range of sources. Our findings show that existing outcome reporting guidance for clinical trial protocols and trial reports lacks consistency and is spread across a large number of sources that may be challenging for authors to access and implement in research practice. These results suggest that evidence- and consensus-based guidance is needed to help authors describe trial outcomes in protocols and reports transparently and completely, and thereby minimize avoidable research waste.

This review provides a comprehensive, evidence-based set of reporting items for authors to consider when preparing trial protocols and reports. The large number of documents included suggests there is much interest in improving outcome reporting in clinical trial documents. Identified outcome reporting items covered diverse concepts that we grouped into ten categories, and the number of items within each category ranged widely.

However, authors wishing to use the reporting items identified in this review would face the challenge of fitting descriptions of a large number of reporting concepts into what is typically expensive journal “real estate” (i.e., limited publication space). To date, no published consensus exists on which of these items are essential and constitute best practice to report. For example, it seems unlikely that authors would commonly have the space allowance to provide descriptions of all 41 items within the “Outcome data management and analyses” category, and it is unknown, in the absence of a consensus process, which of these items may be appropriate or necessary to report for any given trial. Notably, a considerable number of the recommendations we identified are not covered in content or in principle in the existing SPIRIT and CONSORT reporting guidelines [20, 21].
Currently, SPIRIT requires more information on trial outcomes to be reported, and in greater detail, than CONSORT [20, 21]. The results of this review, however, showed that most of the candidate items had a similar number of supporting documents advocating their inclusion in protocols and in reports, with a few notable exceptions. For example, 24 documents explicitly supported describing the time period(s) for which the outcome is analysed in trial protocols, but only three suggested including this in trial reports. Omitting a clear statement of the planned time period(s) of analyses from trial reports enables data analysis “cherry-picking” (e.g., multiple unplanned analyses are performed for multiple measurement time points, with only the significant results being reported). Consulting other trial documents, such as trial protocols and statistical analysis plans [44], may reduce the need for such information in the trial report itself. However, these trial documents may not be publicly available [45], and one must also consider the burden on the knowledge user of needing to consult multiple information sources, even in an era of transition to online publication methods and free sharing platforms. To identify the minimum set of reporting items necessary to include in all clinical trial protocols and reports, respectively, the results of this scoping review were consolidated and presented during the recently held international Delphi survey and expert Consensus Meeting to determine which candidate items should be included in or excluded from the SPIRIT-Outcomes and CONSORT-Outcomes extensions and to develop the wording of the final reporting items. The protocol for this process has been described in detail elsewhere [46], and the results are being prepared for publication as part of the extension statements.
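The time-point cherry-picking described above can be illustrated with a minimal simulation (a sketch, not an analysis from the review; the sample size, number of simulated trials, and normal outcome model are arbitrary assumptions): when a null outcome is tested at several unplanned time points and any significant result may be reported, the effective type I error rate climbs well above the nominal 5%.

```python
import math
import random

def two_sample_p(x, y):
    """Two-sample z-test p-value using a normal approximation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(time_points, n_trials=1000, n_per_arm=50,
                        alpha=0.05, seed=1):
    """Fraction of simulated null trials with at least one 'significant'
    time point when results can be picked post hoc."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # Null is true: both arms share the same outcome distribution at
        # every time point (time points independent, for simplicity).
        for _ in range(time_points):
            a = [rng.gauss(0, 1) for _ in range(n_per_arm)]
            b = [rng.gauss(0, 1) for _ in range(n_per_arm)]
            if two_sample_p(a, b) < alpha:
                hits += 1
                break
    return hits / n_trials

print(false_positive_rate(time_points=1))  # ~0.05, the nominal alpha
print(false_positive_rate(time_points=5))  # ~0.2, roughly 1 - 0.95**5
```

Pre-specifying the analysis time period in the report removes this degree of freedom, which is why its near-absence from report-directed guidance (3 of 24 documents) is notable.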

Strengths and limitations

We used a scoping review methodology [32] to map guidance on trial outcome reporting from multiple information sources in an attempt to capture guidance produced and used by relevant stakeholders, including academic journals, regulatory and government agencies, and ethics review boards [30]. Sensitivity and accuracy may have been reduced by not completing screening and charting in duplicate, although this risk was likely limited by the reviewer training results, the periodic data checks by the senior reviewer, and the graduate-level epidemiological training of all reviewers. Furthermore, the mapping of every extracted recommendation to each candidate item was verified by the senior reviewer, and all of the mapping results presented achieved consensus. The development of new candidate reporting items followed a planned, standardized process of team review and discussion that aimed to minimize item content redundancy and ensure correct interpretation of the extracted recommendations [30]. There may be relevant documents published outside the included date range, and the language restrictions employed yielded a sample of documents almost entirely published in English. The search of international ethics review board websites represented a convenience sample and therefore may not be representative of, for example, guidance provided by non-English-speaking and/or smaller institutions. We were limited to documents that were publicly available or available through our institutional access; in particular, ethics review boards may provide guidance to local investigators that is not publicly accessible. However, with the sensitive search methods used, saturation was reached: no new items were identified well before the end of document review and charting, and most new items were identified in the initial stages of the review. Our review focused on the quantity of documents supporting each recommendation and did not formally assess their quality.
To help gauge the credibility of the gathered recommendations, we categorized the type(s) of empirical evidence underpinning each recommendation. Some candidate items were supported by multiple well-recognized sources and by an empirical evidence base or process (e.g., a systematic review or Delphi process) explaining why the item is recommended for reporting. Others were less frequently recommended or lacked supporting empirical evidence, but may still have important implications and merit for reporting. For example, a clear recommendation to “identify the outcomes in a trial report as planned (i.e., pre-specified) or unplanned” was found in only one document. However, selective outcome reporting and outcome switching have been well documented in trial reports, are often difficult to detect, and have been shown to affect treatment estimates in meta-analyses [3, 9–11, 14]. The results from the Delphi and consensus processes will help clarify the relative importance and acceptability of the candidate items to an international group of expert stakeholders.

Conclusions

There is a lack of harmonized guidance to lead authors, reviewers, journal editors, and other stakeholders through the process of ensuring that trial outcomes are completely and transparently described in clinical trial protocols and reports. Existing recommendations are spread across a diverse range of sources and vary in breadth and content. The large number of documents identified, despite limiting our search to the last decade, indicates substantial interest in, and need for, improving outcome reporting in clinical trial documents. To determine which outcome reporting recommendations constitute best practices for any clinical trial, a minimum, essential set of reporting items will be identified through evidence- and consensus-based methods and ultimately developed into the SPIRIT-Outcomes and CONSORT-Outcomes reporting guidelines.

Additional file 1. Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist.

Additional file 2. Electronic database searches.

Additional file 3. Grey literature information sources.

Additional file 4. Comparison of item content with SPIRIT 2013 and CONSORT 2010 and the number of documents identified containing a trial outcome reporting recommendation supporting each item, by the reported application of the recommendations to specific outcomes or trial types.

Additional file 5. Number of documents containing an outcome reporting recommendation supporting each of the 132 candidate outcome reporting items.