| Literature DB >> 35940834 |
Robert Schulz, Georg Langen, Robert Prill, Michael Cassel, Tracey L Weissgerber.
Abstract
OBJECTIVES: Transparent reporting of clinical trials is essential to assess the risk of bias and translate research findings into clinical practice. While existing studies have shown that deficiencies are common, detailed empirical and field-specific data are scarce. Therefore, this study aimed to examine current clinical trial reporting and transparent research practices in sports medicine and orthopaedics.
Keywords: clinical trials; medical education & training; orthopaedic & trauma surgery; rehabilitation medicine; sports medicine; statistics & research methods
Mesh:
Year: 2022 PMID: 35940834 PMCID: PMC9364413 DOI: 10.1136/bmjopen-2021-059347
Source DB: PubMed Journal: BMJ Open ISSN: 2044-6055 Impact factor: 3.006
Terminology and concepts. Created by the authors
| Concept | Definition |
| Questionable research practices | Questionable research practices are defined as 'Design, analytical or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favour of an assertion'. |
| Selective reporting/cherry picking | The decision about whether to publish a study or parts of a study is based on the direction or statistical significance of the results. |
| Publication bias | The decision about whether to publish research findings depends on the strength and direction of the findings. |
| Outcome reporting bias | Only particular outcome variables are included in publications and decisions about which variables to include are based on the statistical significance or direction of the results. |
| Attrition bias | Attrition refers to reductions in the number of participants throughout the study due to withdrawals, dropouts or protocol deviations. Attrition bias occurs when there are systematic differences between people who leave the study and those who continue. |
| Null hypothesis statistical testing (NHST) | NHST is based on the theories of Fisher and Neyman-Pearson. The null hypothesis is rejected or retained depending on where an observed value falls in the test distribution. While NHST is standard practice in many fields, the International Committee of Medical Journal Editors warns against inappropriate use of, and sole reliance on, NHST because of the approach's shortcomings. |
| p-Hacking | The process of analysing the data in multiple ways until statistically significant results are found (a small simulation sketch illustrating this follows the table). |
| HARKing | HARKing, or hypothesising after results are known, is defined as presenting a post-hoc hypothesis as if it were an a priori hypothesis. |
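The following is a minimal simulation sketch, added here as an illustration rather than material from the article, of the p-hacking pattern defined above: when an analyst tests several outcomes in a trial with no true effect and reports whichever turns out significant, the false positive rate rises well above the nominal 5%. All parameter values are illustrative placeholders.

```python
# Illustrative simulation (not from the article): testing several outcomes per trial and
# reporting whichever is significant inflates the false positive rate when no effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2022)
n_studies = 5000      # simulated two-arm trials with no true treatment effect
n_per_group = 30      # participants per arm
n_outcomes = 5        # outcomes "tried" by a hypothetical p-hacking analyst

single_hits = 0       # significant results when only the pre-specified first outcome is tested
cherry_hits = 0       # significant results when any of the tested outcomes may be reported

for _ in range(n_studies):
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(size=n_per_group)       # control arm, standard normal
        intervention = rng.normal(size=n_per_group)  # intervention arm, same distribution
        p_values.append(stats.ttest_ind(control, intervention).pvalue)
    single_hits += p_values[0] < 0.05
    cherry_hits += min(p_values) < 0.05

print(f"False positive rate, one pre-specified outcome: {single_hits / n_studies:.3f}")  # ~0.05
print(f"False positive rate, best of {n_outcomes} outcomes: {cherry_hits / n_studies:.3f}")  # ~0.23
```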
Criteria for reporting and transparent research practices. The table shows the specific questions used to assess each outcome criterion and provides a brief justification for why each criterion was selected. Created by the authors
| Category | Assessment | Rationale and context |
| Sample size calculation | Was an a priori sample size calculation performed? | Low power is associated with high rates of spurious findings and inflated effect sizes. A priori sample size calculations help to prevent underpowered trials; however, they are often performed inadequately. Common problems include failing to justify the expected treatment effect and not stating all values required for the calculation. |
| Randomisation and concealed allocation | Did the authors address whether randomisation was used? Who generated the randomisation sequence? Who enrolled participants? Who assigned participants to groups? | Inadequate randomisation and allocation concealment procedures introduce selection bias and are associated with increased odds of significant but spurious results. |
| Blinding | Did the article include a statement on blinding? | Blinding prevents ascertainment bias in clinical trials. A lack of blinding is associated with inflated effect sizes. |
| Flow of participants | Were the inclusion and exclusion criteria clearly stated? | Detailed inclusion and exclusion criteria help the reader to assess generalisability. Knowing when and why participants dropped out or were removed from the study is essential to estimate attrition bias. |
| Data analysis | Was a study hypothesis presented and a primary outcome specified? | Specifying the study hypothesis and the primary outcome prospectively safeguards against selective reporting. Discrepancies between the registration and the study report may indicate outcome switching, which favours statistically significant results and introduces selective reporting bias. Reporting the test statistic and degrees of freedom (df) allows readers to identify misreported p values (see the sketch after this table): in 13% of psychology studies, meta-researchers detected mismatches between the reported p values and the test statistic and df that would change the statistical conclusions. Analyses should take the magnitude, confidence and likelihood of an effect into account, instead of focusing on whether effects are statistically significant. Effect sizes show the magnitude of effects within a study, while standardised effect sizes allow comparisons across studies. |
| Data visualisation | Were bar graphs used to visualise continuous data? | Using bar graphs to visualise continuous data is problematic because many different data distributions can lead to the same bar graph. The actual data may suggest conclusions that differ from those drawn from the summary statistics alone. |
| Intervention reporting | What type of intervention was performed (eg, exercise, physical therapy, surgery)? Was monitoring of adherence to the intervention addressed? Were the essential details needed to replicate the experimental and control interventions (eg, frequency, intensity, volume and type of exercise) provided? | When clinical trials do not report the details needed to implement the intervention, findings cannot be translated into clinical practice. Only a minority of exercise studies provide enough information to allow others to replicate the interventions. Adherence can affect intervention efficacy: intervention effects can be up to three times larger in fully adherent participants than in partly adherent participants. |
| Transparency criteria | Was the study registered or pre-registered? | Half of researchers admit to selectively reporting results and presenting post hoc analyses as if they had been pre-specified. Open access papers generate more media coverage and citations. Open data facilitates collaboration and benefits society. |
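As a companion to the data analysis row above, here is a minimal sketch (an illustration added here, not material from the article) of how a reported test statistic and degrees of freedom let a reader recompute the implied p value and flag misreporting, assuming a two-tailed t test; the reported values below are hypothetical.

```python
# Illustrative check (not from the article): recompute the two-tailed p value implied by a
# reported t statistic and degrees of freedom, and compare it with the p value stated in print.
from scipy import stats

def recomputed_p(t_statistic: float, df: float) -> float:
    """Two-tailed p value implied by a reported t statistic and degrees of freedom."""
    return 2 * stats.t.sf(abs(t_statistic), df)

# Hypothetical reported result: t(28) = 2.10, "p = 0.03"
p_check = recomputed_p(2.10, 28)
print(f"Recomputed p = {p_check:.3f}")  # ~0.045, so a reported p = 0.03 would be inconsistent
```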
Figure 1. Reporting prevalence for rigour and sample criteria. This plot displays the percentage of trials that addressed each criterion. For information on the actual randomisation or blinding status, please refer to the text. The different coloured data points are for better visual differentiation of each subcategory. Created by the authors.
Figure 2. Blinding status for the main stakeholder groups across all clinical trials (n=163). Created by the authors.
Figure 3. Reporting prevalence for data analysis and transparency criteria. This plot displays the percentage of trials that addressed each criterion. Created by the authors. ES, effect size; NHST, null hypothesis statistical testing.
Figure 4. A priori sample size calculations are essential for generating meaningful results with clinical trials. Created by the authors. This infographic focuses on key elements of a priori sample size calculations that should be reported in clinical trial publications. However, each element should be justified individually, including the thresholds for type 1 and type 2 errors and the expected effect size. Lakens' free article on sample size justification provides an excellent overview of aspects to consider when planning empirical research studies.96
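To make the elements of the infographic concrete, the sketch below shows one common way to run an a priori sample size calculation in Python with statsmodels' TTestIndPower. It assumes a two-arm parallel trial analysed with an independent-samples t test; the effect size, alpha and power values are placeholders that would each require the individual justification described above.

```python
# Illustrative a priori sample size calculation (not from the article) for a two-arm trial
# analysed with an independent-samples t test; every input below needs its own justification.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # expected standardised effect (Cohen's d), placeholder value
    alpha=0.05,               # type 1 error threshold
    power=0.80,               # 1 - type 2 error threshold
    alternative="two-sided",
)
print(f"Required participants per group: {math.ceil(n_per_group)}")  # 64 for these inputs
```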