Abstract
Keywords: error; evidence; randomization; sport; statistics; strength training and conditioning; trials
Year: 2022 PMID: 36157898 PMCID: PMC9493045 DOI: 10.3389/fspor.2022.981836
Source DB: PubMed Journal: Front Sports Act Living ISSN: 2624-9367
Summary of the recommendations for the design of randomized controlled trials and data interpretation in strength and conditioning.
| Evidence-based practice | Trial design and randomization | Errors and statistical power | Data analysis and interpretation |
|---|---|---|---|
| The central aim of S&C coaches is to improve their athletes' performance through exercise prescription. S&C prescription should be based on the most relevant and up-to-date scientific evidence, following, where possible, an evidence-based approach. | Follow CONSORT guidelines and learn from clinical medicine (see the phases of RCTs). | Type I error: a training method is found effective when it is actually ineffective (false positive). | A common statistical error is the use of within-group comparisons instead of between-group comparisons to determine longitudinal differences between interventions. The experimental and control groups must be directly compared. |
| The evidence pyramid categorizes evidence by robustness. At the top of the pyramid are meta-analyses and systematic reviews, followed (one level lower) by RCTs, while at the bottom are experts' opinions and case reports. | Researchers need to control for bias and confounding factors. Researchers and practitioners can use different types of randomization, such as simple, block, stratified, unequal, and covariate-adaptive randomization. | Inadequate sample size: small samples (1) increase the chance of making a type II error; (2) such underpowered studies may fail to find differences between interventions (or vs. a control group), spreading wrong evidence; and (3) they can waste time and money for researchers and athletes. | This paper uses null hypothesis significance testing for assessing differences between interventions, but significance testing/ |
| Practitioners should design their training protocols using the evidence at the top of this pyramid; if such evidence is missing, they can work down the pyramid to less robust articles. | Practitioners should be aware of the differences between designs such as RCTs, superiority, and non-inferiority trials, and they should select the most appropriate research design based on the existing evidence reported in the literature. | Practitioners should be aware that sample size matters, and adequately powered studies should be prioritized because they offer more robust evidence. Practitioners can use G*Power to calculate the statistical power of their studies. | |
| Common limitations in the field of S&C are intervention length (too short), a low number of enrolled participants, and the lack of a control group in the study design. | Control group: researchers can use a traditional control group (no-intervention group), or they can compare a new intervention vs. an active control, i.e., an intervention that has been proven effective (e.g., the current best-practice treatment). | CIs are tied to the chosen alpha value: CI level = (1 - alpha) * 100%, so an alpha of 5% corresponds to 95% CIs. Using 90% CIs is possible, but this choice should be justified because it increases the risk of a Type I error. | CIs provide critical information beyond statistical significance: they give a plausible range of values for the true effect and reveal the precision of the estimate. |
RCTs, Randomized controlled trials; S&C, Strength and conditioning; CONSORT, Consolidated Standards of Reporting Trials; CIs, Confidence intervals.
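The table's recommendations on between-group comparisons and confidence intervals can be illustrated with a short example. This is a minimal sketch, not the paper's own analysis: the strength-gain data, group means, SDs, and sample sizes are all invented, and Welch's independent-samples t-test is assumed as the between-group test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical pre-to-post strength gains (kg); all numbers are invented
experimental = rng.normal(8.0, 5.0, 15)   # new training method
control = rng.normal(3.0, 5.0, 15)        # active control (best practice)

# Between-group comparison (not within-group!): Welch's t-test on the gains
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)

# 95% CI for the mean difference: CI level = (1 - alpha) * 100%, alpha = 0.05
diff = experimental.mean() - control.mean()
v1 = experimental.var(ddof=1) / len(experimental)
v2 = control.var(ddof=1) / len(control)
se = np.sqrt(v1 + v2)
# Welch-Satterthwaite degrees of freedom
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(experimental) - 1)
                       + v2 ** 2 / (len(control) - 1))
t_crit = stats.t.ppf(1 - 0.05 / 2, df)
ci = (diff - t_crit * se, diff + t_crit * se)
print(f"difference = {diff:.2f} kg, "
      f"95% CI ({ci[0]:.2f}, {ci[1]:.2f}), p = {p_value:.3f}")
```

As the table notes, the CI conveys more than the p-value alone: its location gives a plausible range for the true between-group difference, and its width shows the precision of the estimate.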
Figure 1. Example of an a priori power analysis using a repeated-measures ANOVA (within-between interaction) with a medium effect size (f = 0.25) and an alpha error probability of 0.05 (5%). The total sample size for this study is 28 participants, with an actual power of 0.811. The figure also shows that increasing the sample size (y axis) increases the power (1 - beta error probability); for instance, recruiting 35 participants would raise the power to 0.9, which would decrease the type II error.
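The relationship between sample size and power shown in Figure 1 can also be demonstrated by Monte Carlo simulation. The sketch below assumes a simpler two-group parallel design analyzed with an independent-samples t-test, rather than the repeated-measures ANOVA used in the figure; the effect size (Cohen's d = 0.5) and sample sizes are illustrative only.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_group, effect=0.5, alpha=0.05,
                    n_sims=2000, seed=0):
    """Monte Carlo estimate of the power of a two-sample t-test
    for a given standardized effect size (Cohen's d)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        treatment = rng.normal(effect, 1.0, n_per_group)  # intervention
        control = rng.normal(0.0, 1.0, n_per_group)       # control
        _, p = stats.ttest_ind(treatment, control)
        rejections += p < alpha
    return rejections / n_sims

# Power grows with sample size; the type II error (1 - power) shrinks.
# For d = 0.5 and alpha = 0.05, n = 64 per group is the textbook
# benchmark for roughly 0.80 power.
for n in (20, 40, 64, 100):
    print(f"n = {n:3d} per group -> power ~ {simulated_power(n):.2f}")
```

An underpowered study (e.g., 20 per group here) has a high chance of a type II error, which is exactly the "inadequate sample size" problem the table warns against.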