
Sample size determinations in original research protocols for randomised clinical trials submitted to UK research ethics committees: review.

Timothy Clark, Ursula Berger, Ulrich Mansmann.

Abstract

OBJECTIVES: To assess the completeness of reporting of sample size determinations in unpublished research protocols and to develop guidance for research ethics committees and for statisticians advising these committees.
DESIGN: Review of original research protocols.
STUDY SELECTION: Unpublished research protocols for phase IIb, III, and IV randomised clinical trials of investigational medicinal products submitted to research ethics committees in the United Kingdom during 1 January to 31 December 2009.
MAIN OUTCOME MEASURES: Completeness of reporting of the sample size determination, including the justification of design assumptions, and disagreement between reported and recalculated sample size.
RESULTS: 446 study protocols were reviewed. Of these, 190 (43%) justified the treatment effect and 213 (48%) justified the population variability or survival experience. Only 55 (12%) discussed the clinical importance of the treatment effect sought. Few protocols provided a reasoned explanation as to why the design assumptions were plausible for the planned study. Sensitivity analyses investigating how the sample size changed under different design assumptions were lacking; six (1%) protocols included a re-estimation of the sample size in the study design. Overall, 188 (42%) protocols reported all of the information to accurately recalculate the sample size; the assumed withdrawal or dropout rate was not given in 177 (40%) studies. Only 134 of the 446 (30%) sample size calculations could be accurately reproduced. Study size tended to be over-estimated rather than under-estimated. Studies with non-commercial sponsors justified the design assumptions used in the calculation more often than studies with commercial sponsors but less often reported all the components needed to reproduce the sample size calculation. Sample sizes for studies with non-commercial sponsors were less often reproduced.
CONCLUSIONS: Most research protocols did not contain sufficient information to allow the sample size to be reproduced or the plausibility of the design assumptions to be assessed. Greater transparency in the reporting of the determination of the sample size and more focus on study design during the ethical review process would allow deficiencies to be resolved early, before the trial begins. Guidance for research ethics committees and statisticians advising these committees is needed.

Year:  2013        PMID: 23518273      PMCID: PMC3604970          DOI: 10.1136/bmj.f1135

Source DB:  PubMed          Journal:  BMJ        ISSN: 0959-8138


Introduction

The determination of sample size is central to the design of randomised controlled trials.1 To have scientific validity a clinical study must be appropriately designed to meet clearly defined objectives.2 3 Clinical trials should provide precise estimates of treatment effects, thus allowing healthcare professionals to make informed decisions based on sound evidence.3 Equally, trials should not be too large, as these may expose some patients to unnecessary risks. An extensive literature on sample size calculations in clinical research now exists for a wide variety of data types and statistical tests.4 5 6 7 8 9 The International Conference on Harmonisation of technical requirements for registration of pharmaceuticals for human use, topic E9, sets down the requirements for sample size reporting in research protocols for studies supporting the registration of drugs for use in humans.1 Although these standards primarily concern commercial sponsors, the principles (box 1) have broad application to all clinical trials. 
The consolidated standards of reporting trials (CONSORT) statement provides similar guidance for published randomised trials.10 11

Box 1 Principles of sample size reporting (ICH E9)

Study objective: for example, superiority, non-inferiority, equivalence
Study design: for example, parallel group, crossover, factorial
Primary outcome variables: clinically most relevant endpoints from the patients' perspective
Statistical test procedure: for example, t test for continuous variables, χ2 test for binary variables
Allocation ratio: ratio of number of participants in each treatment arm
Treatment difference sought: minimal effect that has clinical relevance or the anticipated effect of the new treatment, where this is larger
Design assumptions: for example, variance, response rates, and event rates used in the calculation
Type I error: probability of erroneously rejecting the null hypothesis
Type II error: probability of erroneously failing to reject the alternative hypothesis
Withdrawal or dropout rate: expected proportion of subjects with no post-randomisation information
Justification of treatment difference sought and other design assumptions
Sensitivity analyses: investigation of how sample size changes under different assumptions
Study duration: accrual and total study duration used to estimate the number of patients required in event driven studies
Multiple testing: adjustments for multiple testing, for example multiple endpoints, multiple checks during interim monitoring

Surprisingly few evaluations have been made of the quality of sample size determinations in randomised controlled trials. Those that have been performed are mainly based on published data, owing to the difficulty of obtaining access to unpublished research protocols.12 13 These reviews have several limitations. Firstly, the reporting of the sample size determination in the study publication is less detailed than in the research protocol. Secondly, they are affected by publication bias. Thirdly, one study showed that there are often discrepancies between the research protocol and the publication.14 Consequently, definitive conclusions about the quality of sample size determinations should be based on a review of original research protocols.
Studies that are too large or too small have been branded as unethical.2 15 16 The view that underpowered studies are in themselves unethical has been challenged by some researchers, who argue that this is too simplistic.17 18 19 We believe that a study must be judged on whether it is appropriately designed to answer the research question posed, and the validity of the sample size calculation is germane to this assessment. This is not merely a matter of whether the sample size can be recalculated, since the calculation can be correct mathematically but still be of poor quality if the assumptions used have not been suitably researched and qualified.20 Greater transparency in the reporting of the sample size determination and more focus on study design during the ethical review process would allow deficiencies to be resolved early, before the trial begins; once the trial starts it is too late. We assessed the quality of sample size determinations reported in research protocols with the aim of developing guidance for research ethics committees.

Methods

We searched the research ethics database, a web based database application for managing the administration of the ethical review process in the United Kingdom, using filter criteria (see supplementary file) to identify all validated applications for randomised (phase IIb, III, and IV) clinical trials of investigational medicinal products submitted to the National Research Ethics Service for ethical review during 2009. We designed these criteria to create a large database of recently submitted protocols (2009 was the last complete year before the project started in 2010) for randomised controlled trials.

Creation of the protocol database

Three researchers extracted the characteristics of the studies (table 1) from the research ethics database according to prespecified rules and entered the data into the protocol database. Two reviewers independently assessed each research protocol. The researchers met regularly to discuss and agree on the final data to be entered into the database.
Table 1

 Study characteristics entered into research protocol database

Study identifier or research ethics committee reference number
Commercial or non-commercial sponsor
Therapeutic area and disease category
Standard drug treatments for medical condition. Was there an accepted “standard treatment” for the medical condition at the time the study was being designed?
Clinical phase: IIb, III, or IV
Primary outcome variables
Form of primary outcome variables (continuous, binary, time to event) and test procedure
Objectively assessed outcome—that is, one that is not influenced by investigators’ judgment (for example, all cause mortality and recognised laboratory variables)21
Study blinding such as open label, partial blind, or double-blind
Comparators such as placebo and active-control
Study design, such as parallel group, crossover, group sequential
Study objective: superiority, non-inferiority, or therapeutic equivalence
Allocation ratio
Treatment difference sought (or margin). Data on which assumption was based; why plausible for planned study
Clinical importance of the treatment difference discussed
Standard deviation of treatment difference (or margin) or hazard rates, median survival, event rate, or responder rate in each study arm. Data on which assumption was based; why plausible for the planned study
Type I error: one sided or two sided test
Type II error (power of a trial is 1−probability of a type II error)
Sample size: evaluable number of patients required for analysis or in the case of an event driven study, the number of events. The evaluable number of patients required for analysis (obtained from the sample size calculation before adjusting for withdrawals). If only the total number of subjects to be enrolled was reported then the number of evaluable patients was calculated using the assumed withdrawal rate. If the research protocol only reported one value for the sample size with no information on assumed withdrawals then this figure was entered into the database
Withdrawal or dropout rate
Interim analysis and strategy to control type I error
Multiple comparisons and strategy to control type I error
Additional information: additional variables needed to perform sample size calculations for specific statistical tests—for example, analysis of covariance, negative binomial model, non-parametric tests; and sensitivity analyses
To verify the data sources we checked that the information in the research ethics database was consistent with the research protocol on file at the research ethics committee office.

Data analysis

The database was analysed using SPSS version 19. We describe the results using frequency tables with percentages, cross tabulations, relative risks with 95% confidence intervals, and box plots and Bland-Altman plots.
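For illustration, a relative risk with a Wald confidence interval on the log scale can be computed from 2×2 counts as follows. This is a sketch, not the authors' SPSS code, and the example counts are hypothetical, chosen only to be consistent with a relative risk of about 1.69 (95% confidence interval 1.38 to 2.08) of the kind reported later in the paper:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def relative_risk(events1, n1, events2, n2, level=0.95):
    """Relative risk of group 1 v group 2 with a Wald CI on the log scale."""
    rr = (events1 / n1) / (events2 / n2)
    se = sqrt(1/events1 - 1/n1 + 1/events2 - 1/n2)  # SE of log(RR)
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# hypothetical counts: 79/132 non-commercial v 111/314 commercial protocols
rr, lo, hi = relative_risk(79, 132, 111, 314)
print(round(rr, 2), round(lo, 2), round(hi, 2))  # 1.69 1.38 2.08
```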

Assessing the sample size determination

We assessed sample size determination based on three factors: the reporting of how the sample size was determined, the reporting of and justification for the design assumptions, and recalculation of the original sample size determination.

Reporting of sample size determination

We reviewed each protocol to determine the presence or absence of the core sample size components. Reporting of additional information such as adjustment for multiple testing (for example, multiple endpoints, multiple checks during interim monitoring) required for the sample size calculation was also documented. We did not assess the appropriateness of the proposed methods of analysis.

Reporting and justifying design assumptions

Each design assumption was categorised (box 2). We also documented the reporting of sensitivity analyses and consideration of an adaptive design. We did not independently assess the appropriateness of the design assumptions.

Box 2 Categories for assessing the justification of design assumptions

Basis of assumption reported: details of the data underpinning the variable (for example, previous studies with the new drug or products in the same therapeutic class, physician survey, meta-analysis, literature search) given
Clinical importance discussed: required more than a simple statement that the "treatment difference was clinically important"; a reference to a specific study or studies in which the clinically relevant difference has been determined, or a detailed clinical discussion of why the investigators considered the difference sought to be meaningful
Plausibility explained: required a discussion of the data underpinning the variable and an explanation of why the value used in the sample size calculation was plausible for the planned study

Recalculation of original sample size determination

The three researchers who created the protocol database recalculated the original sample size according to prespecified rules. Two independent reviewers carried out each recalculation; the researchers met regularly to discuss and agree the final data to be entered into the database. Any outstanding questions were referred to a fourth reviewer for resolution (n=65). If the sample size determination stated that specific statistical software had been used (for example, nQuery Advisor, East, PASS) or referenced a specific publication, then we used the same software or published methodology to recalculate the sample size. If the protocol stated that the sample size was based on a more complex method of analysis, such as analysis of covariance, then we used PASS 11 or nQuery 6.01. Otherwise we used standard formulas for normal, binary, or survival data.4 5 6 Missing information was imputed in four ways. Firstly, if the withdrawal rate was not specified, we recalculated the sample size using the variables given and compared the result with the sample size reported in the protocol. Secondly, if the type I error or type II error (the power of a trial is 1−the probability of a type II error) was not specified, we recalculated the sample size using a two sided 5% type I error or a 20% type II error. Thirdly, when adjustments for multiple testing were not reported, we assumed no adjustment had been applied. Finally, if the sample size was based on a more complex method of analysis but insufficient information was reported to allow recalculation for the planned method, we used standard formulas to recalculate the sample size. We defined two populations for analysis: protocols where missing information was imputed, and protocols that reported all core components and any additional information, such as adjustments for multiple testing, required to accurately recalculate the sample size (complete reporting).
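As a sketch of such a recalculation for normally distributed outcomes, the standard two-sample formula with the review's imputation defaults (two sided 5% type I error, 80% power) and an inflation for withdrawals can be written as below. The specific numbers in the example are illustrative, not taken from any reviewed protocol:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80, dropout=0.0):
    """Evaluable and enrolled patients per arm for a two sided
    two-sample comparison of normal data with equal allocation."""
    z = NormalDist()
    n = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) * sd / delta) ** 2
    evaluable = ceil(n)                          # needed for analysis
    enrolled = ceil(evaluable / (1 - dropout))   # inflated for withdrawals
    return evaluable, enrolled

# illustrative: difference of 5 points, SD of 12, 10% expected dropout
print(n_per_arm(delta=5, sd=12, dropout=0.10))  # (91, 102)
```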
If the ratio of the number of evaluable patients or events reported in the protocol to that calculated fell within the range 0.95 to 1.05, we considered the sample size reproduced, since a difference of 5% or less either way represents an inconsequential reduction or increase in power (approximately 2% for normal, binary, or survival data).
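The roughly 2% figure can be checked with the power function of a two sided two-sample z test. A sketch under illustrative normal-data assumptions (a difference of 5 with SD 12, giving about 80% power at 91 patients per arm):

```python
from statistics import NormalDist

def power(n_per_arm, delta, sd, alpha=0.05):
    """Approximate power of a two sided two-sample z test with equal arms."""
    z = NormalDist()
    ncp = delta / (sd * (2 / n_per_arm) ** 0.5)   # non-centrality parameter
    return z.cdf(ncp - z.inv_cdf(1 - alpha / 2))

# a 5% smaller sample (91 -> 86 per arm) costs only about 2 percentage points
drop = power(91, 5, 12) - power(86, 5, 12)
print(round(drop, 3))
```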

Results

A total of 929 research protocols were identified by the initial search. Of these, 446 met the inclusion criteria (see supplementary figure 5). Table 2 lists the main characteristics of the 446 research protocols (also see supplementary table 5). The most common therapeutic areas were oncology (94; 21%) and endocrinology (49; 11%). Most studies were sponsored by industry (314; 70%), were in phase III (251; 56%), had a parallel group design (319; 72%), and had superiority of the test over control medicinal product as the primary objective (375; 84%). Six (1%) protocols included sample size re-estimation in the study design.
Table 2

Main characteristics of the 446 research protocols

Study characteristic: No (%) of protocols (n=446)

Therapeutic area:
 Oncology: 94 (21)
 Endocrinology: 49 (11)
 Infectious disease: 38 (9)
 Cardiovascular disease: 36 (8)
 Central nervous system: 35 (8)
 Respiratory system: 34 (8)
 Musculoskeletal system: 34 (8)
 Pain and anaesthesia: 27 (6)
 Other therapeutic areas (each <5%): 99 (22)
Commercial status:
 Commercial: 314 (70)
 Non-commercial: 132 (30)
Clinical phase:
 Phase IIb: 102 (23)
 Phase II/III: 5 (1)
 Phase III: 251 (56)
 Phase IV: 88 (20)
Trial design:
 Parallel group: 319 (72)
 Group sequential: 88 (20)
 Crossover: 18 (4)
 Factorial: 13 (3)
 Adaptive: 6 (1)
 Withdrawal: 2 (0.5)
Test hypothesis:
 Superiority: 375 (84)
 Non-inferiority and equivalence: 58 (13)
 Superiority and non-inferiority: 11 (2)
 Not stated: 2 (0.4)

Reporting of sample size components

The individual core components of the sample size were generally reported in the 446 protocols, with the exception of withdrawals (269; 60%, fig 1) (also see supplementary table 6). Of the 446 protocols, 240 (54%) reported all the core components; withdrawal rate was the only element missing in 143 out of 206 (69%) protocols that did not report all core components.

Fig 1 Reporting of core sample size components

When we considered protocols that reported all core components and additional information, such as adjustments for multiple testing, needed to accurately recalculate the sample size (complete reporting), the number reduced to 188 protocols (42%).

Reporting design assumptions

Less than half of the 446 protocols (190; 43%) reported the data on which the treatment difference (or margin) was based. Of the 190 protocols that did report the basis of the treatment difference, 92 (48%) cited previous studies with the product or a product in the same class and 38 (20%) cited a literature search (fig 2 and supplementary table 7). In only four (2%) protocols was the estimated treatment difference based on a meta-analysis. Reporting the basis for the treatment difference was lowest in studies on oncology (28/94; 30%) and cardiovascular disease (12/36; 33%) and highest in those on pain and anaesthesia (16/27; 59%) (see supplementary table 8).

Fig 2 Reporting the design assumptions

Overall, 55 out of 446 (12%) protocols reported both the basis of the treatment effect and its clinical importance, 135 (30%) protocols reported the basis only, and 256 (57%) reported neither. Limited information on the nature of the data underpinning the treatment effect was usually given, and just 13 (3%) protocols gave a reasoned explanation why the value chosen was plausible for the planned study. The same pattern was observed with population variability or survival, with less than half (213/446; 48%) of the protocols reporting the basis of the variable used in the calculation (fig 2 and supplementary table 9). Previous studies, a literature search, or both, were again most commonly cited. The variability or survival estimate was based on a meta-analysis in only two of the 213 (1%) protocols. Again, limited information was usually given, and just 17 (4%) protocols explained the plausibility of the value chosen. Only 11 out of the 446 (3%) protocols reported analyses investigating the sensitivity of the sample size to deviations from the assumptions used in the calculation.

Reporting of strategies to control type I (false positive) and type II (false negative) error

Adjustments for multiple comparisons (81/144; 56%) or interim analyses (56/95; 59%) were reported in just over half of the research protocols with these design features (see supplementary table 10). The potential for increasing the type II error was not considered in any study with multiple comparisons. If all co-primary variables must be significant to declare success, then the type II error rate can be inflated, resulting in a reduction in overall study power.1 22
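To see the inflation, consider the simplest case of independent co-primary endpoints (a simplifying assumption; positively correlated endpoints attenuate the effect): the study succeeds only if every test is significant, so the individual powers multiply.

```python
from math import prod

def joint_power(powers):
    """Power to show significance on ALL co-primary endpoints,
    assuming the test statistics are independent."""
    return prod(powers)

# two endpoints at 90% power each: joint power is about 81%,
# so the type II error grows from 10% to roughly 19%
print(round(joint_power([0.90, 0.90]), 2))  # 0.81
```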

Recalculation of the original sample size determination

If all protocols were considered using the rules for imputing missing information, then 262 out of 446 (59%) sample size determinations could be reproduced, with 51 (11%) under-estimated and 103 (23%) over-estimated. Thirty (7%) of the original sample size calculations could not be recalculated (see supplementary table 11). Figure 3 shows a box plot of the relative differences between the reported and recalculated sample sizes.

Fig 3 Difference between reported and calculated sample size. *Ratio of number of evaluable patients or events reported in protocol to that calculated. †All calculations (n=416) with missing data imputed. Observations below 2.5th (0.61) or above the 97.5th (1.74) centile are excluded. Minimum and maximum values (not shown) observed were 0.12 and 5.21, respectively. ‡Complete reporting (n=188): no data imputation. Minimum and maximum values (not shown) observed were 0.32 and 2.45, respectively. Central boxes span 25th (1.00 for both plots) and 75th (1.05 and 1.03, respectively) centiles, the interquartile range. Horizontal line within box represents median (1.01 in both plots)

A total of 134 of the 188 (71%) sample size calculations from protocols with complete reporting could be reproduced, with 20 (11%) under-estimated and 34 (18%) over-estimated. The reproducibility of the sample size increased with more comprehensive reporting, primarily of withdrawal rates and adjustments for multiple testing. None the less, both analyses showed a tendency for over-estimation, and in total only 134 of the 446 (30%) original sample size calculations could be accurately reproduced. Supplementary figure 6 shows a Bland-Altman plot comparing reported and calculated sample sizes.

Commercial versus non-commercial sponsors

The reporting of the core components of the sample size determination did not differ noticeably between studies with commercial and non-commercial sponsors (fig 4 and supplementary table 12). Studies with non-commercial sponsors were more likely than those with commercial sponsors to report the basis for design assumptions (relative risk 1.69, 95% confidence interval 1.38 to 2.08 for treatment difference and 1.29, 1.07 to 1.56 for variance and survival). Conversely, studies with non-commercial sponsors were less likely than those with commercial sponsors to report adjustments for multiple comparisons (0.26, 0.13 to 0.50) and interim analyses (0.54, 0.31 to 0.93) and provide complete reporting (0.60, 0.45 to 0.81); the sample size calculation from protocols of studies with non-commercial sponsors was also less likely to be reproduced (0.72, 0.59 to 0.88).

Fig 4 Reporting by commercial status


Discussion

Our review suggests that the reporting of the sample size determination in the research protocol often lacks essential information. Treatment difference and type I error were usually given, but withdrawal rates and adjustments for multiple testing were often missing. Only 188 of 446 (42%) protocols contained sufficient information to accurately recalculate the sample size. More than half of the research protocols provided no justification for the assumptions used in the sample size calculation. When a justification was given, it generally lacked detail. Sensitivity analyses, which can help investigators understand the reliability of the variables used in the sample size calculation and whether sample size re-estimation should be included in the study design, were rarely reported.23 24 25 Imputing missing information resulted in 262 out of 446 (59%) reproduced sample sizes. This increased to 134 out of 188 (71%) when only complete reports were considered. Overall, only 134 of the 446 (30%) sample size calculations could be accurately reproduced. Study size tended to be over-estimated rather than under-estimated.

Our research, the first extensive review of unpublished research protocols, raises several problems with the statistical planning of randomised controlled trials, in particular the limited consideration afforded to the choice of design assumptions. Sample size determinations are highly sensitive to changes in design assumptions, which behoves sponsors to be rigorous when estimating these variables.26 Moreover, if the degree of uncertainty is high then design assumptions should be checked during the course of the trial.26
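That sensitivity is easy to quantify: for a normal outcome the required sample size scales with (SD/difference)², so a modest error in the assumed standard deviation moves the sample size substantially. A minimal illustration (the 20% figure is an arbitrary example, not drawn from any reviewed protocol):

```python
def inflation(relative_sd_error):
    """Relative increase in sample size when the true SD is
    (1 + relative_sd_error) times the assumed SD, since n is
    proportional to SD squared."""
    return (1 + relative_sd_error) ** 2 - 1

# a 20% underestimate of the SD means about 44% more patients are needed
print(round(inflation(0.20), 2))  # 0.44
```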

Limitations of this review

We only reviewed the research protocol submitted to the research ethics committee and had no access to any other documents. Moreover, our review was completely independent of the ethical review process. The protocols were submitted in 2009 to the UK National Research Ethics Service and reflect clinical research practice at that time. None the less, the sample is relatively recent, and many sponsors planned to include sites both within and outside the United Kingdom, so we believe our findings can be generalised to other countries and regions for commercial studies, where global regulatory requirements exist. For non-commercial studies, the quality of reporting depends on the investigators' experience. We did not verify the appropriateness of the design assumptions used in the sample size determinations in this research project.

Implications of the findings

In many instances the validity of the sample size determination and by extension the scientific validity of the study—one of the main aspects of the ethical review process—could not be judged.2 The available evidence suggests that key sample size assumptions are not determined in a rigorous manner. This may explain why large differences have been observed between design assumptions and observed data.13 27 Furthermore, sample sizes tended to be over-estimated, which is a concern given the challenges of recruiting to randomised controlled trials.28 Finally, methodologies to check assumptions and re-estimate sample size during the study are often not applied, despite the fact that these methods are encouraged by regulatory authorities.29 30 Investigators should be rigorous in the determination of design assumptions. There is no "one size fits all" approach. Sufficient information should be reported to allow the sample size to be reproduced and to show that there is solid reasoning behind the assumptions used in the calculation (box 3).

Box 3 Recommended reporting of the sample size determination

All components necessary to reproduce the sample size, in particular the withdrawal or dropout rate and adjustments for multiple comparisons or interim analyses
Confidence intervals for variables used in the calculation
A concise summary of the data from which variable estimates are derived; if a variable is based on previous studies, give details of the study design, clinical phase, study population, relevant outcome measures, relevant results, and study size, ideally in a table
Discussion of the clinical importance of the treatment effect
A reasoned explanation of why the treatment difference and other design assumptions are plausible for the planned study, taking into account: all existing data (for example, previous clinical studies, relevant clinical pharmacology such as the dose effect relation, and non-clinical data); how any differences between the previous studies and the planned study affect the design assumptions; and how robust the sample size or statistical power is to different assumptions (sensitivity analysis)
If the variable estimates are considered unreliable, re-estimation of the sample size during the study could be considered

We would also ask the suppliers of software used to calculate sample size to consider including the withdrawal and dropout rate in the package, to ensure that this is taken into account and reported in the research protocol. A poorly designed trial cannot be saved once it is completed. Greater transparency in the reporting of sample size determinations in research protocols would facilitate the early detection of deficiencies in the study design. Moreover, better justification of the design assumptions in the research protocol would facilitate the overall ethical review process.31 Despite calls for a different approach to sample size determination, we believe that there is no substitute for spending time designing the study and giving due consideration to the risks and how these can be tackled.13 32 Wherever the responsibility for scientific and statistical review lies, we believe clear guidance on the sample size determination should be provided and followed.

Individuals with appropriate statistical expertise should also play a central role in the ethical review of research protocols.33 Improving the review process to place more focus on study design was the aim of the National Research Ethics Service at the start of this project, and we propose to use the results of our research to develop guidance, working with the ethics service and others interested in this area.

Key messages

Sample size determination is an accepted and important part of the planning process for randomised controlled trials
Sample size reporting in publications often lacks essential information
Sample size reporting in original research protocols is often incomplete, and in many instances the reliability of the design assumptions and hence the validity of the sample size determination cannot be judged
The ethical review process should place greater focus on study design
Withdrawal and dropout rates are frequently not reported, so suppliers of sample size software could include this variable in the package to improve reporting
References (23 in total; first 10 listed)

1. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA 2000.
2. Chalmers I. Cardiotocography v Doppler auscultation: all unbiased comparative studies should be published. BMJ 2002.
3. Wood L, Egger M, Gluud LL, et al. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ 2008.
4. Savulescu J, Chalmers I, Blunt J. Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ 1996.
5. Norman G, Monteiro S, Salama S. Sample size calculations: should the emperor's clothes be off the peg or made to measure? BMJ 2012.
6. Campbell MJ, Julious SA, Altman DG. Estimating sample sizes for binary, ordered categorical, and continuous outcomes in two group comparisons. BMJ 1995.
7. Altman DG. Statistics and ethics in medical research: III How large a sample? Br Med J 1980.
8. Djulbegovic B, Kumar A, Magazin A, et al. Optimism bias leads to inconclusive results: an empirical study. J Clin Epidemiol 2010.
9. McDonald AM, Knight RC, Campbell MK, et al. What influences recruitment to randomised controlled trials? A review of trials funded by two UK funding agencies. Trials 2006.
10. Charles P, Giraudeau B, Dechartres A, Baron G, Ravaud P. Reporting of sample size calculation in randomised controlled trials: review. BMJ 2009.
