
Are sample sizes clear and justified in RCTs published in dental journals?

Despina Koletsi, Padhraig S Fleming, Jadbinder Seehra, Pantelis G Bagos, Nikolaos Pandis.

Abstract

Sample size calculations are advocated by the CONSORT group to justify sample sizes in randomized controlled trials (RCTs). The aim of this study was primarily to evaluate the reporting of sample size calculations, to establish the accuracy of these calculations in dental RCTs and to explore potential predictors associated with adequate reporting. Electronic searching was undertaken in eight leading specific and general dental journals. Replication of sample size calculations was undertaken where possible. Assumed variances or odds for control and intervention groups were also compared against those observed. The relationship between parameters including journal type, number of authors, trial design, involvement of methodologist, single-/multi-center study and region and year of publication, and the accuracy of sample size reporting was assessed using univariable and multivariable logistic regression. Of 413 RCTs identified, sufficient information to allow replication of sample size calculations was provided in only 121 studies (29.3%). Recalculations demonstrated an overall median overestimation of sample size of 15.2% after provisions for losses to follow-up. There was evidence that journal, methodologist involvement (OR = 1.97, CI: 1.10, 3.53), multi-center settings (OR = 1.86, CI: 1.01, 3.43) and time since publication (OR = 1.24, CI: 1.12, 1.38) were significant predictors of adequate description of sample size assumptions. Among journals, JCP had the highest odds of adequately reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. Both assumed variances and odds were found to underestimate the observed values. Presentation of sample size calculations in the dental literature is suboptimal; incorrect assumptions may have a bearing on the power of RCTs.


Year:  2014        PMID: 24465806      PMCID: PMC3897561          DOI: 10.1371/journal.pone.0085949

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Randomized controlled trials (RCTs) are considered the gold standard for assessing the efficacy and safety of an intervention and are the bedrock of evidence-based practice in medicine and dentistry. Appropriate planning of RCTs ensures validity and precise estimation of treatment effects [1]. To increase precision in identifying a difference between treatment modalities, if such a difference exists beyond chance, an a priori estimation of the appropriate number of participants to be included in the trial is required [2], [3]. Additionally, given the implications of RCTs in terms of time and resources, recruitment of the appropriate number of patients is imperative [4]. Unjustifiably large numbers of participants in an RCT risk wasting resources, and may even be unethical by exposing participants to potentially ineffective or harmful treatment. Conversely, small trials may possess insufficient power to detect a clinically significant difference, if such a difference exists [5], [6], [7], [8]. In view of these issues, provision of specific details of sample size calculations in reports of clinical trials or protocol registries is recommended in the CONSORT guidelines. This allows replication of the calculation, verification of appropriate numbers in trials, and prevention of post hoc decisions to reduce the initially calculated necessary sample [9], [10].
Sample size calculations are based on assumptions concerning the expected and clinically important treatment effect of the new intervention compared to the control, and its variance (continuous outcomes only). Additionally, levels of type I error ('alpha') and type II error ('beta'), or power, must be selected. These assumptions are either based on previously published research in the same field or are derived from a pilot study conducted prior to the commencement of the main trial [11].
Incorrect assumptions concerning the expected treatment outcomes risk leading to either underpowered studies or studies that are unnecessarily large [12]. Type I error ('alpha') is usually set at .05 (or, less frequently, .01) and refers to the 5% (or 1%) probability of observing a statistically significant difference between the treatment arms when no such difference exists (false positive). Type II error, on the other hand, is typically set at .2 (or .1) and refers to the probability of not identifying a difference when one exists (false negative). Type II error is more often expressed in terms of power (1 − beta), set at 80% or 90%. Power indicates the probability of observing a difference between treatment arms if such a difference exists. Investigators are more tolerant of false negatives than false positives; this is reflected in the difference between what is considered an acceptable level for type I error (5%) and type II error (10% or 20%). Some allowance for false positive and false negative results is unavoidable if sample sizes are to remain feasible, as a statistical power of 100% would necessitate an infinite number of participants [5], [13]. Despite the importance of sample size calculations during trial design, relatively little attention has been given to assessing their veracity in either the medical or the dental literature. A relatively recent review of six high-impact-factor medical journals revealed that sample size calculations are inadequately reported and often based on inaccurate assumptions [3]. Suboptimal reporting has also been found in studies published within dentistry [14], [15], [16], [17], [18], [19], [20], [21]. In addition to absent or incomplete reporting of sample size calculations, further issues include whether the recruited numbers were calculated correctly from the preset assumptions, and whether those a priori assumptions hold for the observed results [3].
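For a continuous outcome, these quantities combine in the standard normal-approximation formula for comparing two means, n per group = 2σ²(z(1−α/2) + z(1−β))²/Δ². The sketch below is illustrative only; the difference and standard deviation used are hypothetical, not drawn from any trial in this review:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided comparison of two means
    (normal approximation, equal variances assumed in both arms).

    delta: minimal clinically important difference between group means
    sd:    assumed common standard deviation of the outcome
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for type I error
    z_beta = NormalDist().inv_cdf(power)           # critical value for type II error
    return ceil(2 * sd**2 * (z_alpha + z_beta)**2 / delta**2)

# Detecting a difference of 1.0 with SD 2.0 at alpha = .05 and 80% power:
print(n_per_group(1.0, 2.0))               # 63 per group
# Raising power to 90% increases the requirement:
print(n_per_group(1.0, 2.0, power=0.90))   # 85 per group
```

The second call illustrates the trade-off described above: tolerating fewer false negatives (beta = .1 instead of .2) demands noticeably more participants.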
Sample size calculation assumptions can be doctored to fit the available sample size rather than being based on realistic estimates. In particular, setting unrealistically large treatment effects, low variances and low power will result in artificially low sample size requirements. This pattern has been highlighted for continuous outcomes published in high-impact medical journals, with these studies often underpowered and predicated on optimistic assumptions [12]. The aim of the present study was to assess the quality and adequacy of sample size calculations and assumptions in RCTs published in eight leading dental journals over the past 20 years, to verify the accuracy of these calculations, and to compare the initial assumptions with the observed values. A secondary, exploratory aim was to investigate factors associated with correct performance of sample size calculations in dental specialty journals.

Materials and Methods

The archives of eight leading dental specialty and general-audience journals with the highest impact factors were screened by three authors (DK, JS, PSF) for reports of RCTs published over the last 20 years (1992–2012):

- American Journal of Orthodontics and Dentofacial Orthopedics (AJODO)
- British Journal of Oral and Maxillofacial Surgery (BJOMS)
- International Journal of Prosthodontics (IJP)
- Journal of Clinical Periodontology (JCP)
- Journal of Endodontics (JE)
- Pediatric Dentistry (PD)
- Journal of the American Dental Association (JADA)
- Journal of Dental Research (JDR)

Journals were searched electronically using the terms "randomized" or "randomised" in all fields, and titles and abstracts were screened for potential inclusion by two authors (DK, PSF). All types of trial design were considered, including parallel with two or more arms, split-mouth, crossover, cluster, factorial and non-inferiority. Full-text versions of the selected papers, together with any supplementary material providing details of trial methodology and sample size calculation, were assessed. Data abstraction forms were developed and two authors (DK, PSF) were calibrated by piloting 20 selected articles. For each paper, all details contributing to sample size estimation were recorded, including: type I error (alpha); power; and assumptions for the intervention and control groups relating to the outcome under investigation (mean and standard deviation of the difference for continuous outcomes; proportions or rates of events and their difference for dichotomous and time-to-event outcomes). The target sample size indicated by the a priori sample size calculation, the numbers of participants recruited and lost to follow-up, and whether clustering effects were accounted for during sample size calculation were also recorded. Finally, where applicable, the variances assumed for the sample size calculation and those observed after completion of the study were recorded for continuous outcomes.
Assumed and observed odds ratios (ORs) and hazard ratios (HRs) were recorded for dichotomous and time-to-event outcomes, respectively. The following additional characteristics were also recorded for each study: number of authors, geographical region of the first author (Europe, Americas or Asia/other), publication year, single- or multi-center status, methodologist involvement, and statistical significance of the result. Collaboration with a methodologist was determined from the authors' affiliation information and from any explicit statement in the "Methods" section of the study. For each study providing sufficient details, the sample size calculation was replicated. To be included in this subgroup, complete reporting of type I error, one- or two-tailed testing, power, and assumptions for the intervention and control groups (i.e., mean and standard deviation of the difference for continuous outcomes; proportions/rates of events and their difference for dichotomous/time-to-event outcomes) was deemed necessary. Where only the type I error was not provided, an alpha level of .05 with a two-tailed test was inferred. The calculations were replicated with statistical software using the sampsi, stpower, fpower and sampclus families of commands where necessary (Stata 12.1, StataCorp, College Station, TX, USA). The standardized difference (%) between the actual and the estimated sample size was calculated as:

standardized difference (%) = 100 × (actual sample size − estimated sample size) / actual sample size

The standardized difference (%) between the assumed and observed square roots of the variances (i.e., standard deviations) for continuous outcome estimates was calculated accordingly:

standardized difference (%) = 100 × (assumed SD − observed SD) / assumed SD

Finally, the ratio of odds ratios (ROR) of the assumed vs. the observed ORs or HRs (ROR = assumed / observed) was calculated for binary and time-to-event outcome estimates, allowing the degree of under- or over-estimation of the required sample size to be quantified.
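The two discrepancy measures can be expressed as small helper functions. This is a minimal sketch; the choice of denominator is an assumption on our part, with signs chosen to match the interpretations given in the Results (positive standardized difference = more participants recruited than required; ROR below 1 = optimistic effect assumption):

```python
def standardized_diff_pct(actual, estimated):
    """Standardized difference (%) between the sample size actually used and
    the recalculated requirement. Positive values: the trial recruited more
    participants than the a priori assumptions demanded; negative values:
    the sample size was underestimated."""
    return 100 * (actual - estimated) / actual

def ror(assumed, observed):
    """Ratio of the assumed to the observed OR (or HR). Values below 1
    indicate optimistic assumptions about the treatment effect."""
    return assumed / observed

# A trial that recruited 50 participants when the recalculated need was 42:
print(standardized_diff_pct(50, 42))  # 16.0 (overestimation)
# An assumed OR of 0.50 against an observed OR of 0.80 (illustrative values):
print(ror(0.50, 0.80))                # 0.625
```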

Statistical Analysis

Descriptive statistics were calculated for the total number of RCTs identified in each journal and geographical area, and for the remaining study characteristics. Initially, Pearson's chi-squared test was used to assess the association between sufficient reporting of the sample size calculation and trial characteristics, including journal of publication, continent of publication, number of authors, trial design, methodologist involvement, number of research centers and arms, and significance of the results. Univariable and multivariable logistic regression modeling was then used to determine the association between the feasibility of sample size recalculation and predictor variables including journal, methodologist involvement, number of research centers and year of publication. Model fit was assessed using the Hosmer-Lemeshow test. The journal PD was excluded from the logistic regression analysis, as no studies published in this journal provided sufficient information to allow sample size recalculation. All statistical analyses were conducted with statistical software (Stata 12.1, StataCorp, College Station, TX, USA).

Results

Four hundred and thirteen RCTs were identified in the eight leading dental journals (Table 1, Figure 1). The highest number of RCTs was published in JCP, followed by JE, AJODO and JDR. The majority of studies were conducted in a single center, by European researchers, without the involvement of a methodologist. Two-arm parallel trials were most frequent, a slight majority of studies reported statistically significant primary outcomes, and most studies involved 5 to 7 authors.
Table 1

Characteristics of the identified RCTs (n = 413).

Characteristic | Inadequately reported (n = 292), No. (%) | Adequately reported (n = 121), No. (%) | Total (n = 413), No. (%) | p-value*
Journal | | | | <0.001
AJODO | 45 (15) | 16 (13) | 61 (15) |
BJOMS | 37 (13) | 1 (1) | 38 (9) |
IJP | 26 (9) | 6 (5) | 32 (8) |
JCP | 63 (22) | 64 (53) | 127 (31) |
JE | 55 (19) | 14 (12) | 69 (17) |
PD | 16 (5) | 0 (0) | 16 (4) |
JADA | 25 (9) | 5 (4) | 30 (7) |
JDR | 25 (9) | 15 (12) | 40 (10) |
Continent | | | | 0.16
Europe | 134 (46) | 67 (55) | 201 (49) |
Americas | 111 (38) | 35 (29) | 146 (35) |
Asia/other | 47 (16) | 19 (16) | 66 (16) |
No. authors | | | | <0.001
1–4 | 127 (43) | 29 (24) | 156 (38) |
5–7 | 131 (45) | 66 (55) | 197 (48) |
>7 | 34 (12) | 26 (21) | 60 (15) |
Trial design | | | | 0.02
Cluster | 1 (0) | 4 (3) | 5 (1) |
Crossover | 37 (13) | 12 (10) | 49 (12) |
Factorial | 0 (0) | 1 (1) | 1 (0) |
Non-inferiority | 3 (1) | 5 (4) | 8 (2) |
Parallel | 208 (71) | 86 (71) | 294 (71) |
Split-mouth | 43 (15) | 13 (11) | 56 (14) |
Methodologist involvement | | | | <0.001
No | 245 (84) | 83 (69) | 328 (79) |
Yes | 47 (16) | 38 (31) | 85 (21) |
Center | | | | <0.01
Single center | 251 (86) | 90 (74) | 341 (83) |
Multi center | 41 (14) | 31 (26) | 72 (17) |
Significance | | | | 0.44
No | 140 (48) | 53 (44) | 193 (47) |
Yes | 152 (52) | 68 (56) | 220 (53) |
No. arms | | | | 0.21
2 | 238 (82) | 103 (85) | 341 (83) |
3 | 42 (14) | 10 (8) | 52 (13) |
4 | 8 (3) | 6 (5) | 14 (3) |
5 | 2 (1) | 1 (1) | 3 (1) |
6 | 2 (1) | 0 (0) | 2 (0) |
8 | 0 (0) | 1 (1) | 1 (0) |
Total | 292 (100) | 121 (100) | 413 (100) |

*Pearson chi-squared test.

Figure 1

Flowchart of included studies.

Sufficient data to allow replication of the sample size calculation was provided in 121 (29.3%) RCTs (Table 2). Most of the studies pre-specified a power of 80% to correctly identify a difference if one existed (n = 80, 66%), while in 24 studies (20%) power was set at 90%. The cut-off point for a false positive result was 5% (alpha = .05) in almost all of the included studies (n = 116/121). With regard to the type of outcome under evaluation, continuous outcomes predominated (n = 92/121, 76%), followed by binary outcomes (n = 23/121, 19%), with few studies considering time-to-event or ordinal outcomes.
Table 2

Alpha level, power level and type of outcome for RCTs where recalculation of the sample size was feasible (n = 121).

Alpha | No. | %
0.01 | 3 | 2
0.025 | 2 | 2
0.05 | 109 | 90
Inferred 0.05 | 7 | 6
Power (%) | |
75 | 1 | 1
80 | 80 | 66
82 | 1 | 1
85 | 5 | 4
86 | 2 | 2
90 | 24 | 20
94 | 1 | 1
95 | 6 | 5
99 | 1 | 1
Outcome | |
Binary | 23 | 19
Continuous | 92 | 76
Time to event | 4 | 3
Ordinal | 2 | 2
Total | 121 | 100
The standardized percentage difference between the sample size used and that recalculated was determined in the 121 RCTs that allowed replication of the calculations (Table 3). The discrepancy ranged from −237.5% to 84.2%, with a median of 15.2% (IQR = 35); positive values indicate that the sample size recruited exceeded requirements based on a priori assumptions. The subgroups of studies that overestimated, underestimated or correctly calculated the required sample size, based on a priori assumptions, are presented in Table 4.
Table 3

Median, range and interquartile range for standardized percentage difference where sample recalculation was feasible (n = 121).

Subgroup | N | Median | Min | Max | IQR
Journal | | | | |
AJODO | 16 | 3.5 | −93.3 | 45.0 | 38.5
BJOMS | 1 | 19.3 | 19.3 | 19.3 | 0.0
IJP | 6 | 22.2 | −53.5 | 67.6 | 85.6
JCP | 64 | 19.1 | −237.5 | 61.9 | 30.7
JE | 14 | 11.3 | −73.3 | 84.2 | 23.7
JADA | 5 | 46.7 | −66.0 | 65.5 | 16.6
JDR | 15 | 15.2 | −232.2 | 72.2 | 40.0
Continent | | | | |
Europe | 67 | 12.5 | −237.5 | 84.2 | 33.3
Americas | 35 | 12.9 | −73.3 | 72.2 | 51.5
Asia/other | 19 | 20.0 | −140.0 | 67.6 | 29.1
No. authors | | | | |
1–4 | 29 | 4.0 | −237.5 | 84.2 | 67.4
5–7 | 66 | 19.3 | −132.3 | 67.6 | 27.3
>7 | 26 | 13.8 | −232.2 | 72.2 | 28.8
Trial design | | | | |
Cluster | 4 | 4.3 | −11.1 | 22.6 | 30.7
Crossover | 12 | 28.2 | 3.2 | 84.2 | 41.3
Factorial | 1 | 26.4 | 26.4 | 26.4 | 0.0
Non-inferiority | 5 | −4.9 | −232.2 | 25.3 | 37.6
Parallel | 86 | 9.1 | −237.5 | 72.2 | 38.8
Split-mouth | 13 | 48.4 | −80.0 | 65.5 | 31.3
Methodologist involvement | | | | |
No | 83 | 19.4 | −237.5 | 84.2 | 40.0
Yes | 38 | 4.5 | −232.2 | 46.1 | 39.6
Center | | | | |
Single center | 90 | 12.7 | −237.5 | 84.2 | 40.1
Multi center | 31 | 18.2 | −232.2 | 67.6 | 21.9
Significance | | | | |
No | 53 | 19.4 | −237.5 | 72.2 | 32.1
Yes | 68 | 12.5 | −232.2 | 84.2 | 40.3
No. arms | | | | |
2 | 103 | 14.3 | −237.5 | 84.2 | 35.0
3 | 10 | 10.2 | −132.3 | 58.3 | 44.6
4 | 6 | 30.7 | −42.9 | 55.6 | 88.4
5 | 1 | 4.2 | 4.2 | 4.2 | 0.0
8 | 1 | 26.4 | 26.4 | 26.4 | 0.0
Total | 121 | 15.2 | −237.5 | 84.2 | 35.0
Table 4

Standardized percentage difference per subgroup for RCTs where recalculation of the sample sizes was feasible (n = 121).

Standardized difference | Recalculated > actual* (n = 35) | Recalculated < actual** (n = 84) | Recalculated = actual (n = 2)
Median | −30.00 | 23.60 | 0.00
Min | −237.50 | 1.60 | 0.00
Max | −1.70 | 84.20 | 0.00
IQR | 53.35 | 22.58 | 0.00

*Recalculated > actual indicates underestimation of the sample size by the authors of the RCTs and is characterized by negative values for the calculated standardized difference.

**Recalculated < actual indicates overestimation of the sample size by the authors of the RCTs and is characterized by positive values for the calculated standardized difference.

Multivariable logistic regression (Table 5) demonstrated that JCP had the highest odds of adequately reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. The involvement of a methodologist in the statistical analysis and trial methodology resulted in 97% higher odds (OR = 1.97, CI: 1.10, 3.53) of appropriate reporting. There was evidence that a multi-center setting also resulted in 86% higher odds of sufficient reporting to allow recalculation (OR = 1.86, CI: 1.01, 3.43). For each additional year of publication until 2012, the odds of inclusion of sample size assumptions increased by 24% (OR = 1.24, CI: 1.12, 1.38). The predicted probabilities from the adjusted model for sufficient reporting of sample size calculations are shown by year and journal in Figure 2. The journal PD was excluded from the logistic regression analysis, as PD did not contribute any studies with sufficient information to allow sample size recalculation.
Table 5

Univariable and multivariable logistic regression derived ORs and 95% confidence intervals (CI) for the feasibility of sample size recalculation for the identified Randomized Controlled Trials (n = 413).

Variable | Category/Unit | Univariable OR (95% CI) | p-value | Multivariable OR (95% CI) | p-value
Journal | AJODO | 0.35 (0.18, 0.68) | 0.002 | 0.39 (0.19, 0.80) | 0.01
 | BJOMS | 0.03 (0.00, 0.20) | <0.001 | 0.04 (0.00, 0.29) | 0.002
 | IJP | 0.23 (0.09, 0.59) | 0.002 | 0.26 (0.10, 0.71) | 0.01
 | JCP | Baseline (reference) | | |
 | JE | 0.25 (0.13, 0.50) | <0.001 | 0.26 (0.13, 0.54) | <0.001
 | JADA | 0.20 (0.07, 0.55) | 0.002 | 0.15 (0.05, 0.43) | <0.001
 | JDR | 0.59 (0.29, 1.22) | 0.16 | 0.34 (0.15, 0.75) | 0.01
Methodologist involvement | No | Baseline (reference) | | |
 | Yes | 2.29 (1.39, 3.77) | 0.001 | 1.97 (1.10, 3.53) | 0.02
Number of centers | Single center | Baseline (reference) | | |
 | Multi center | 2.03 (1.20, 3.45) | 0.01 | 1.86 (1.01, 3.43) | 0.046
Year | per year | 1.23 (1.12, 1.35) | <0.001 | 1.24 (1.12, 1.38) | <0.001
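The percentage interpretations of these ORs follow from simple arithmetic: an OR expressed relative to the reference category corresponds to an (OR − 1) × 100 percent change in odds. The helper below is hypothetical, shown only to make the conversion explicit:

```python
def or_as_pct_change(odds_ratio):
    """Express an odds ratio as the percentage change in odds relative to
    the reference category (negative = lower odds)."""
    return round((odds_ratio - 1) * 100)

print(or_as_pct_change(0.34))  # -66: JDR vs. JCP, 66% lower odds
print(or_as_pct_change(1.97))  # 97: methodologist involvement, 97% higher odds
print(or_as_pct_change(1.24))  # 24: per additional year of publication
```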
Figure 2

Predicted probabilities of adequate sample size reporting with 95% confidence intervals (CIs) derived from the adjusted model for reporting sample size calculation details based on the journal and year of publication.

Finally, for continuous outcomes, the standardized difference between assumed and observed variance indicated an overall median discrepancy of 2.92% (IQR = 53.8%; Figure 3); negative median values (below zero) indicate overly optimistic assumptions about the expected variance. For binary and time-to-event outcomes, the overall median ratio of assumed to observed ORs was 0.61 (IQR = 1.01; Figure 4), with values less than 1 indicating optimistic assumptions about the expected differences in the odds of events between treatment groups.
Figure 3

Boxplots of percentage standardized difference between assumed and observed variance for continuous outcomes from the RCTs where sample size recalculation was feasible (n = 92/121), by journal of publication.

The horizontal line at zero indicates perfect agreement between assumed and observed variance. Median values below zero indicate optimistic assumptions of variance (smaller than observed) and vice versa.

Figure 4

Boxplots of ratio of Odds Ratios (ORs) of assumed compared to observed ORs or HRs for binary, ordinal and time-to-event outcomes from the RCTs where sample size recalculation was feasible (n = 29/121) based on journal of publication.

The horizontal line at 1 indicates perfect agreement between assumed and observed ORs. Median values below 1 indicate optimistic assumptions about the treatment effect (assumed OR smaller than observed) and vice versa.

The median number of participants required based on the sample size assumptions in the 121 included RCTs was 42 (range: 10–832), whereas the median number recruited was 50 (range: 10–983). A median of two dropouts per trial was recorded. Of the 121 studies that included sufficient data concerning sample size assumptions, 71 demonstrated some form of clustering effect for the primary outcome measure. However, only 10/121 trials accounted for the correlated nature of the data when performing the sample size calculation at the design stage; clustering was accounted for in the statistical analysis in the majority of the 71 affected studies (n = 53, 74.6%).
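Accounting for clustering at the design stage typically means inflating the individually-randomized sample size by the design effect, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. A minimal sketch, with illustrative values not taken from any included trial:

```python
from math import ceil

def inflate_for_clustering(n_individual, cluster_size, icc):
    """Inflate a sample size computed under independence by the design
    effect for (approximately) equal-sized clusters.

    cluster_size: average number of observations per cluster (m)
    icc:          intracluster correlation coefficient
    """
    deff = 1 + (cluster_size - 1) * icc  # design effect
    return ceil(n_individual * deff)

# 63 per group under independence, clusters of 10, ICC = 0.05:
print(inflate_for_clustering(63, 10, 0.05))  # 92 per group
```

Even a modest ICC therefore inflates the requirement appreciably, which is why ignoring clustering at the design stage risks an underpowered trial.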

Discussion

The present work is the first large-scale study to analyze the veracity of sample size calculations in RCTs published in leading specialty and general dental journals, along with associated predictors. The overall median discrepancy identified between presented and recalculated sample sizes was 15.2% (range: −237.5% to 84.2%) after making provision for losses to follow-up, indicating a tendency to slightly over-estimate required numbers. However, this finding was based on a small proportion of RCTs, as inadequate data to allow replication of the sample size calculation was typical (70.7%). This finding mirrors a recent study in orthodontics, which highlighted that replication of calculations was possible in just 29.5% of RCTs; there, the median discrepancy between presented and recalculated sample sizes was 5.3%, although recalculations were conducted on fewer studies (n = 41) [21]. Similarly, comprehensive sample size calculations with adequate data permitting replication were identified in 34% of studies in biomedical research [3] and in only 19% of studies in the field of plastic surgery [22]. The multivariable logistic regression model demonstrated that articles published in JCP had the highest odds of correctly reporting assumptions for the sample size calculation, followed by AJODO and JDR. Evidence of underpowered studies in periodontology has been highlighted in the past; this may have provoked increased awareness of the necessity for clear and accurate reporting of sample size calculations within this specialty [14]. Another finding of note was the discrepancy between the assumed and observed variances and ORs for the intervention and control groups. This difference may lead to insufficiently powered trials that lack the capacity to identify treatment differences, risking incorrect inferences based on inconclusive outcomes [12].
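The bearing of an optimistic variance assumption on power can be quantified with the usual two-sample normal-approximation power formula; the figures below are illustrative only, not drawn from any included trial:

```python
from math import sqrt
from statistics import NormalDist

def achieved_power(n_per_group, delta, sd, alpha=0.05):
    """Power of a two-sided two-sample z-test for a mean difference delta,
    given the per-group size and the *true* standard deviation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    # Standardized detectable effect for this sample size:
    noncentrality = delta / (sd * sqrt(2 / n_per_group))
    return z.cdf(noncentrality - z_alpha)

# Planned around SD = 2.0, so 63 per group gives roughly 80% power...
print(round(achieved_power(63, 1.0, 2.0), 2))  # 0.8
# ...but if the true SD is 2.5, the same trial is underpowered:
print(round(achieved_power(63, 1.0, 2.5), 2))  # 0.61
```

A 25% underestimate of the standard deviation thus erodes nominal 80% power to roughly 60%, which is the mechanism behind the concern raised above.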
The overall overestimation of the final sample size, despite optimistic assumptions on variances and odds of events, is largely explained by inflation of the sample size to account for possible losses to follow-up. A common reason for inadequate variance assumptions is that they are carried over unchanged from initial piloting to a much larger RCT, disregarding the fact that the variance estimate itself is subject to sampling error [11]. Initial piloting within each trial may help optimize sample size calculations. Other methods proposed to overcome the uncertainty of nuisance parameters are "sample size reviews" or "designs with internal pilot studies", which allow recalculation of the sample size during the course of a clinical trial, with subsequent adjustment of the initially planned size [23], [24]. The recruitment of a median of 50 participants in a dental RCT is often a realistic objective, which may reflect researchers' tendency to settle on an achievable and feasible sample size for the research question at hand. However, whether the reported sample size was determined from valid, pre-specified assumptions for the intervention groups can rarely be verified. It is possible that effect sizes and assumptions are manipulated to arrive at the desired number of participants [2]. This practice has led to calls for discontinuation of the use and reporting of sample size calculations [25]. This study also confirmed an apparent lack of reporting of specific trial characteristics that are necessary for accurate sample size estimation. In particular, correlated data, which may produce clustering effects whereby outcomes are more closely matched within clusters than between them, was poorly handled [26]. The number of studies accounting for these effects was disappointing, with only a small fraction reporting sufficient information in the sample size estimation assumptions.
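An internal pilot design of the kind cited above can, in its simplest form, be sketched as re-running the planning formula with the interim variance estimate in place of the assumed one (a deliberately minimal sketch; real internal pilot designs also address type I error control across the interim look):

```python
from math import ceil
from statistics import NormalDist, stdev

def reestimated_n(interim_outcomes, delta, alpha=0.05, power=0.80):
    """Recompute the per-group requirement for a two-mean comparison using
    the standard deviation observed in interim (internal pilot) data
    instead of the planning value."""
    sd_hat = stdev(interim_outcomes)  # sample SD from the internal pilot
    z = NormalDist()
    z_alpha, z_beta = z.inv_cdf(1 - alpha / 2), z.inv_cdf(power)
    return ceil(2 * sd_hat**2 * (z_alpha + z_beta)**2 / delta**2)

# Illustrative interim outcomes (sample SD ~1.58) and a target difference of 1.0:
print(reestimated_n([1, 2, 3, 4, 5], 1.0))  # 40 per group
```

If the interim variance exceeds the planning assumption, the recalculated requirement rises accordingly, which is precisely the adjustment these designs permit mid-trial.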
A limitation of the present work is that outcomes were based solely on reported information from the included RCTs. Protocols of RCTs published in trial registries prior to the commencement of the study would help eliminate possible deviations from a priori assumptions [27]. However, protocols were rarely identified for the RCTs included in the present study. Finally, if reporting of sample size assumptions is to continue, emphasis should be placed on encouraging researchers, authors, editors and peer reviewers to overhaul the reporting quality of submitted clinical trials prior to publication, in line with the CONSORT guidelines for clear and transparent reporting.
References (23 in total)

Review 1.  A systematic review of power and sample size reporting in randomized controlled trials within plastic surgery.

Authors:  Olubimpe Ayeni; Lisa Dickson; Teegan A Ignacy; Achilleas Thoma
Journal:  Plast Reconstr Surg       Date:  2012-07       Impact factor: 4.730

2.  Sample size calculations in randomised trials: mandatory and mystical.

Authors:  Kenneth F Schulz; David A Grimes
Journal:  Lancet       Date:  2005 Apr 9-15       Impact factor: 79.321

Review 3.  Sample size recalculation in internal pilot study designs: a review.

Authors:  Tim Friede; Meinhard Kieser
Journal:  Biom J       Date:  2006-08       Impact factor: 2.207

4.  The reporting of statistical inferences in selected prosthodontic journals.

Authors:  T J Prihoda; E Schelb; J D Jones
Journal:  J Prosthodont       Date:  1992-09       Impact factor: 2.752

5.  Registration of clinical trials.

Authors:  Philip Benson
Journal:  J Orthod       Date:  2011-06

6.  Reporting of research quality characteristics of studies published in 6 major clinical dental specialty journals.

Authors:  Nikolaos Pandis; Argy Polychronopoulou; Phoebus Madianos; Margarita Makou; Theodore Eliades
Journal:  J Evid Based Dent Pract       Date:  2011-06       Impact factor: 5.267

7.  Low power, type II errors, and other statistical problems in recent cardiovascular research.

Authors:  J L Williams; C A Hathaway; K L Kloster; B H Layne
Journal:  Am J Physiol       Date:  1997-07

8.  The continuing unethical conduct of underpowered clinical trials.

Authors:  Scott D Halpern; Jason H T Karlawish; Jesse A Berlin
Journal:  JAMA       Date:  2002-07-17       Impact factor: 56.272

9.  The reporting of randomized controlled trials in prosthodontics.

Authors:  Asbjørn Jokstad; Marco Esposito; Paul Coulthard; Helen V Worthington
Journal:  Int J Prosthodont       Date:  2002 May-Jun       Impact factor: 1.681

Review 10.  Reporting of sample size calculation in randomised controlled trials: review.

Authors:  Pierre Charles; Bruno Giraudeau; Agnes Dechartres; Gabriel Baron; Philippe Ravaud
Journal:  BMJ       Date:  2009-05-12
