
Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols.

An-Wen Chan, Asbjørn Hróbjartsson, Karsten J Jørgensen, Peter C Gøtzsche, Douglas G Altman.

Abstract

OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials.
DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURES: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between information presented in the protocol and the publication.
RESULTS: Only 11/62 trials described existing sample size calculations fully and consistently in both the protocol and the publication. The method of handling protocol deviations was described in 37 protocols and 43 publications. The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials), methods of handling protocol deviations (19/43) and missing data (39/49), primary outcome analyses (25/42), subgroup analyses (25/25), and adjusted analyses (23/28). Interim analyses were described in 13 protocols but mentioned in only five corresponding publications.
CONCLUSION: When reported in publications, sample size calculations and statistical methods were often explicitly discrepant with the protocol or not pre-specified. Such amendments were rarely acknowledged in the trial publication. The reliability of trial reports cannot be assessed without having access to the full protocols.

Year:  2008        PMID: 19056791      PMCID: PMC2600604          DOI: 10.1136/bmj.a2299

Source DB:  PubMed          Journal:  BMJ        ISSN: 0959-8138


Introduction

Sample size calculations and data analyses have an important impact on the planning, interpretation, and conclusions of randomised trials. Statistical analyses often involve several subjective decisions about which data to include and which tests to use, producing potentially different results and conclusions depending on the decisions taken.1 2 3 4 5 6 7 Methods of analysis that are chosen or altered after preliminary examination of the data can introduce bias if a subset of favourable results is then reported in the publication. The study protocol plays a key role in reducing such bias by documenting a pre-specified blueprint for conducting and analysing a trial. Explicit descriptions of methods before a trial starts help to identify and deter unacknowledged, potentially biased changes made after reviewing the study results.8 9 10 To evaluate the completeness and consistency of reporting, we reviewed a comprehensive cohort of randomised trials and compared the sample size calculations and data analysis methods described in the protocols with those reported in the publications.

Methods

We included all published parallel group randomised trials approved by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark, from 1 January 1994 to 31 December 1995. We defined a randomised trial as a research study that randomly allocated human participants to healthcare interventions. We manually reviewed the committees’ files in duplicate to identify eligible studies as part of a separate study of outcome reporting.9 We confirmed journal articles for each trial by surveying investigators and searching PubMed, Embase, and the Cochrane Controlled Trials Register.9

For each trial, we reviewed the protocol, statistical appendices, amendments, and publications that reported the primary outcome measures. Two reviewers used electronic forms to extract data relating to the design, sample size calculation, and statistical analyses independently and in duplicate; we resolved disagreements by discussion. No reviewer extracted data from both the protocol and publication for the same trial.

The pre-specified primary outcomes of our study were the proportion of trial protocols and publications that did not provide key information (described below) about sample size calculations and statistical methods, and the proportion of trials with discrepancies between the information presented in the protocol and the publication. We considered a difference of 10% or more between the calculated sample sizes in the protocol and publication to be a discrepancy, as well as any qualitative or quantitative difference in the other information we examined.

From publications, we noted the achieved sample size and any statements about early stopping or whether analyses were pre-specified or amended. From both protocols and publications, we categorised the design framework of the trial as superiority, non-inferiority, or equivalence (box 1). We also recorded the fundamental components of sample size calculations (table 1); the basis on which the minimum clinically important effect size (delta) and estimated event rates were derived (for example, previous literature or pilot data); and any reference to re-estimation of sample size on the basis of an interim analysis. When multiple sample size calculations were reported, we focused on the one for the primary outcome measure.
Table 1

 Reporting of sample size calculations in trial protocols and publications

Component of sample size calculation                 No of trials reporting each component (n=62)*
                                                     Protocol    Publication    Both
Name of outcome measure                              51          31             28
Minimum clinically important effect size (delta)     53          33             32
Estimated event rate in each study arm†              20/27       12/16          9/14
Standard deviation for delta‡                        18/24       9/14           7/12
Alpha (type 1 error rate)                            50          33             31
Power                                                51          34             32
Calculated sample size                               62          30             30
All components reported                              37          21             18

*Among trials reporting at least one component of sample size calculation in trial protocol.

†For trials reporting sample size calculations using binary outcome measures.

‡For trials reporting sample size calculations using continuous outcome measures.
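
For context, the components in table 1 combine in a standard way. For a two sided superiority comparison of two means (a textbook formula, not one taken from the trials reviewed here), the required number of participants per arm is approximately

n = 2 (z_{1−α/2} + z_{1−β})² σ² / δ²

where δ is the minimum clinically important effect size, σ the standard deviation, α the type 1 error rate, and 1−β the power; for binary outcomes, σ² is replaced by terms of the form p(1−p) built from the estimated event rates. As a purely illustrative calculation, with α=0.05 (z=1.96), 90% power (z=1.28), σ=1.4, and δ=1, n = 2 × (3.24)² × 1.96 ≈ 41, so about 42 participants per arm after rounding up.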

Box 1 Definitions of collected data

Design framework

Superiority trial—Explicitly described as a study designed to show a difference in effects between interventions, or not explicitly described as an equivalence or non-inferiority trial

Non-inferiority trial—Explicitly described as a study designed to show that one intervention is not worse than another, or a non-inferiority margin is specified, or a one sided confidence interval is presented

Equivalence trial—Explicitly described as a study designed to show that one intervention is neither inferior nor superior to another, or an equivalence margin is specified

Handling of protocol deviations

Intention to treat analysis—All participants with available data are analysed in the original group to which they were randomly assigned (as randomised), regardless of adherence to the protocol. No data are excluded for reasons other than loss to follow-up

Per protocol analysis—Participants with available data are analysed as randomised provided they meet some defined level of adherence to the protocol

As treated analysis—Participants are analysed in the group corresponding to the actual intervention received (ignoring original randomisation)

Primary outcome

Main outcome(s) of interest, identified in the following hierarchical order:
1. Explicitly defined as primary or main
2. Outcome used in the power calculation
3. Main outcome stated in the trial objectives

We also documented how deviations from the treatment protocol were handled in terms of both the stated type of analysis (such as intention to treat) and any additional text description (box 1). For the primary outcome measure of the trial (box 1), we recorded the method of handling missing data and the type of statistical test used. We recorded the factors and outcome measures used in any subgroup analyses; the covariates and outcome measures used in any adjusted analyses; and the number of statistical comparisons described or reported between randomised groups, excluding baseline comparisons. Finally, we recorded the use of interim analyses and data monitoring boards.

Results

We identified 70 parallel group randomised trials that received ethics approval in 1994-5 and were subsequently published (table 2).9 The median publication year was 1999 (range 1995-2003). Fifty two of the trials evaluated drugs, and 56 were funded in full or in part by industry. Most trials involved two study arms (n=47), multiple centres (46), and some form of blinding (49). The median achieved sample size per study arm was 66 (10th-90th centile range 13-324). The most common specialty fields were endocrinology (n=11), anaesthesiology (5), cardiology (5), infectious diseases (5), and oncology (5).
Table 2

 Characteristics of published parallel group randomised trials

Characteristic                      No of trials (n=70)
Intervention:
  Drug                              52
  Surgery/procedure                 9
  Counselling/lifestyle             7
  Equipment                         2
No of study groups:
  2                                 47
  3                                 15
  >3                                8
Blinding:
  Blinded in some way               49
  None                              13
  Unclear                           8
Study centres:
  Single                            24
  Multiple                          46
Funding:
  Industry only                     45
  Industry and non-industry         11
  Non-industry only                 10
  None                              3
  Unclear                           1
Sixty nine trials were designed and reported as superiority trials. One trial was stated to be an equivalence trial in the protocol but reported as a superiority trial in the publication; no explanation was given for the change.

Sample size calculation

Overall, only 11 trials fully and consistently reported all of the requisite components of the sample size calculation in both the protocol and the publication.

Completeness of reporting—An a priori sample size calculation was reported for 62 trials: 28 were described only in the protocol and 34 in both the protocol and the publication. Thirty seven protocols and 21 publications reported all of the components of the sample size calculation (figure). Individual components were reported in 74-100% of protocols and 48-75% of publications (table 1). Nine protocols provided only the calculated sample size without any further details about the calculation. Among trials that reported an estimated minimum clinically important effect size (delta), 20/53 protocols and 10/33 publications stated the basis on which the figure was derived.

Figure: Reporting of sample size calculations and data analyses in publications compared with protocols

Comparison of calculated and actual sample sizes—Sixty two trials provided a calculated sample size in the protocol. Of these, 30 subsequently recruited a sample size within 10% of the calculated figure from the protocol; 22 trials randomised at least 10% fewer participants than planned as a result of early stopping (n=3), poor recruitment (2), and unspecified reasons (17); and 10 trials randomised at least 10% more participants than planned as a result of lower than anticipated average age (1), a higher than expected recruitment rate (1), and unspecified reasons (8). A calculated sample size was as likely to be reported accurately in the publication when there was a discrepancy with the actual sample size as when there was none (11/32 v 14/30).

Discrepancies between publications and protocols—Both the publications and the protocols for 34 trials described a sample size calculation. Overall, we noted discrepancies in at least one component of the published sample size calculation when compared with the protocol for 18 trials (figure). Publications for eight trials reported components that had not been pre-specified in the protocol, and 16 had explicit discrepancies between information contained in the publication and protocol (table 3, box 2). None of the publications mentioned any amendments to the original sample size calculation.
Table 3

 Discrepancies in sample size calculations reported in trial publications compared with protocols

Component of sample size calculation        No of trials with discrepancy
                                            Total    Not pre-specified*    Different from protocol description
Outcome measure (n=31)†                     7        3                     4
Estimated delta (n=33)†                     12       6                     6 (3 larger in protocol; 3 larger in article)
Estimated event rates (n=16)‡               3        3                     0
Estimated standard deviation (n=14)§        5        2                     3 (2 larger in protocol; 1 larger in article)
Alpha (n=33)†                               2        2                     0
Power (n=34)†                               9        2                     7 (5 larger in protocol; 2 larger in article)
Calculated sample size (n=30)†              8        0                     8¶ (7 larger in protocol; 1 larger in article)
Any component (n=34)**                      18       8                     16

*Reported in publication but not mentioned in protocol.

†Among trials reporting component in publication.

‡Among trials reporting event rates for binary outcome measures in publication.

§Among trials reporting standard deviations for continuous outcome measures in publication.

¶Greater than 10% difference in calculated sample size.

**Among trials reporting any component in publication.

Box 2 Anonymised examples of unacknowledged discrepancies in sample size calculations and statistical analyses reported in publications compared with protocols

Sample size calculations

Changed delta and estimated event rates—Outcome: disease progression or death rate. Protocol: delta 10%; event rates unspecified. Publication: delta 6%; event rates 16% and 10%

Changed delta—Outcome: mean number of active joints. Protocol: delta 2.5 joints. Publication: delta 5 joints

Changed estimated standard deviation—Outcome: mean symptom score. Protocol: 1.4. Publication: 0.49

Changed power—Outcome: survival without disease progression. Protocol: 90%. Publication: 80%

Changed calculated sample size—Outcome: thromboembolic complication rate. Protocol: 2200. Publication: 1500

Statistical analyses

Changed statistical test for primary outcome—Outcome: global disease assessment. Protocol: χ2 test. Publication: analysis of covariance

Added covariates for adjusted analysis in publication—Outcome: time to progression or death. Protocol: baseline disease severity. Publication: duration of previous treatment*, type of previous treatment*, blood count*, disease severity

Omitted covariates for adjusted analysis in publication—Outcome: neurological score at six months. Protocol: baseline neurological score, pupil reaction, age, CT scan classification, shock, haemorrhage. Publication: no adjusted analysis reported

*Described explicitly as pre-specified despite not appearing in the protocol

Protocol deviations

The specific method of handling protocol deviations in the primary statistical analysis (as defined in box 1) was named or described in 37 protocols and 43 publications (figure). Overall, the primary method described for handling protocol deviations in the publication differed from that described in the protocol for 19/43 trials; table 4 provides details. None of these discrepancies was acknowledged in the journal publication.
Table 4

 Discrepancies in primary method of handling protocol deviations, as reported in publications compared with protocols

Discrepancy and No                          Primary method(s) described
                                            Protocol                  Publication
Not pre-specified in protocol*—11:
  5                                         None                      ITT
  4                                         None                      Per protocol
  1                                         None                      ITT and per protocol
  1                                         None                      ITT and as treated
Added new method in publication†—3:
  2                                         Per protocol              ITT and per protocol
  1                                         ITT                       ITT and on treatment analysis
Omitted protocol-specified method†—2:
  2                                         ITT and per protocol      Per protocol
Changed method†—3:
  1                                         Per protocol              ITT
  1                                         As treated                ITT
  1                                         ITT                       Per protocol
Total discrepancies*—19

ITT=intention to treat.

*Among 43 trials that described methods of handling protocol deviations in publication.

†Among 32 trials that described methods of handling protocol deviations in both publication and protocol.

Thirty protocols and 33 publications used the term “intention to treat” analysis and applied a variety of definitions (table 5). Few of these protocols (n=7) and publications (3) made it explicit whether study participants were analysed in the group to which they were originally randomised. Most protocols (22) and publications (18) incorrectly excluded participants from the intention to treat analysis for reasons other than loss to follow-up (table 5).
Table 5

 Definitions of “intention to treat” analysis used in protocols and publications

                                                                          Protocols (n=30)    Publications (n=33)
Group assignment for analysis
According to original randomisation:                                      5                   20
  Explicitly described                                                    5                   3
  Assumed on basis of participant flow diagram                            0                   8
  Assumed on basis of same participant numbers at baseline and in results 0                   9
According to intervention actually received*                              2                   0
No description                                                            23                  13
Exclusion of participants from analysis
Exclude losses to follow-up only                                          4                   13
Exclude data from participants owing to†:                                 22                  18
  Inadequate adherence to treatment                                       19                  12
  Other protocol violations                                               6                   5
  Adverse events or lack of efficacy                                      10                  1
  Other reasons                                                           6                   1
Unclear reasons for exclusion                                             4                   2

*Regardless of original randomised group assignment.

†Some trials had multiple reasons for exclusion.


Missing data

The method of handling missing data was described in only 16 protocols and 49 publications (figure). Methods reported in publications differed from the protocol for 39/49 trials. Published methods were often not pre-specified in the protocol (38/49). For one trial, the protocol stipulated that missing data would be counted as failures, whereas in the publication they were excluded from the analysis.
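
A hypothetical illustration (the figures are ours, not from any trial in the cohort) shows why this choice matters. Suppose 40 of 100 participants randomised to one arm respond and 10 have missing outcomes: counting missing data as failures gives a response rate of 40/100 = 40%, whereas excluding the missing participants gives 40/90 ≈ 44%. A method chosen after inspection of the data can therefore nudge the apparent treatment effect in a preferred direction.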

Primary outcome analysis and overall number of tests

Fifty four trials designated at least one outcome measure as primary in the protocol (n=49) or publication (43). The statistical method for analysing the primary outcome measure was described in 39 protocols and 42 publications. Overall, the statistical test described in the publication differed from that in the protocol for 25 of the 42 trials (figure, box 2). The median number of between group statistical tests defined in 44 protocols was 30 (10th-90th centile range 8-218); the other 26 protocols contained insufficient statistical detail. Publications for all 70 trials reported a median of 22 (8-71) tests. Half of the protocols (n=36) and publications (34) did not define whether hypothesis testing was one or two sided. One neurology trial used two sided P values in one publication (all P values >0.1) and a one sided P value in another (P=0.028).
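
As background to that last observation (a general statistical relation, not an analysis of that trial's data): for a symmetric test statistic, the one sided P value is half the corresponding two sided value, P(one sided) = P(two sided)/2, so a two sided P of 0.056 becomes a one sided P of 0.028. The choice of sidedness alone can therefore move a result across the conventional 0.05 threshold.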

Subgroup analysis

Overall, 25 trials described subgroup analyses in the protocol (n=13) or publication (20). All had discrepancies between the two documents (figure, box 2). Twelve of the trials with protocol specified analyses reported only some (n=7) or none (5) in the publication. Nineteen of the trials with published subgroup analyses reported at least one that was not pre-specified in the protocol. Protocols for 12 of these trials specified no subgroup analyses, whereas seven specified some but not all of the published analyses. Only seven publications explicitly stated whether the analyses were defined a priori; four of these trials claimed that the subgroup analyses were pre-specified even though they did not appear in the protocol.

Adjusted analysis

Overall, 28 trials described adjusted analyses in the protocol (n=18) or publication (18). Of these, 23 had discrepancies between the two documents (figure, box 2). Twelve of the trials with protocol specified covariates reported no adjustment (n=10) or omitted at least one pre-specified covariate (2) from the published analysis. Twelve of the trials with published adjusted analyses used covariates that were not pre-specified in the protocol. Ten of these trials did not mention any adjusted analysis in the protocol, whereas two trials added new covariates to those specified in the protocol. Publications for only one trial explicitly stated whether the covariates were defined a priori.

Interim analyses and data monitoring boards

Interim analyses were described in 13 protocols, but reported in only five corresponding publications. An additional two trials reported interim analyses in the publications, despite the protocol explicitly stating that there would be none. A data monitoring board was described in 12 protocols but in only five of the corresponding publications.

Discussion

We identified a high frequency of unacknowledged discrepancies and poor reporting of sample size calculations and data analysis methods in an unselected cohort of randomised trials. To our knowledge, this is the largest review of sample size calculations and statistical methods described in trial publications compared with protocols. We reviewed key methodological information that can introduce bias if misrepresented or altered retrospectively.

Our broad sample of protocols is a key strength, as unrestricted access to such documents is often very difficult to obtain.11 Previous comparisons have been limited to case reports,6 small samples,12 13 specific specialty fields,14 and specific journals.15 Other reviews of reports submitted to drug licensing agencies did not have access to protocols.4 16 17

One limitation is that our cohort may not reflect recent protocols and publications, as this type of review can be done only several years after protocol submission to allow time for publication. Whether the widespread adoption of CONSORT and other reporting guidelines for publications has improved the quality of protocols or reduced the prevalence of unacknowledged amendments in recent years is unclear.18 However, our results are consistent with more recent examples of discrepancies.3 12 13 Furthermore, we previously found that the prevalence of publication restrictions stated in industry initiated trial protocols did not change between 1995 and 2004.19 We also acknowledge that detailed statistical analysis plans may not always be included in the application for scientific or ethical review because of varying standards, even though this information has a role in evaluating the validity of a study. However, this does not explain the frequent discrepancies we found between explicit descriptions in protocols and publications (box 2).

We found that sample size calculations and important aspects of statistical analysis methods were often incompletely described in protocols and publications. When reported in the publication, they were discrepant with the protocol for 44-100% of trials. The 70 trials in this study were part of a larger review that found unacknowledged changes to primary outcome measures in more than half of 102 trials,9 so we are not surprised to find frequent discrepancies in other aspects of study conduct.

Because the choice of parameters for sample size calculations and statistical analyses rests on somewhat subjective judgments, it is important to specify them a priori so that they cannot be selectively reported or revised on the basis of the data. This includes defining each component of the sample size calculation20; the analysis plan for primary outcome measures8; the primary population for analysis21; and subgroup,22 23 adjusted, and interim analyses.24 Selective reporting of a favourable subset of the analyses done can lead to inaccurate interpretation of results, similar to selective publication or selective outcome reporting.25 26 Even when analysis plans are reported, concerns may remain that published plans were constructed after exploration of the data.

Accurate reporting of sample size calculations and data analysis methods is important not only for the sake of transparency but also because the choice of methods, and the reasons for choosing them, can directly influence the interpretation and conclusions of study results.1 2 3 4 5 6 27 Public access to full protocols is thus needed to appraise trial publications reliably. Several journals have recognised this principle and require submission of protocols with manuscripts.28 29 30

We identified two types of discrepancies: lack of pre-specification, where published information about the sample size calculation or data analysis methods did not appear in the protocol; and unacknowledged amendments, where published information differed from the protocol. Whereas the first type of discrepancy could be explained by varying standards in protocol content, the second represents an explicit change between ethical approval and the analysis and publication of the trial. Retrospective amendments to methods of statistical analysis can sometimes be justifiable before unblinding of the trial, but no good reason exists to misrepresent the pre-trial sample size calculation or to describe methods misleadingly as pre-specified when the protocol suggests otherwise. Substantive amendments should be submitted to ethics committees and explicitly acknowledged in the publication so that readers can judge the potential for bias.8

Sample size calculations

Although good reasons exist for reporting sample size calculations in publications,20 31 previous reviews of publications have found poor reporting of sample size calculations and their underlying assumptions.14 32 33 34 35 36 These reviews involved specific specialty fields or journals, and only one had access to trial protocols.14 Inadequate reporting and unacknowledged changes in sample size calculations can introduce bias and lead to misinterpretation of trial results. For example, favourable results could be highlighted by changing the primary outcome measure stated to have been used to calculate sample size,9 or by modifying the pre-specified minimum clinically important effect size (delta), which helps to determine whether one study intervention can be declared superior or equivalent to another.6 31 In addition, publications of studies with poor recruitment might report a modified sample size calculation to give the appearance that a smaller sample size had originally been planned. Although methods for valid changes to sample size parameters mid-trial are emerging,37 no such method was mentioned in any study in our sample.
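
A worked illustration of how much leverage delta has (standard sample size arithmetic, not a calculation from any specific trial in our cohort): in the usual two group formula given after table 1, the required sample size is inversely proportional to the square of delta, n ∝ 1/δ², so doubling delta, as in the box 2 example where 2.5 joints became 5 joints, cuts the required sample size to roughly a quarter, all else being equal. Seemingly small, unacknowledged changes to delta can therefore conceal substantial changes in planned study size.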

Data analyses

Although results for primary outcome measures are key determinants of an intervention's benefit-harm profile, we found that the analysis method for primary outcome measures was often undefined in the protocol or altered in the publication. Such unacknowledged changes compound previously identified discrepancies in primary outcome measures.9 10

The recommended primary analysis population for superiority trials is defined by the intention to treat principle, whereby data from all trial participants are analysed according to their randomly allocated study arm, irrespective of the degree of compliance or crossover to other interventions.38 Missing data from losses to follow-up can be handled with various statistical methods that can each produce different results.39 Previous reviews of high impact journals have found that a high proportion of publications provide insufficient details about methods of handling protocol deviations or missing data.40 41 In addition, a comparison of publications with protocols of cancer trials revealed that intention to treat analysis was done more often than was reported in the publications.14 We found omissions from the trial protocol as well as retrospective choices of analysis populations in almost half of the publications in our cohort. This creates the potential for preferential reporting of per protocol analyses over less favourable intention to treat analyses.4 5

The definition of the term intention to treat was also highly variable and often inaccurately applied in our cohort, as has been found in other reviews.40 41 42 To avoid ambiguity, the study arm in which participants are analysed and the criteria for excluding participants from analyses should be explicitly defined in the protocol and publication.8

Likewise, decisions to examine particular subgroups or to include particular covariates in adjusted analyses may be influenced by extensive exploratory analyses of the data.22 Subgroup analyses in publications are often over-interpreted, poorly reported, and lacking in clarity about whether they were pre-specified.13 27 43 44 45 Although exploratory analyses should be identified as such, we found that assertions about analyses being pre-specified were often misleading when compared with protocols. In addition, most interim analyses described in protocols were not mentioned in the publication, which deprives readers of important information about the decisions to stop or continue a trial as planned.24
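
A rough calculation (ours, under the simplifying assumption of independent tests) shows how quickly unplanned testing inflates false positive findings: if each of k independent two sided tests uses α=0.05, the probability of at least one spurious significant result is 1 − 0.95^k. For the median of 22 published tests per trial observed in our cohort, this is 1 − 0.95²² ≈ 0.68, which underlines why pre-specification of analyses matters.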

Conclusions

Our findings support the need to improve the content of trial protocols and to encourage transparent reporting of amendments in publications through research training. In collaboration with journal editors, trialists, methodologists, and ethicists, we have launched the SPIRIT (standard protocol items for randomised trials) initiative to establish evidence based recommendations for the key content of trial protocols.46 Public availability of trial protocols and submissions to regulatory agencies is also necessary to ensure transparent reporting of study methods.16 17 47 48 Prospective trial registration is an effective means of ensuring public access to protocol information, although a limited amount of methodological information is currently recorded on registries.49 50 51

To improve the reliability of published results, investigators should document the sample size calculations and full analysis plans before the trial is started and should then analyse the results with fidelity to the study protocol or describe major amendments in the publication.8 As the guardians of clinical research before study inception, scientific and ethical review committees can help to ensure that statistical analysis plans are well documented in protocols. Only with fully transparent reporting of trial methods and public access to protocols can the results be properly appraised, interpreted, and applied to the care of patients.

What is already known on this topic

The results and conclusions of randomised trials are influenced by the choice of statistical analysis methods and individual components of sample size calculations

If these methodological choices are defined or altered after examination of the data, the potential for biased reporting of favourable results is substantial

What this study adds

Trial protocols and publications are often missing important methodological information about sample size calculations and statistical analysis methods

When described, methodological information in journal publications is often discrepant with information in trial protocols
References (first 10 of 47 shown)

1. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001.

2. Parker AB, Naylor CD. Subgroups, treatment effects, and baseline risks: some lessons from major cardiovascular trials. Am Heart J 2000.

3. Hahn S, Williamson PR, Hutton JL. Investigation of within-study selective reporting in clinical research: follow-up of applications submitted to a local research ethics committee. J Eval Clin Pract 2002.

4. Altman DG, Moher D, Schulz KF. Peer review of statistics in medical research. Reporting power calculations is important. BMJ 2002.

5. Chan AW, Krleza-Jerić K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 2004.

6. Montori VM, Devereaux PJ, Adhikari NK, Burns KE, Eggert CH, Briel M, et al. Randomized trials stopped early for benefit: a systematic review. JAMA 2005.

7. Chan AW, Upshur R, Singh JA, Ghersi D, Chapuis F, Altman DG. Research protocols: waiving confidentiality for the greater good. BMJ 2006.

8. Gravel J, Opatrny L, Shapiro S. The intention-to-treat approach in randomized controlled trials: are authors saying what they do and doing what they say? Clin Trials 2007.

9. Lachin JM. Statistical considerations in the intent-to-treat principle. Control Clin Trials 2000.

10. Assmann SF, Pocock SJ, Enos LE, Kasten LE. Subgroup analysis and other (mis)uses of baseline data in clinical trials. Lancet 2000.
