
Design effect in multicenter studies: gain or loss of power?

Emilie Vierron, Bruno Giraudeau.

Abstract

BACKGROUND: In a multicenter trial, responses for subjects belonging to a common center are correlated. Such a clustering is usually assessed through the design effect, defined as a ratio of two variances. The aim of this work was to describe and understand situations where the design effect involves a gain or a loss of power.
METHODS: We developed a design effect formula for a multicenter study aimed at testing the effect of a binary factor (which thus defines two groups) on a continuous outcome, and explored this design effect for several designs (from individually stratified randomized trials to cluster randomized trials, and for other designs such as matched pair designs or observational multicenter studies).
RESULTS: The design effect depends on the intraclass correlation coefficient (ICC) (which assesses the correlation between data for two subjects from the same center), but also on a statistic S, which quantifies the heterogeneity of the group distributions among centers (thus the level of association between the binary factor and the center), and on the degree of global imbalance between the two groups (i.e., unequal overall group sizes). This design effect may induce either a loss or a gain in power, depending on whether the S statistic is higher or lower than 1, respectively.
CONCLUSION: We provide a global design effect formula that applies to any multicenter study and identifies the factors - the ICC and the distribution of the group proportions among centers - that are associated with a gain or a loss of power in such studies.


Year:  2009        PMID: 19538744      PMCID: PMC2715424          DOI: 10.1186/1471-2288-9-39

Source DB:  PubMed          Journal:  BMC Med Res Methodol        ISSN: 1471-2288            Impact factor:   4.615


Background

Multicenter studies involve correlation in data because subjects from the same center are more similar than are those from different centers [1]. Such a correlation potentially affects the power of standard statistical tests, and conclusions made under the assumption that data are independent can be invalidated. A usual measure of the clustering effect on an estimator (often a treatment or a group effect) is the design effect (Deff). The Deff is defined as the ratio of two variances: the variance of the estimator when the center effect is taken into account over the variance of the estimator under the hypothesis of a simple random sample [2,3]. The Deff represents the amount by which the sample size needs to be multiplied to account for the design of the study. Ignoring clustering can lead to over- (Deff < 1) or underpowered (Deff > 1) studies. In cluster randomized trials, clustering produces a loss of power and Donner and Klar proposed a method to inflate the sample size to take data correlation into account [4]. On the contrary, in individually randomized trials with equal treatment arm sizes, a center effect induces a gain in power, and sample size can be reduced [5]. Thus, in some situations, correlation in data induces a loss of power, and in others, a gain in power. To our knowledge, complete explanations for this striking discrepancy are lacking. We aimed to produce a measure of clustering in multicenter studies testing the effect of a binary factor on a continuous outcome. We first present the statistical model used and the associated design-effect formula. Then we explore the general form of this design effect under particular study designs. Finally, we give examples to illustrate our results.

Methods and results

Theoretical Issues

The mixed-effects model

Let us consider a multicenter study aimed at comparing two groups on a continuous outcome. Several situations can be considered. If subjects are randomly assigned to a group (e.g., a treatment arm), the study is a randomized trial; otherwise, it is an observational study, and the group depicts exposure to a binary risk factor. Data are distributed as follows:

Y_ijk = μ + α_i + B_j + ε_ijk   (1)

where Y_ijk denotes the response from the kth subject, of the ith group (i = 1, 2), in the jth center (j = 1,..., Q). The overall response mean is μ. Each center is of size m_j = m_1j + m_2j, and each group is of size n_i = Σ_j m_ij, with N = n_1 + n_2 being the total number of subjects in the study. The group effects {α_i} are fixed, with α_1 + α_2 = 0. We assume that centers are a random sample of a large population of centers, so the center effects {B_j} are independent and identically distributed (iid) N(0, σ_B²). The residual errors {ε_ijk} are assumed to be iid N(0, σ_ε²) and independent of {B_j}. The center effect, quantified by the intraclass correlation coefficient (ICC), ρ, defined as the proportion of the total variance that is due to the between-center variability, can be written from model (1) as follows [6]:

ρ = σ_B² / (σ_B² + σ_ε²)   (2)
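
To make model (1) concrete, the sketch below simulates responses from it using only the Python standard library; the variance components, group effects, and center layout are arbitrary illustrative values, not taken from the paper.

```python
import random

def simulate_multicenter(layout, mu=0.0, alpha=(0.5, -0.5),
                         sigma_b=0.5, sigma_e=1.0, rng=random):
    """Draw Y_ijk = mu + alpha_i + B_j + eps_ijk under model (1).

    layout[j] = (m_1j, m_2j): numbers of subjects of groups 1 and 2 in center j.
    Returns (group, center, response) triples.
    """
    data = []
    for j, center in enumerate(layout, start=1):
        b_j = rng.gauss(0.0, sigma_b)          # center effect B_j ~ N(0, sigma_B^2)
        for i, m_ij in enumerate(center, start=1):
            for _ in range(m_ij):
                y = mu + alpha[i - 1] + b_j + rng.gauss(0.0, sigma_e)
                data.append((i, j, y))
    return data

# ICC implied by the chosen components: rho = sigma_B^2 / (sigma_B^2 + sigma_e^2)
rho = 0.5 ** 2 / (0.5 ** 2 + 1.0 ** 2)         # = 0.2
data = simulate_multicenter([(5, 5)] * 10)     # 10 centers, balanced groups
```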

Group effect variance

Two-way ANOVA

The group effect variance can be shown to equal (Appendix 1):

Var(Ȳ_1 - Ȳ_2) = σ_ε² (1/n_1 + 1/n_2) + σ_B² Σ_j (m_1j/n_1 - m_2j/n_2)²   (3)

One-way ANOVA

Ignoring the center effect, model (1) reduces to:

Y_ik = μ + α_i + e_ik   (4)

where Y_ik represents the response from the kth subject in the ith group. The random errors {e_ik} are iid N(0, σ_e²). Thus, the variance of the group effect is as follows:

Var(Ȳ_1 - Ȳ_2) = σ_e² (1/n_1 + 1/n_2)   (5)

and we have (Table 1):

E(MS_res) = σ_ε² + [(N - Σ_i Σ_j m_ij²/n_i) / (N - 2)] σ_B²   (6)
Table 1

One-way ANOVA for data distributed according to the two-way mixed-effects model (1).

Source      DF       SS    E(MS)
Group       2 - 1
Residual    N - 2          σ_ε² + [(N - Σ_i Σ_j m_ij²/n_i) / (N - 2)] σ_B²

Total       N - 1

When data are distributed according to the mixed model (1) but analyzed by performing a one-way ANOVA – as if data were distributed according to model (4) – the expectation of the residual mean squares (which estimates σ_e² in the framework of model (4)) can actually be expressed as a function of σ_B² and σ_ε², the variance components associated with the true underlying statistical model (i.e., the mixed model (1)).


The Design Effect

The Deff measures the effect of clustering on the group effect variance. It is defined as the ratio of the group effect variances (3) over (5). Using equation (6), we have:

Deff = [σ_ε² (1/n_1 + 1/n_2) + σ_B² Σ_j (m_1j/n_1 - m_2j/n_2)²] / [E(MS_res) (1/n_1 + 1/n_2)]   (7)

Multicenter randomized trials often recruit a large number of subjects. Then, assuming a large total sample size and numerous centers, the {m_j} are small in comparison with N, and the ratio E(MS_res)/(σ_B² + σ_ε²) can be approximated by 1. Expression (7) then becomes:

Deff = 1 + (S - 1)ρ   (8)

where ρ is the ICC as defined in (2) and

S = (n_1 n_2 / N) Σ_j (m_1j/n_1 - m_2j/n_2)²
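
With the per-center group sizes in hand, equation (8) is a one-liner. The sketch below (helper names are ours, not the paper's) computes S = (n_1 n_2 / N) Σ_j (m_1j/n_1 - m_2j/n_2)² and Deff = 1 + (S - 1)ρ, and reproduces the "cluster design" column of Table 3 (S = 33.77, Deff = 4.28 for ρ = 0.10).

```python
def s_statistic(m1, m2):
    """S = (n1 * n2 / N) * sum_j (m1j/n1 - m2j/n2)^2: heterogeneity of the
    group distributions among centers."""
    n1, n2 = sum(m1), sum(m2)
    N = n1 + n2
    return (n1 * n2 / N) * sum((a / n1 - b / n2) ** 2 for a, b in zip(m1, m2))

def design_effect(m1, m2, rho):
    """Approximate design effect of equation (8): Deff = 1 + (S - 1) * rho."""
    return 1.0 + (s_statistic(m1, m2) - 1.0) * rho

# "Cluster design" column of Table 3: each center recruits only one group.
group1 = [0, 38, 0, 0, 0, 19, 0, 0, 0, 28]
group2 = [57, 0, 44, 15, 41, 0, 37, 52, 12, 0]
S = s_statistic(group1, group2)                # 33.77
deff = design_effect(group1, group2, 0.10)     # 4.28
```

For a balanced stratified design the same function returns S = 0 and therefore Deff = 1 - ρ.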

Simulation study

We first conducted a simulation study aimed at validating the proposed approximate formula. We considered equal and varying center sizes for 12 combinations of the total sample size and number of centers (100 subjects for 5, 10 or 20 centers; 200 subjects for 5, 10, 20 or 50 centers; 500 subjects for 5, 10, 20, 50 or 100 centers), 4 group distributions (from balanced groups within centers to randomization of centers, which are then nested within the groups) and two ICC values (0.01 and 0.10). One thousand simulations were conducted with SAS 9.1 (SAS Institute, Cary, NC) for each combination of the parameters. Table 2 presents the average exact design effect estimate and the average relative difference between the exact and approximate design effect calculations for all these situations, for varying center sizes (20% of centers recruit 80% of subjects). Although such extreme imbalance in center sizes is unlikely to occur (and not advisable, especially in cluster trial designs including very few centers, such as 5 or 10), it allows testing the robustness of our formula even in extreme situations. Similar results were found for equal center sizes (data not shown). The results show that the approximate design effect formula always slightly underestimates the exact formula, since all relative differences are positive. These differences increase with the ICC and decrease, as expected, as the number of centers increases, but are not influenced by the total number of subjects. Moreover, they globally increase with the design effect. All relative differences are at or below 0.0771, indicating that our formula applies in the majority of multicenter designs, with better accuracy (relative differences less than 0.052) for designs including more than 10 centers.
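
The gist of that comparison can be scripted without fitting any model. Under our reading of the exact ratio (7) – with the one-way residual mean square expectation E(MS_res) = σ_ε² + ((N - Σ_i Σ_j m_ij²/n_i)/(N - 2)) σ_B² in the denominator – the approximation (8) can only undershoot, matching the all-positive relative differences of Table 2. The layout below is a single illustrative "cluster" configuration, not the paper's SAS code.

```python
def deff_exact_and_approx(m1, m2, rho):
    """Exact (eq. 7) vs approximate (eq. 8) design effect.

    Scales variances so that sigma_B^2 + sigma_e^2 = 1, i.e. sigma_B^2 = rho.
    """
    n1, n2 = sum(m1), sum(m2)
    N = n1 + n2
    sb2, se2 = rho, 1.0 - rho
    c = 1.0 / n1 + 1.0 / n2
    het = sum((a / n1 - b / n2) ** 2 for a, b in zip(m1, m2))
    # One-way residual mean square expectation (eq. 6), our reconstruction:
    ems = se2 + sb2 * (N - sum(a * a for a in m1) / n1
                         - sum(b * b for b in m2) / n2) / (N - 2)
    exact = (se2 * c + sb2 * het) / (ems * c)
    approx = 1.0 + ((n1 * n2 / N) * het - 1.0) * rho
    return exact, approx

# S4-like layout: 10 centers of 20 subjects, centers nested within groups.
exact, approx = deff_exact_and_approx([20] * 5 + [0] * 5, [0] * 5 + [20] * 5, 0.10)
```

For this layout the approximation gives 1 + (S - 1)ρ = 2.9 (S equals the mean cluster size, 20), slightly below the exact value, in line with the positive rdiff values reported for scenario S4.
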
Table 2

Validation of the approximate design effect formula.

ICC = 0.01

N subjects         100                     200                             500
N centers            5      10      20       5      10      20      50       5      10      20      50     100

S1  Deff        0.9969  0.9938  0.9921  0.9966  0.9936  0.9922  0.9911  0.9965  0.9933  0.9919  0.9913  0.9908
    rdiff       0.0065  0.0032  0.0016  0.0065  0.0032  0.0016  0.0006  0.0065  0.0032  0.0016  0.0006  0.0003

S2  Deff        0.9972  0.9949  0.9928  0.9972  0.9956  0.9938  0.9917  0.9980  0.9989  0.9956  0.9931  0.9918
    rdiff       0.0065  0.0032  0.0014  0.0065  0.0032  0.0016  0.0005  0.0065  0.0033  0.0016  0.0006  0.0003

S3  Deff        1.0102  1.0306  1.0147  1.0217  1.0622  1.0431  1.0132  1.0575  1.1788  1.1143  1.0487  1.0204
    rdiff       0.0066  0.0035  0.0016  0.0066  0.0036  0.0018  0.0006  0.0066  0.0036  0.0019  0.0007  0.0003

S4  Deff        1.1038  1.0323  1.0285  1.2026  1.0538  1.0604  1.0184  1.4788  1.1290  1.1588  1.0559  1.0186
    rdiff       0.0077  0.0051  0.0027  0.0077  0.0052  0.0030  0.0011  0.0077  0.0053  0.0030  0.0013  0.0006


ICC = 0.10

N subjects         100                     200                             500
N centers            5      10      20       5      10      20      50       5      10      20      50     100

S1  Deff        0.9655  0.9356  0.9197  0.9642  0.9337  0.9209  0.9105  0.9631  0.9313  0.9177  0.9124  0.9076
    rdiff       0.0643  0.0318  0.0155  0.0649  0.0320  0.0160  0.0061  0.0649  0.0324  0.0161  0.0063  0.0031

S2  Deff        0.9709  0.9469  0.9269  0.9696  0.9547  0.9359  0.9171  0.9793  0.9827  0.9549  0.9300  0.9174
    rdiff       0.0656  0.0318  0.0142  0.0648  0.0323  0.0157  0.0053  0.0651  0.0325  0.0161  0.0063  0.0028

S3  Deff        1.1101  1.3018  1.1721  1.2095  1.6471  1.4256  1.1337  1.6662  2.7175  2.1685  1.4965  1.2049
    rdiff       0.0654  0.0349  0.0166  0.0659  0.0354  0.0182  0.0063  0.0662  0.0358  0.0185  0.0074  0.0034

S4  Deff        2.0718  1.3360  1.2725  3.1669  1.5750  1.6252  1.1934  6.2708  2.5759  2.5886  1.5687  1.2017
    rdiff       0.0768  0.0507  0.0272  0.0770  0.0517  0.0299  0.0110  0.0771  0.0513  0.0299  0.0126  0.0059

ICC: Intraclass Correlation Coefficient

Simulations are conducted with varying center sizes: 20% of centers recruit 80% of subjects. Average exact design effect estimate (Deff) and average relative difference (rdiff) between exact and approximate design effect formula are given for 4 situations (Si, i = 1,2,3,4), two ICC values, and obtained for 1000 simulations.

S1: Equal group sizes. In each center, the probability, for a subject, to be in group 1 is 1/2

S2: Slight variations in group 1 proportions among centers. The ratio between the sizes of group 1 and group 2 varies uniformly between 0.8 and 1.25 among centers

S3: Important variations in group 1 proportions among centers. The ratio between the sizes of group 1 and group 2 varies uniformly between 0.1 and 10 among centers

S4: "Cluster design". The center is nested within the group and the probability, for each center, to be in group 1 is 1/2


Some specific designs

Stratified Multicenter Individually Randomized Trial

Assuming that randomization is balanced and stratified on centers, we have equal group sizes (n_1 = n_2 = N/2) and equal numbers of subjects from the two groups in each center (∀ j = 1,..., Q, m_1j = m_2j). The S statistic is then null, and the Deff reduces to:

Deff = 1 - ρ

In a stratified multicenter individually randomized trial, the Deff is thus smaller than 1, and its value decreases as the ICC increases, which yields a gain in power allowing a reduction in sample size, as shown by Vierron et al. [5].

Matched Pair Design

Some studies yield observations that are individually matched, such as cross-over trials, trials on matched subjects (matched, for example, by age or sex) or on matched data (e.g., two eyes from the same subject), or before-after studies. Assuming pairs of matched data, pairs can be considered as centers, leading to a particular case of the stratified multicenter individually randomized trial with m_1j = m_2j = 1. Then the Deff equals:

Deff = 1 - ρ

In a matched pair design, the variance of the differences between paired responses equals:

σ_d² = 2σ²(1 - ρ)

where σ² is the variance of observations in a standard parallel group design. Then, correcting the classical sample size formula for two independent samples with the Deff (1 - ρ) and replacing the σ²(1 - ρ) term by σ_d²/2 leads to the sample size formula used for paired-data studies [7]:

n = σ_d² (z_{1-α/2} + z_{1-β})² / d²

where d is the difference in mean responses from the two groups.
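
Under this reading of the correction, the paired sample size is a two-line computation; the example values (σ² = 1, ρ = 0.5, d = 0.5, two-sided α = 0.05, 80% power) are illustrative, and NormalDist from the standard library supplies the normal quantiles.

```python
import math
from statistics import NormalDist

def n_pairs(sigma2, rho, d, alpha=0.05, power=0.80):
    """Number of matched pairs: n = sigma_d^2 * (z_{1-a/2} + z_{1-b})^2 / d^2,
    where sigma_d^2 = 2 * sigma2 * (1 - rho) is the variance of within-pair
    differences."""
    z = NormalDist().inv_cdf
    sigma_d2 = 2.0 * sigma2 * (1.0 - rho)
    return math.ceil(sigma_d2 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 / d ** 2)

print(n_pairs(1.0, 0.5, 0.5))   # 32 pairs; with rho = 0 the same formula
print(n_pairs(1.0, 0.0, 0.5))   # gives 63, the unmatched per-group size
```

With ρ = 0.5, matching roughly halves the required number of observations per group relative to the independent-samples design.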

Cluster Randomized Trial and Expertise-based Randomized Trial

In a cluster randomized trial, clusters rather than subjects are randomly assigned to a treatment group. Considering centers as clusters, for each center we then have m_1j = 0 or m_2j = 0. Such a design is also encountered in individually randomized trials in which clustering is imposed by the intervention design and is nested within groups, such as when subjects are assigned to two treatment arms for which the intervention is delivered by several physicians, each participating in only one arm of the study [8,9]. In this case, equation (8) reduces to Deff = 1 + (S - 1)ρ with

S = (n_2 / (N n_1)) Σ_{j in arm 1} m_j² + (n_1 / (N n_2)) Σ_{j in arm 2} m_j²

With roughly equal cluster sizes and assuming the same number of subjects in each arm (n_1 = n_2 = N/2), the Deff can be approximated as follows:

Deff ≈ 1 + (m̄ - 1)ρ

where m̄ = N/Q is the mean cluster size. This value is the inflation factor [4] used for sample size calculation in cluster randomized trials.
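
A quick numerical check (arbitrary layout, our own helper) confirms that for equal cluster sizes and equal arms the S statistic collapses to the mean cluster size, so equation (8) recovers the inflation factor 1 + (m̄ - 1)ρ:

```python
def s_statistic(m1, m2):
    """S = (n1 * n2 / N) * sum_j (m1j/n1 - m2j/n2)^2 (general formula)."""
    n1, n2 = sum(m1), sum(m2)
    N = n1 + n2
    return (n1 * n2 / N) * sum((a / n1 - b / n2) ** 2 for a, b in zip(m1, m2))

# 8 clusters of 25 subjects, 4 clusters randomized to each arm.
m1 = [25] * 4 + [0] * 4
m2 = [0] * 4 + [25] * 4
S = s_statistic(m1, m2)               # equals the mean cluster size, 25
deff = 1.0 + (S - 1.0) * 0.05         # inflation factor 1 + (25 - 1) * 0.05 = 2.2
```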

Multicenter Observational Study

In a multicenter observational study, group sizes are likely to differ at the center level (i.e., m_1j ≠ m_2j) or globally (i.e., n_1 ≠ n_2). Nevertheless, with identical group distributions among centers (i.e., the proportion of subjects in group 1 is p ∈ ]0;1[, whatever the center), the S statistic is null and the design effect reduces to:

Deff = 1 - ρ

Thus, in an observational study with all centers having identical group distributions – even if the global group sizes are not equal (i.e., even if n_1 ≠ n_2) – taking the center effect into account leads to increased power, as with stratified individually randomized trials.

No design effect: Deff = 1. From formula (8), Deff = 1 leads to S = 1. Rewriting S as

S = (N / (n_1 n_2)) Σ_j (m_1j - (n_1/N) m_j)²

we obtain a statistic that measures, for group 1 in each center, the difference between the observed group size (i.e., m_1j) and its expected value under the assumption of centers having identical group proportions (i.e., (n_1/N) m_j). Therefore, when this statistic – providing a measure of the heterogeneity of the group distributions among centers (thus the level of association between the group and the center) – is below 1, the Deff is also below 1, and using a statistical model that takes the center effect into account leads to increased power. On the contrary, when the group distributions differ strongly among centers, the S statistic, and thus the Deff, is greater than 1, leading to a loss of power. In the extreme case where centers are totally nested within groups, the loss of power can be substantial, and it has been shown that omitting the center effect in analyses leads to type I error inflation [4]. The link between the power of multicenter studies and the design effect can be established as follows. Let n_i be the size of group i, ES the expected effect size, and z_γ the quantile of the standard normal distribution such that P(Z ≤ z_γ) = γ (Z being N(0,1)).
The sample size calculation formula allowing testing of the group effect on a continuous outcome, corrected for the design effect, is [7,10]:

n_i = 2 Deff (z_{1-α/2} + z_{1-β})² / ES²

Then, the power of any multicenter study depends on the design effect according to the following relation:

Power = Φ( ES / √(Deff (1/n_1 + 1/n_2)) - z_{1-α/2} )

where Φ(·) is the cumulative distribution function of N(0,1). As the design effect increases and exceeds 1, the power decreases and the sample size has to be inflated to reach the nominal power. On the contrary, when the design effect is below 1, the power is larger than the nominal one, allowing a reduction in the required sample size.
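
This power relation translates directly into code; NormalDist provides Φ and its inverse. The effect size and group sizes below are illustrative values, not from the paper.

```python
from statistics import NormalDist

def multicenter_power(es, n1, n2, deff, alpha=0.05):
    """Power = Phi( ES / sqrt(Deff * (1/n1 + 1/n2)) - z_{1-alpha/2} )."""
    nd = NormalDist()
    z = nd.inv_cdf(1.0 - alpha / 2.0)
    return nd.cdf(es / (deff * (1.0 / n1 + 1.0 / n2)) ** 0.5 - z)

# 64 subjects per group, standardized effect size 0.5, two-sided alpha = 5%:
p_iid = multicenter_power(0.5, 64, 64, 1.0)    # about 0.81 with no clustering
p_gain = multicenter_power(0.5, 64, 64, 0.9)   # Deff < 1: power increases
p_loss = multicenter_power(0.5, 64, 64, 1.5)   # Deff > 1: power drops
```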

Example

Table 3 presents data for hypothetical studies of 10 centers of unequal sizes. In each case, the overall proportion of subjects in group 1 equals 25%, but this proportion varies more or less among centers according to the design of the study. The imbalance in center sizes is deliberately less marked than in the simulation study and represents a more realistic design. This example shows clearly that, when the proportion of subjects in group 1 varies slightly around the global proportion (the "quite homogeneous" column), the design effect is below 1, indicating a gain in power. On the contrary, when this proportion varies strongly (the "heterogeneous" column), the design effect exceeds 1, involving a loss of power. The last column presents the extreme case where centers are nested within the groups. This situation, which corresponds to a cluster randomized trial, leads to a substantial loss of power, as shown by the very large design effect.
Table 3

Design effects calculations for three different group distributions among centers.

Group distribution            Quite homogeneous      Heterogeneous        Cluster design
among centers                 m1j   m2j    %*        m1j   m2j    %*      m1j   m2j    %*

Center 1  (n = 57)             16    41    28         11    46    19       0    57     0
Center 2  (n = 38)             10    28    26         24    14    63      38     0   100
Center 3  (n = 44)             11    33    25          7    37    16       0    44     0
Center 4  (n = 15)              3    12    20          1    14     7       0    15     0
Center 5  (n = 41)              9    32    22          8    33    20       0    41     0
Center 6  (n = 19)              5    14    26         10     9    53      19     0   100
Center 7  (n = 37)              8    29    22          9    28    24       0    37     0
Center 8  (n = 52)             12    40    23          4    48     8       0    52     0
Center 9  (n = 12)              3     9    25          1    11     8       0    12     0
Center 10 (n = 28)              8    20    29         10    18    36      28     0   100

S                                    0.14                   5.79                33.77
Deff (ρ = 0.10)                      0.91                   1.48                 4.28

*group 1 proportion in each center

The global proportion of subjects in group 1 is 25%, for each group distribution, and the Intraclass Correlation Coefficient is equal to 0.10.

To illustrate the impact of imbalance between the global group sizes on the design effect, we considered hypothetical situations, less likely to occur in practice, where 10 centers recruit 20 subjects each, for balanced designs (i.e., n1 = n2, Table S4a in Additional file 1) and imbalanced designs (i.e., n1 ≠ n2, Table S4b in Additional file 1), for different levels of heterogeneity of the group distributions among centers and two ICC values. As expected, the Deff increases with S and with the ICC. Moreover, focusing on the "strongly heterogeneous" column, we observe a higher Deff with imbalance between the two groups (Table S4b in Additional file 1, Deff = 1.757 for ρ = 0.1) than with balance between the groups (Table S4a in Additional file 1, Deff = 1.620 for ρ = 0.1), which can be explained analytically (Appendix 2). Thus, the impact of heterogeneity of the group distributions among centers is greater with increased imbalance between the two group sizes. See Additional file 1 for the full results of this example.

Discussion and conclusion

In a multicenter study, the design effect measures the effect of clustering due to multisite recruitment of subjects. As shown in formula (18), the power of such a study is directly affected by the design effect value. Our work aimed at explaining why some multicenter situations, such as individually randomized trials, lead to a gain in power whereas others, such as cluster randomized trials, lead to a loss of power. We derived a simple formula assessing the clustering effect in a multicenter study aiming to estimate the effect of a binary factor on a continuous outcome, through an individual-level analysis with a mixed-effects model: Deff = 1 + (S - 1)ρ. The design effect depends on ρ, the correlation between observations from the same center. It also depends on S, a statistic that quantifies the degree of heterogeneity of the group distributions among centers, in other words, the level of association between the binary factor and the center. S increases with the heterogeneity of the group distributions among centers, which leads to an increased Deff and a loss of power, and falls below 1 when the group distributions are identical across centers, thus leading to a Deff below 1 and a gain in power. It is known that balanced designs such as individually randomized trials gain power when the center effect is included in the analyses [5], and that cluster randomized trials should increase their sample size to reach the nominal power and account for the center effect in the analyses to protect against type I error inflation [4]. Our simple formula throws light on the relation between these two situations and allows calculation of the design effect for any multicenter design. In our developments, we used a weighted method to assess the group effect: this method gives equal weight to each subject, whatever the size of his/her center. Different methods of analysis could be used. In the frame of multicenter randomized trials, Lin et al. and Senn et al.
discuss this point and show that a weighted analysis is more powerful than an unweighted one, particularly when there is imbalance in sample sizes between centers [11,12]. The weighted method is thus often recommended for analyses of data from multicenter randomized trials, which justifies our choice for model (1) [13]. However, in cluster randomized trials, Kerry et al. show that minimum variance weights are the most efficient weights for estimating the design effect in the presence of important imbalance between cluster sizes, but that weighting the clusters by their sizes gives similar – though overestimated – results, except when clusters are large [14]. Our formula aims to apply to any multicenter study, whatever its design, from individually to cluster randomized trials. It may therefore not use the most powerful method of calculation for some particular multicenter designs, but it has the great advantage of being simple and general. Apart from the mixed-effects model (1) we described, we did not develop the practical aspects of the analysis stage of a multicenter study. Several statistical software packages are available to perform analyses of correlated data, such as data from multicenter designs. Zhou et al. and Murray et al. review many of these programs and detail, among others, appropriate procedures and available options for specifying the data modeling [15,16]. Moreover, some tutorials present step-by-step illustrations of the use of the SAS and SPSS mixed-model procedures [17,18]. Lastly, Pinheiro and Bates provide an overview of the application of mixed-effects models in S and S-PLUS, which is easily transposable to the R software [19]. In the field of cluster randomized trials, several authors have worked on the planning of studies through the design effect and sample size calculations and proposed extensions of the classical formulas, for example to account for imbalance in cluster sizes [20,21].
Our formula does not aim to substitute for these more specific and precise formulas but to connect several multicenter designs through a single design effect formula. This result helps in understanding the impact of correlation on the power of multicenter studies, whatever their design, and is particularly useful for observational studies, where the center effect is often not taken into account at the planning and/or analysis stages [22,23]. However, when extended design effect formulas exist that deal with a particular problem, such as imbalanced cluster sizes in cluster randomized trials, we recommend using them. This simple result could now be extended to designs including, for example, several nested or crossed levels of correlation. One can then consider cluster-cluster randomization, or cluster then individual randomization, and all observational designs including multiple levels of correlation between outcomes. Such designs could yield a mixture of gains and losses of power, according to the multiple correlation levels considered. For example, Diehr et al. studied the case of matched-pair cluster designs and Giraudeau et al. the case of cluster randomized cross-over designs [24,25]. Many situations like these could be explored to extend our result to more complex designs. To conclude, clustering of data is a logical consequence of multicenter designs [26,27]. Some designs allow for controlling some factors (e.g., balancing and homogenizing the treatment distribution in individually randomized trials), whereas others exclude such a possibility. This latter situation occurs mainly in observational studies, for which there is no way to control the prevalence or distribution of any factor. Since multicenter studies range in design from homogeneous and balanced designs to "cluster" distribution designs, the design effect can induce a gain or a loss of power, as we described.
The main advantage of the design effect formula we propose is its simplicity and its applicability to any multicenter study. Its potential weakness is the difficulty, for an investigator planning a multicenter study, of obtaining an accurate estimate of S, the degree of heterogeneity of the group distributions between centers, and of the ICC. In the field of cluster randomized trials, important efforts have been made to improve the reporting of ICC estimates, and these should now be extended to any multicenter study [28,29]. In the same way, recommendations should be made to encourage the reporting of Deff calculations, or of the S statistic, in any multicenter study publication. Together with an ICC estimate, this information could help researchers in planning new multicenter – particularly observational – studies.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

The study was designed by EV and BG. EV performed the statistical analysis and drafted the article, which was then revised by BG. All authors approved the final manuscript.

Appendix 1

Calculation of the group effect variance with a two-way ANOVA

In the mixed-effects model (1), the variance of the mean response in group i is as follows:

Var(Ȳ_i) = (1/n_i²) Σ_j (m_ij² σ_B² + m_ij σ_ε²)

The group effect variance is defined as follows:

Var(Ȳ_1 - Ȳ_2) = Var(Ȳ_1) + Var(Ȳ_2) - 2 Cov(Ȳ_1, Ȳ_2)

Since the centers are independent, we have corr(Y_ijk; Y_i'j'k') = 0 for j ≠ j' and corr(Y_ijk; Y_i'jk') = ρ for responses from the same center. Then:

Cov(Ȳ_1, Ȳ_2) = (σ_B² / (n_1 n_2)) Σ_j m_1j m_2j

which leads to:

Var(Ȳ_1 - Ȳ_2) = σ_ε² (1/n_1 + 1/n_2) + σ_B² Σ_j (m_1j/n_1 - m_2j/n_2)²   (3)
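
This derivation can be spot-checked by brute force: simulate many replicates of model (1) and compare the empirical variance of Ȳ_1 - Ȳ_2 with the closed form. The layout and variance components below are arbitrary, and only the Python standard library is used.

```python
import random
from statistics import variance

def diff_of_means(layout, sigma_b, sigma_e, rng):
    """One draw of Ybar_1 - Ybar_2 under model (1); mu and alpha_i cancel."""
    totals = [0.0, 0.0]
    for m1j, m2j in layout:
        b_j = rng.gauss(0.0, sigma_b)              # shared center effect
        for i, m_ij in enumerate((m1j, m2j)):
            totals[i] += sum(b_j + rng.gauss(0.0, sigma_e) for _ in range(m_ij))
    n1 = sum(a for a, _ in layout)
    n2 = sum(b for _, b in layout)
    return totals[0] / n1 - totals[1] / n2

layout = [(6, 2), (1, 7), (4, 4), (8, 0), (3, 5)]  # unbalanced toy layout
sb, se = 0.7, 1.0
n1, n2 = sum(a for a, _ in layout), sum(b for _, b in layout)
theory = se**2 * (1/n1 + 1/n2) + sb**2 * sum((a/n1 - b/n2)**2 for a, b in layout)
rng = random.Random(2009)
empirical = variance([diff_of_means(layout, sb, se, rng) for _ in range(20000)])
```

With 20,000 replicates the empirical variance agrees with the closed form to within a few percent.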

Appendix 2

Rewriting the S statistic with the between-center group size variances

Assuming centers are of equal sizes (∀ j = 1,..., Q, m_j = m = N/Q), we have:

S = (N Q / (n_1 n_2)) V_1

where V_1 = (1/Q) Σ_j (m_1j - n_1/Q)² is the between-center variance for the sizes of group 1. Let m̄_i = n_i/Q be the mean size for group i. Since m_2j = m_j - m_1j, V_1 can be related to V_m, the center size variance, and to V_2, the between-center variance for the sizes of group 2. Assuming centers are of equal sizes, we have ∀ j = 1,..., Q, m_j = m; thus V_m = 0 and V_1 = V_2 = V. The statistic is then:

S = (Q V / N) (1 + n_1/n_2)² / (n_1/n_2)

Hence, assuming centers are of equal sizes, for a given total sample size N, number of centers Q, and between-center group size variance V, the further the ratio n_1/n_2 is from 1, the higher the statistic S. Then, the Deff increases with the degree of imbalance between the two group sizes. This result generalizes to designs with unequal center sizes, because the S statistic always depends on the ratio n_1/n_2. However, quantitative prediction of the impact of this ratio on the Deff is not straightforward, because the center size variance, V_m, and the covariance term between the group 1 and group 2 center sizes are, in this case, not null.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/9/39/prepub

Additional file 1

Table S4 – Design effects calculations for hypothetical studies with different group distributions among centers. Tables S4a and S4b illustrate the last example presented in the manuscript: design effect calculations for different study designs, different group distributions among centers, two Intraclass Correlation Coefficient values, and balanced versus imbalanced designs.
References (20 in total)

1.  Design and analysis of controlled trials in naturally clustered environments: implications for medical informatics.

Authors:  Jen-Hsiang Chuang; George Hripcsak; Daniel F Heitjan
Journal:  J Am Med Inform Assoc       Date:  2002 May-Jun       Impact factor: 4.497

Review 2.  Design and analysis of group-randomized trials: a review of recent methodological developments.

Authors:  David M Murray; Sherri P Varnell; Jonathan L Blitstein
Journal:  Am J Public Health       Date:  2004-03       Impact factor: 9.308

3.  The use of random effects models to allow for clustering in individually randomized trials.

Authors:  Katherine J Lee; Simon G Thompson
Journal:  Clin Trials       Date:  2005       Impact factor: 2.486

4.  Sample size calculation for multicenter randomized trial: taking the center effect into account.

Authors:  Emilie Vierron; Bruno Giraudeau
Journal:  Contemp Clin Trials       Date:  2006-11-17       Impact factor: 2.226

5.  The evaluation of treatment when center-specific selection criteria vary with respect to patient risk.

Authors:  Elizabeth R DeLong; Laura P Coombs; T Bruce Ferguson; Eric D Peterson
Journal:  Biometrics       Date:  2005-12       Impact factor: 2.571

6.  Sample size for cluster randomized trials: effect of coefficient of variation of cluster size and analysis method.

Authors:  Sandra M Eldridge; Deborah Ashby; Sally Kerry
Journal:  Int J Epidemiol       Date:  2006-08-30       Impact factor: 7.196

7.  Sample size calculation for cluster randomized cross-over trials.

Authors:  B Giraudeau; P Ravaud; A Donner
Journal:  Stat Med       Date:  2008-11-29       Impact factor: 2.373

8.  Profiling care provided by different groups of physicians: effects of patient case-mix (bias) and physician-level clustering on quality assessment results.

Authors:  Sheldon Greenfield; Sherrie H Kaplan; Richard Kahn; John Ninomiya; John L Griffith
Journal:  Ann Intern Med       Date:  2002-01-15       Impact factor: 25.391

Review 9.  Adjustments for center in multicenter studies: an overview.

Authors:  A R Localio; J A Berlin; T R Ten Have; S E Kimmel
Journal:  Ann Intern Med       Date:  2001-07-17       Impact factor: 25.391

10.  Unequal cluster sizes for trials in English and Welsh general practice: implications for sample size calculations.

Authors:  S M Kerry; J M Bland
Journal:  Stat Med       Date:  2001-02-15       Impact factor: 2.373

