Literature DB >> 26467219

Statistical power as a function of Cronbach alpha of instrument questionnaire items.

Moonseong Heo1, Namhee Kim2, Myles S Faith3.   

Abstract

BACKGROUND: In countless clinical trials, measurement of outcomes relies on instrument questionnaire items, which, however, often suffer from measurement error that in turn affects the statistical power of study designs. The Cronbach alpha, or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs.
METHODS: We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we adopt a fixed true score variance assumption, as opposed to the usual fixed total variance assumption. That assumption is critical and practically relevant for showing that smaller measurement errors correspond to higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment means, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups.
RESULTS: It is shown that C(α) is the same as the test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. The power functions are shown to be increasing functions of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies, which show that the magnitudes of the theoretical power are virtually identical to those of the empirical power.
CONCLUSION: Regardless of research designs or settings, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes.
DISCUSSION: Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting more real-world situations would be a valuable future study.

Entities:  

Mesh:

Year:  2015        PMID: 26467219      PMCID: PMC4606843          DOI: 10.1186/s12874-015-0070-6

Source DB:  PubMed          Journal:  BMC Med Res Methodol        ISSN: 1471-2288            Impact factor:   4.615


Background

Use of instrument questionnaire items is essential for measuring outcomes of interest in innumerable clinical trials. Many trials use well-established instruments; for example, major depressive disorders are often evaluated by scores on the Hamilton Rating Scale of Depression (HRSD) [1] in psychiatry trials. However, it is far more often the case that instruments germane to a research outcome are not available. In such cases, questionnaire items need to be developed to measure the outcome, and their psychometric properties should be evaluated for construct validity, internal consistency, and reliability, among others [2, 3]. The internal consistency of instrument items quantifies how similarly, in an interrelated fashion, the items represent the outcome construct that the instrument aims to measure [4], whereas reliability is defined as the squared correlation between true score and observed score [3]. Cronbach alpha, also known as coefficient alpha [5], hereafter denoted by C, has been very widely used to quantify the internal consistency and reliability of items in clinical research and beyond [6], although internal consistency and reliability are not interchangeable psychometric concepts in general. For this reason, some argue that C should not be used for quantifying either concept (e.g., [7, 8]). On the other hand, for the special case where the items under study are parallel, such that the items are designed as replicates to measure a unidimensional construct or attribute, C can quantify internal consistency and reliability as well [2], although in general C is not necessarily a measure of unidimensionality or homogeneity [4, 8]. In this paper, we consider parallel items; for example, items within the same factor could be considered parallel for a unidimensional construct. In this sense, the items of the HRSD are not parallel, since the HRSD measures depression, a multidimensional construct with many factors.
The Cronbach alpha by mathematical definition is an adjusted proportion of the total variance of the item scores explained by the sum of covariances between item scores, and thus ranges between 0 and 1 if all covariance elements are non-negative. Specifically, for an instrument with k items with a general covariance matrix Σ among the item scores, C is defined as

C = {k/(k − 1)}{1 − trace(Σ)/(1′Σ1)},   (1)

where trace(.) is the sum of the diagonal elements of a square matrix, 1 is a column vector with k unit elements, and 1′ is the transpose of 1. This quantification is therefore based on the notion that the relative magnitudes of the covariances between item scores, compared with those of the corresponding variances, serve as a measure of similarity of the items. Consequently, items with higher C are preferred for measuring the target outcome. However, C is a lower bound for reliability and is not equal to reliability unless the items are parallel or essentially τ-equivalent [3, 8]. The sum of the instrument items serves as a scale for the outcome and is used for statistical inference, including testing statistical hypotheses. At the design stage of clinical trials, information about the magnitude of reliability or internal consistency of the developed parallel items is crucial for power analysis and sample size determination. Nonetheless, power functions based on C have been lacking for various study designs. In this paper, to derive closed-form power functions, we formulate a statistical model for parallel items that relates the item scores to a measurement error problem. Under this model, C (1) is explicitly expressed in terms of an inter-item correlation. We examine the relationship among C, a test-retest correlation, and the reliability of scale scores, which enables testing the significance of C through the Fisher z-transformation.
We explicitly express statistical power as a function of C for the following comparisons: one-sample comparison of pre- and post-treatment means, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. Simulation studies then compare the derived theoretical power with the empirical power; discussion and conclusions follow.
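As a concrete illustration of definition (1), the following minimal sketch (ours, not from the paper; function names and the example variances are illustrative assumptions) computes C from an item-score covariance matrix:

```python
def cronbach_alpha(cov):
    """Cronbach alpha from a k-by-k covariance matrix, per definition (1):
    C = (k/(k-1)) * (1 - trace(Sigma) / (1' Sigma 1))."""
    k = len(cov)
    trace = sum(cov[i][i] for i in range(k))
    total = sum(sum(row) for row in cov)  # 1' Sigma 1: grand sum of all elements
    return (k / (k - 1)) * (1 - trace / total)

# Compound-symmetry example with assumed variances: sigma_T^2 = 1, sigma_e^2 = 1, k = 5
k, var_t, var_e = 5, 1.0, 1.0
cov = [[var_t + (var_e if i == j else 0.0) for j in range(k)] for i in range(k)]
print(round(cronbach_alpha(cov), 4))  # 5/6 under these assumed variances
```

Under this compound-symmetry structure the result agrees with kρ/{1 + (k − 1)ρ} for ρ = 0.5, as derived later in the Methods.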

Methods

Statistical model

We consider the following model for the item score Y_ij on the j-th parallel item for the i-th subject:

Y_ij = μ_i + e_ij.   (2)

The parameter μ_i represents the “true score” of the target (outcome) construct for the i-th subject. At the population level, its expectation and variance are assumed to be E(μ_i) = μ and Var(μ_i) = σ_T², which we call the true score variance. The error term e_ij represents the deviation of the item score Y_ij from the true score μ_i, i.e., e_ij is the measurement error of Y_ij. The expectation of e_ij for all subjects is assumed to be E(e_ij) = 0, i.e., the unbiasedness assumption, so that E(Y_ij) = μ_i and E_j E(Y_ij) = E(μ_i) = μ, where E_j denotes the expectation over j. It is also assumed that Var(e_ij) = σ_e², which we call the measurement error variance. We further assume the following: μ_i and e_ij are mutually independent, i.e., μ ⊥ e; and the elements of the e_ij's are independent for a given subject, i.e., conditional independence, that is, e_ij ⊥ e_ij′ | μ_i for j ≠ j′. Note that this conditional independence does not imply marginal independence between Y_ij and Y_ij′. In short, model (2) is a mixed-effects linear model for data with a two-level structure, in which repeated item scores are nested within individuals. Under these assumptions, we have

Var(Y_ij) = σ² = σ_T² + σ_e²,

that is, the total variance of the item scores is the sum of the true score variance and the measurement error variance. The inter-item (score) covariance can be obtained as Cov(Y_ij, Y_ij′) = σ_T² for j ≠ j′. Therefore, the diagonal elements of the covariance matrix Σ under model (2) are identical, and so are the off-diagonal elements. This compound symmetry covariance structure, also known as essential τ-equivalence, is the covariance matrix of parallel items, each of which targets the underlying true score for a unidimensional construct. Furthermore, the compound symmetry covariance structure can be regarded as a covariance matrix of “standardized” item scores with unequal variances and covariances.
The inter-item (score) correlation, denoted here by ρ, can accordingly be obtained as

ρ = Corr(Y_ij, Y_ij′) = σ_T²/(σ_T² + σ_e²).   (3)

Although item scores are correlated within subjects, they are independent between subjects. Note that this inter-item correlation is not necessarily equal to the item-score reliability that quantifies the correlation between true and observed scores. In this paper, we assume that the true score variance σ_T², instead of the total variance σ², is fixed at the population level and does not depend on the item scores of the subjects. Stated differently, the total variance σ² depends only on σ_e², which depends on the item scores, and thus σ² is assumed to be an increasing function of only the measurement errors of the item scores. Let us call this assumption the fixed true score variance assumption, which is crucial and reasonable from the perspective of measurement error theory in general. This assumption is crucial because it makes the total variance a function of only the measurement error variance, as mentioned above, and it is reasonable because at the population level the true score variance should not vary, whereas the magnitude of the measurement error variance depends on the reliability of the items. Consequently, the true score variance is not a function of the inter-item correlation ρ, but the measurement error variance is a decreasing function of ρ, since from equation (3) we have

σ_e² = σ_T²(1 − ρ)/ρ.   (4)

It follows that as the item scores are closer or more similar to each other within subjects, the measurement errors will be smaller; in turn, the total variance is also a decreasing function of ρ, since

σ² = σ_T² + σ_e² = σ_T²/ρ.   (5)

We assume that the magnitudes of both σ_T² and σ_e², and thus that of σ², are known for the purpose of deriving power functions based on normal distributions instead of t-distributions, although replacement by t-distributions should be straightforward, with little difference in results for sizable sample sizes.
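These variance relations can be sketched numerically (our illustration; σ_T² = 1 is an assumed convenience value, and the relations σ_e² = σ_T²(1 − ρ)/ρ and σ² = σ_T²/ρ follow from equation (3) under the fixed true score variance assumption):

```python
def error_variance(var_t, rho):
    """Measurement error variance implied by equation (3): sigma_e^2 = sigma_T^2 (1 - rho) / rho."""
    return var_t * (1 - rho) / rho

def total_variance(var_t, rho):
    """Total variance under the fixed true score variance assumption: sigma^2 = sigma_T^2 / rho."""
    return var_t / rho

# Higher inter-item correlation implies smaller error and total variance (sigma_T^2 = 1 assumed).
for rho in (0.2, 0.5, 0.8):
    print(rho, error_variance(1.0, rho), total_variance(1.0, rho))
```

The printed rows decrease in both variance columns as ρ grows, which is the monotonicity the power derivations below rely on.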

Cronbach alpha, scale score and its variance

We assume that there are k items in an instrument, i.e., j = 1, 2, …, k. The C (1) of the k items under model (2) and the aforementioned assumptions can be expressed as

C = kρ/{1 + (k − 1)ρ}.   (6)

This is due to the fact that under model (2)

Σ = σ_e² I + σ_T² 11′,

where I is a k-by-k identity matrix. C in equation (6) is seen to be an increasing function of both ρ and k, as depicted in Fig. 1. Therefore, the number of items needs to be fixed for comparison of the C of several candidate sets of items. It follows that for a fixed number of items, higher C is associated with smaller measurement error of the items through higher inter-item correlation ρ. From equation (6), ρ can be expressed in terms of C as follows:

ρ = C/{k − (k − 1)C}.   (7)
Fig. 1

Relationship between Cronbach alpha (C ) and inter-item correlation (ρ) over varying number of items (k)

Of note, the corresponding correlation matrix is (1 − ρ)I + ρ11′, an equi-correlation matrix. The k correlated items are often summed into a scale that is intended to measure the target construct. The scale score is denoted here by

S_i = Σ_{j=1}^{k} Y_ij,

which can be viewed as an observed summary score for the i-th subject. Suppressing the subscript i in S_i, its mean and variance can be obtained as follows:

E(S) = kμ   (8)

and

Var(S) = k²σ_T² + kσ_e².   (9)

With respect to the mean (8), the average scale score S/k, when used as an observed score, is an unbiased estimate of the true score μ_i for the i-th subject. The reliability, denoted here by R and defined as the squared correlation between true score and observed score, can be obtained as follows:

R = Corr(μ_i, S)² = k²σ_T²/Var(S) = kρ/{1 + (k − 1)ρ} = C.   (10)

This equation supports Theorem 3.1 of Novick and Lewis [9] that R = C if and only if the items are parallel. Since statistical analysis results do not depend on whether S/k or S is used, we use the sum S in what follows. With respect to the total variance (9), if the total variance, instead of the true score variance, is assumed to be fixed, Var(S) is an increasing function of ρ, which conforms to the elementary statistical theory that the variance of a sum of correlated variables increases with increasing correlation. On the contrary, under the fixed true score variance assumption, it can be seen that Var(S) is a decreasing function of ρ, since equation (9) can be re-expressed via equation (5) as follows:

Var(S) = k²σ_T² + kσ_T²(1 − ρ)/ρ = kσ_T²{1 + (k − 1)ρ}/ρ = k²σ_T²/C.   (11)

The last equality is due to equation (7). It follows that Var(S) is also a decreasing function of C. In sum, an increase of ρ decreases the magnitude of σ², which in turn decreases the magnitude of Var(S); such an indirect decreasing effect of ρ on Var(S) is larger than the direct increasing effect of ρ on Var(S) in equation (9).
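The conversions between C and ρ, and the scale-score variance they imply under the fixed true score variance assumption, can be sketched as follows (our Python illustration; function names are ours, and Var(S) = k²σ_T²/C is the form consistent with equations (6), (7), and (9)):

```python
def rho_from_c(c, k):
    """Inter-item correlation implied by Cronbach alpha: rho = C / (k - (k - 1) C)."""
    return c / (k - (k - 1) * c)

def c_from_rho(rho, k):
    """Cronbach alpha of k parallel items: C = k rho / (1 + (k - 1) rho)."""
    return k * rho / (1 + (k - 1) * rho)

def var_scale_score(var_t, c, k):
    """Variance of the scale score S under the fixed true score variance
    assumption: Var(S) = k^2 sigma_T^2 / C, a decreasing function of C."""
    return k ** 2 * var_t / c

# Round trip between C and rho, and the two equivalent forms of Var(S)
k, rho, var_t = 5, 0.5, 1.0
c = c_from_rho(rho, k)             # 5/6 for k = 5, rho = 0.5
var_e = var_t * (1 - rho) / rho    # sigma_e^2 = sigma_T^2 (1 - rho) / rho
print(c, rho_from_c(c, k), var_scale_score(var_t, c, k), k ** 2 * var_t + k * var_e)
```

The last two printed values coincide, confirming that the direct form k²σ_T² + kσ_e² and the C-based form k²σ_T²/C agree.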

Cronbach alpha and test-retest correlation

Reliability R of instruments is sometimes evaluated by a test-retest correlation [3]. Based on model (2), the test and retest item scores can be specified as Y_ij,test = μ_i + e_ij,test and Y_ij,retest = μ_i + e_ij,retest, respectively, with a common μ_i for both test and retest scores for each subject, i = 1, 2, …, N. The test-retest correlation can then be measured by the correlation, denoted by Corr(S_test, S_retest), between the scale scores S_test = Σ_j Y_ij,test and S_retest = Σ_j Y_ij,retest, representing the scale scores of test and retest, respectively. Under the aforementioned assumptions for model (2), it can be shown that

Cov(S_test, S_retest) = k²σ_T²,   (12)

and from equation (10)

Var(S_test) = Var(S_retest) = k²σ_T²/C.   (13)

It follows that

Corr(S_test, S_retest) = C = R.   (14)

This equation shows that the test-retest correlation is the same as both C and R, due to equations (6) and (10), which provides another interpretation of C. This property is especially useful when there is only one item available, in which case estimation of C or ρ is impossible by definition. However, the test and retest scores can be thought of as two correlated parallel item scores, and thus their correlation can serve as the C of the single item. This is particularly fitting since ρ = C = R based on either equation (6), (7), or (14) when k = 1. Taken together, the power of testing the significance of C against any null value C₀ should be equivalent to that of testing the significance of a correlation using Fisher's z-transformation as long as the items are parallel, that is,

φ = Φ(√(N − 3){z(C) − z(C₀)} − Φ⁻¹(1 − α/2)),  where z(r) = ½ log{(1 + r)/(1 − r)},

for a two-tailed significance level α, where Φ is the cumulative distribution function of a standardized normal distribution and Φ⁻¹ is its inverse function, i.e., Φ(Φ⁻¹(x)) = Φ⁻¹(Φ(x)) = x. We note that although the probability under the other rejection region must be added for the unbiasedness of the test statistics to hold under the null hypothesis, that probability will be ignored for all test statistics considered herein. For general covariance structures for non-parallel items, however, many other tests for significance of reliability and C have been developed [10-17].
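The Fisher z-based power calculation for testing C can be sketched as follows (our illustration, not the authors' code; function names are ours, and the standard normal CDF and quantile come from Python's statistics module):

```python
from math import log, sqrt
from statistics import NormalDist

def fisher_z(r):
    """Fisher z-transformation: z(r) = 0.5 * log((1 + r) / (1 - r))."""
    return 0.5 * log((1 + r) / (1 - r))

def power_cronbach_test(c_alt, c_null, n, alpha=0.05):
    """Approximate power of the two-tailed Fisher z test of C against c_null,
    ignoring the probability under the opposite rejection region."""
    nd = NormalDist()
    lam = sqrt(n - 3) * (fisher_z(c_alt) - fisher_z(c_null))
    return nd.cdf(lam - nd.inv_cdf(1 - alpha / 2))

# Power to distinguish C = 0.7 from a null of 0.5 with N = 50 subjects
print(round(power_cronbach_test(0.7, 0.5, 50), 3))
```

As expected, the power grows with the sample size N and with the distance between C and the null value.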

Pre-post comparison

We consider application of a paired t-test to the comparison of within-group means of scale scores between pre- and post-intervention. Based on model (2), the pre- and post-intervention item scores can be specified as Y_ij,pre = μ_i + e_ij,pre and Y_ij,post = μ_i + δ + e_ij,post, respectively; the mean of the post-intervention item scores is shifted by δ, the intervention effect. Consequently, we have

E(S_post − S_pre) = kδ,   (15)

where S_pre and S_post are the pre- and post-intervention scale scores, respectively. A moment estimate of δ from (15) can be obtained as

δ̂ = (S̄_post − S̄_pre)/k,   (16)

where S̄_pre = Σ_i S_i,pre/N, S̄_post = Σ_i S_i,post/N, and N is the total number of subjects. Its variance can be obtained as

Var(δ̂) = 2σ_e²/(Nk),   (17)

because from equations (12) and (13) the true scores cancel in the difference, so that Var(S_post − S_pre) = 2kσ_e². The following test statistic can then be used for testing H0: δ = 0:

T_PP = δ̂/√{2σ_e²/(Nk)}.   (18)

Now, the statistical power φ_PP of T_PP for detecting non-zero δ can be expressed as follows:

φ_PP = Φ(δ√(Nk)/√(2σ_e²) − Φ⁻¹(1 − α/2)).   (19)

This statistical power is an increasing function of ρ for a fixed σ_T, which we assume. It follows that the power is also an increasing function of C, as seen next. When δ is standardized by σ_T, i.e., Δ = δ/σ_T, and ρ is replaced by equation (7), equation (19) can further be expressed in terms of Δ and C as follows:

φ_PP = Φ(Δ√{NC/(2(1 − C))} − Φ⁻¹(1 − α/2)).   (20)

This power function is seen to be independent of k, the number of items. Stated differently, the power will be the same for two instruments with different numbers of items as long as their C's are the same, even though the correlation of the items will be smaller for the instrument with fewer items. When sample size determination is needed for a study using an instrument of any number of items with a known C, for a desired statistical power φ, typically 80 %, it can be determined from equation (20) as follows:

N_PP = 2(1 − C)(z_{1−α/2} + z_φ)²/(CΔ²),   (21)

where z_{1−α/2} = Φ⁻¹(1 − α/2) and z_φ = Φ⁻¹(φ). The sample size (21) is seen to be a decreasing function of increasing C and Δ. In a possibly rare case in which determination of the number of items with known correlations among them is needed for development of an instrument, it has to be determined from equation (19), instead of equation (20), as follows:

k_PP = 2(1 − ρ)(z_{1−α/2} + z_φ)²/(ρNΔ²).   (22)
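The pre-post power and sample size calculations can be sketched as follows (our Python illustration; it assumes the power takes the form Φ(Δ√{NC/(2(1 − C))} − z_{1−α/2}), which reproduces the theoretical values reported in Table 2):

```python
from math import ceil, sqrt
from statistics import NormalDist

_nd = NormalDist()

def power_prepost(delta_std, n_total, c_alpha, alpha=0.05):
    """Power of the paired pre-post test as a function of Cronbach alpha.
    delta_std is the intervention effect standardized by the true score SD."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    lam = delta_std * sqrt(n_total * c_alpha / (2 * (1 - c_alpha)))
    return _nd.cdf(lam - z_crit)

def n_prepost(delta_std, c_alpha, power=0.80, alpha=0.05):
    """Total sample size needed for the desired power (rounded up)."""
    z_sum = _nd.inv_cdf(1 - alpha / 2) + _nd.inv_cdf(power)
    return ceil(2 * (1 - c_alpha) * z_sum ** 2 / (c_alpha * delta_std ** 2))

# Matches the Table 2 theoretical value for N = 30, Delta = 0.4, C = 0.5 (0.341)
print(round(power_prepost(0.4, 30, 0.5), 3))
```

Note how the required sample size shrinks as C grows, which is the practical message of the paper.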

Comparison of within-group effects between groups

In clinical trials, it is often of interest to compare within-group changes between groups. For instance, a clinical trial can be designed to compare the pre-post effect of an experimental treatment between treatment and control groups, that is, an interaction effect between group and time point. Based on model (2), the pre- and post-intervention item scores can be specified as Y_ij,pre = μ_i + e_ij,pre and Y_ij,post = μ_i + δ₀ + e_ij,post for the control group, and as Y_ij,pre = μ_i + e_ij,pre and Y_ij,post = μ_i + δ₁ + e_ij,post for the treatment group. The primary interest is testing H0: δ = δ₁ − δ₀ = 0, i.e., whether or not the pre-post differences are the same between the groups. Consequently, we have

E{(S_post,1 − S_pre,1) − (S_post,0 − S_pre,0)} = kδ,   (24)

where S_pre,g and S_post,g (g = 0, 1) denote the scale scores in the control and treatment groups. A moment estimate of δ from (24) can be obtained as

δ̂ = {(S̄_post,1 − S̄_pre,1) − (S̄_post,0 − S̄_pre,0)}/k,   (25)

where N is the number of subjects per group and S̄_pre,g and S̄_post,g are the corresponding group means. The variance of δ̂ is

Var(δ̂) = 4σ_e²/(Nk).   (26)

Therefore, the following test statistic can be used for testing the null hypothesis H0: δ = 0:

T_BW = δ̂/√{4σ_e²/(Nk)}.   (27)

The statistical power φ_BW of T_BW for detecting non-zero δ can thus be expressed as follows:

φ_BW = Φ(δ√(Nk)/√(4σ_e²) − Φ⁻¹(1 − α/2)).   (28)

Again, this statistical power is an increasing function of ρ, and of C as well, as seen next. When δ is standardized by σ_T and ρ is replaced by equation (7), equation (28) can further be expressed in terms of Δ and C as follows:

φ_BW = Φ(Δ√{NC/(4(1 − C))} − Φ⁻¹(1 − α/2)).   (29)

Again, this power function is seen to be independent of k, the number of items. The sample size per group for a desired statistical power φ can be determined from (29) as follows:

N_BW = 4(1 − C)(z_{1−α/2} + z_φ)²/(CΔ²).   (30)

Again, this sample size (30) is seen to be a decreasing function of increasing C and Δ. When the number of items is needed for development of an instrument, it can be determined from equation (28) as follows:

k_BW = 4(1 − ρ)(z_{1−α/2} + z_φ)²/(ρNΔ²).   (31)
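The group-by-time interaction power can be sketched in the same way (our illustration; it assumes the power takes the form Φ(Δ√{NC/(4(1 − C))} − z_{1−α/2}), which reproduces the theoretical values reported in Table 3):

```python
from math import sqrt
from statistics import NormalDist

_nd = NormalDist()

def power_between_within(delta_std, n_per_group, c_alpha, alpha=0.05):
    """Power of the test of a group-by-time interaction (difference of pre-post
    changes between two groups) as a function of Cronbach alpha."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    lam = delta_std * sqrt(n_per_group * c_alpha / (4 * (1 - c_alpha)))
    return _nd.cdf(lam - z_crit)

# Matches the Table 3 theoretical value for N = 30 per group, Delta = 0.4, C = 0.5 (0.194)
print(round(power_between_within(0.4, 30, 0.5), 3))
```

Relative to the one-sample pre-post case, the noncentrality is halved in variance terms (4 instead of 2 in the denominator), reflecting the two independent groups.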

Two-sample between-group comparison

Comparison of means between groups using an instrument is widely tested in clinical trials. Based on model (2), the item scores from the control and treatment groups can be specified as Y_ij = μ_i + e_ij and Y_ij = μ_i + δ + e_ij, respectively. The primary interest is testing H0: δ = 0, i.e., whether or not the means are the same between the two groups. Under this formulation, we have

E(S₁ − S₀) = kδ,   (32)

where S₁ and S₀ represent the scale scores under the treatment and control groups, respectively. A moment estimate of δ can be obtained from (32) as

δ̂ = (S̄₁ − S̄₀)/k,   (33)

where S̄_g = Σ_i S_i,g/N (g = 0, 1) and N is the number of participants per group. The variance of δ̂ can be obtained as

Var(δ̂) = 2(σ_T² + σ_e²/k)/N = 2σ_T²/(NC).   (34)

The corresponding test statistic T_TS can be built as

T_TS = δ̂/√{2σ_T²/(NC)},   (35)

and the power function φ_TS of T_TS can be expressed as

φ_TS = Φ(δ√(NC)/√(2σ_T²) − Φ⁻¹(1 − α/2)).   (36)

It should be noted that this statistical power (36) is also an increasing function of ρ, in contrast to a situation in which a fixed total variance assumption is more reasonable, where both σ_T² and σ_e² are functions of ρ but σ² is not. For example, observations without measurement errors from clusters are often assumed to be correlated, and the power of between-group tests using such correlated observations is a decreasing function of ρ [18]. Again, when δ is standardized by σ_T and ρ is replaced by equation (7), equation (36) can further be expressed in terms of Δ and C as follows:

φ_TS = Φ(Δ√(NC/2) − Φ⁻¹(1 − α/2)).   (37)

Again, this power function is seen to be independent of k, the number of items. The sample size per group for a desired statistical power φ can be determined from (37) as follows:

N_TS = 2(z_{1−α/2} + z_φ)²/(CΔ²).   (38)

Again, the sample size (38) is seen to be a decreasing function of increasing C and Δ. When the number of items is needed for development of an instrument, it can be determined from equation (36) as follows:

k_TS = C*(1 − ρ)/{ρ(1 − C*)},  where C* = 2(z_{1−α/2} + z_φ)²/(NΔ²).   (39)
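The two-sample power and per-group sample size can be sketched as follows (our illustration; it assumes the power takes the form Φ(Δ√(NC/2) − z_{1−α/2}), which reproduces the theoretical values reported in Table 4):

```python
from math import ceil, sqrt
from statistics import NormalDist

_nd = NormalDist()

def power_two_sample(delta_std, n_per_group, c_alpha, alpha=0.05):
    """Power of the two-sample between-group mean comparison as a function of Cronbach alpha."""
    z_crit = _nd.inv_cdf(1 - alpha / 2)
    lam = delta_std * sqrt(n_per_group * c_alpha / 2)
    return _nd.cdf(lam - z_crit)

def n_two_sample(delta_std, c_alpha, power=0.80, alpha=0.05):
    """Per-group sample size for the desired power (rounded up)."""
    z_sum = _nd.inv_cdf(1 - alpha / 2) + _nd.inv_cdf(power)
    return ceil(2 * z_sum ** 2 / (c_alpha * delta_std ** 2))

# Matches the Table 4 theoretical value for N = 50 per group, Delta = 0.7, C = 0.5 (0.697)
print(round(power_two_sample(0.7, 50, 0.5), 3))
```

Here the true scores do not cancel, so the power depends on C through the whole scale-score variance, not just the measurement error variance.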

Results

To validate equation (14) and the power functions (20), (29), and (37), we conducted a simulation study for each test. For the simulations, random item scores were generated based on model (2), assuming that both μ_i and e_ij are normally distributed, although this assumption is not required in general. Under this normality assumption, however, it can be shown that all the moment estimates herein are the maximum likelihood estimates [19]. We then computed scale scores by summing the item scores for each individual. We fixed a two-tailed significance level of α = 0.05 and σ_T² = 1 without loss of generality for all simulations, and determined σ_e² and σ² through the ρ implied by the given k and C. We randomly generated 1000 data sets for each combination of design parameters, which include effect size Δ, number of items k, and sample size N. We then computed the empirical power as the proportion of data sets whose two-tailed p-values are smaller than 0.05; that is,

empirical power = Σ_{s=1}^{1000} I(p_s < 0.05)/1000,

where p_s represents the two-sided p-value from the s-th simulated data set. For the testing, we applied the corresponding t-tests assuming the variances of the moment estimates are unknown, which is practically reasonable. We used SAS v9.3 for the simulations.
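The simulation scheme described above can be sketched in Python for the pre-post case (our illustration, not the authors' SAS code; for brevity it uses the normal critical value 1.96 in place of the paired-t critical value, which matters little at these sample sizes, and uses fewer replications than the paper's 1000):

```python
import random
from math import sqrt
from statistics import mean, stdev

def simulate_prepost_power(n_subjects, k, c_alpha, delta, reps=400, seed=1):
    """Empirical power of the pre-post comparison under model (2):
    Y_ij = mu_i + e_ij, with sigma_T^2 = 1 and sigma_e^2 set through rho."""
    rng = random.Random(seed)
    rho = c_alpha / (k - (k - 1) * c_alpha)   # invert C = k*rho / (1 + (k-1)*rho)
    sigma_e = sqrt((1 - rho) / rho)           # fixed true score variance sigma_T^2 = 1
    rejections = 0
    for _ in range(reps):
        diffs = []
        for _i in range(n_subjects):
            mu = rng.gauss(0.0, 1.0)          # subject-level true score
            pre = sum(mu + rng.gauss(0.0, sigma_e) for _j in range(k))
            post = sum(mu + delta + rng.gauss(0.0, sigma_e) for _j in range(k))
            diffs.append(post - pre)          # true scores cancel in the difference
        t = mean(diffs) / (stdev(diffs) / sqrt(n_subjects))
        if abs(t) > 1.96:                     # normal approximation to the critical value
            rejections += 1
    return rejections / reps

# Should land near the theoretical 0.341 of Table 2 (N = 30, k = 5, C = 0.5, Delta = 0.4)
print(simulate_prepost_power(30, 5, 0.5, 0.4))
```

Changing k while holding C fixed leaves the empirical power essentially unchanged, illustrating the k-independence of the closed-form power function.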

Test-retest correlation

The results are presented in Table 1, which shows that the empirically estimated test-retest correlations (i.e., the average of 1000 estimated Pearson correlations for each set of design parameter specifications) are approximately the same as the pre-assigned C, regardless of the sample size N, even as small as 30, and the number of items k. Therefore, the equality between C and the test-retest correlation (14) is well validated.
Table 1

Empirical simulation-based estimates of test-retest correlation Corr(S_test, S_retest) in equation (14)

Corr(S_test, S_retest)

          Total N = 30          Total N = 50
C         k = 5    k = 10       k = 5    k = 10
0.1       0.10     0.10         0.10     0.10
0.2       0.20     0.20         0.20     0.20
0.3       0.30     0.29         0.30     0.30
0.4       0.39     0.39         0.40     0.39
0.5       0.49     0.50         0.49     0.50
0.6       0.59     0.59         0.60     0.60
0.7       0.69     0.69         0.70     0.70
0.8       0.79     0.80         0.80     0.79
0.9       0.90     0.90         0.90     0.90

Note: Total N: total number of subjects; C : Cronbach alpha; k: number of items


Pre-post intervention comparison

Table 2 shows that the theoretical power φ (20) is very close to the empirical power obtained through the simulations. The results validate that the power φ increases with increasing C (or equivalently increasing correlation for the same k) in the “pre-post” test settings, regardless of sample size N and number of items k. Furthermore, it shows that the statistical power does not depend on k for a given C even if correlation ρ does.
Table 2

Statistical power of the pre-post test T_PP (18): σ_T = 1

                          k = 5                k = 10
Total N   Δ_PP   C        φ_PP     φ̃_PP       φ_PP     φ̃_PP
30        0.4    0.5      0.341    0.337      0.341    0.310
                 0.6      0.475    0.459      0.475    0.458
                 0.7      0.658    0.626      0.658    0.649
                 0.8      0.873    0.849      0.873    0.830
                 0.9      0.996    0.997      0.996    0.995
50        0.3    0.5      0.323    0.309      0.323    0.296
                 0.6      0.451    0.424      0.451    0.433
                 0.7      0.630    0.633      0.630    0.614
                 0.8      0.851    0.849      0.851    0.844
                 0.9      0.994    0.995      0.994    0.992

Note: Total N: total number of subjects; k: number of items; Δ_PP = δ/σ_T: standardized effect size; C: Cronbach alpha; φ_PP: theoretical power (20); φ̃_PP: simulation-based empirical power


Between-group within-group comparison

Table 3 shows that the theoretical power φ_BW (29) is very close to the empirical power obtained through the simulations. Therefore, the results validate that the statistical power φ_BW increases with increasing C for testing hypotheses concerning between-group effects on within-group changes, regardless of the sample size per group N and the number of items k. Again, the statistical power does not depend on k for a given C even though the correlation ρ does.
Table 3

Statistical power of the between-group within-group test T_BW (27): σ_T = 1

                             k = 5                k = 10
N per group   Δ_BW   C       φ_BW     φ̃_BW       φ_BW     φ̃_BW
30            0.4    0.5     0.194    0.179      0.183    0.194
                     0.6     0.268    0.264      0.254    0.268
                     0.7     0.387    0.375      0.359    0.387
                     0.8     0.591    0.618      0.594    0.591
                     0.9     0.908    0.884      0.901    0.908
50            0.3    0.5     0.164    0.184      0.214    0.184
                     0.6     0.242    0.254      0.261    0.254
                     0.7     0.387    0.367      0.365    0.367
                     0.8     0.511    0.564      0.591    0.564
                     0.9     0.893    0.889      0.893    0.889

Note: N per group: number of subjects per group; k: number of items; Δ_BW = δ/σ_T: standardized effect size; C: Cronbach alpha; φ_BW: theoretical power (29); φ̃_BW: simulation-based empirical power

Table 4 shows that the theoretical power φ_TS (37) is also very close to the empirical power obtained through the simulations. The results validate that the statistical power increases with increasing C even for two-sample testing in cross-sectional settings that do not involve within-group effects. Again, the statistical power does not depend on k for a given C even though the correlation ρ does.
Table 4

Statistical power of the two-sample between-group test T_TS (35): σ_T = 1

                             k = 5                k = 10
N per group   Δ_TS   C       φ_TS     φ̃_TS       φ_TS     φ̃_TS
50            0.7    0.5     0.697    0.676      0.697    0.697
                     0.6     0.774    0.758      0.774    0.760
                     0.7     0.834    0.812      0.834    0.813
                     0.8     0.879    0.872      0.879    0.882
                     0.9     0.913    0.901      0.913    0.895
100           0.5    0.5     0.705    0.682      0.705    0.679
                     0.6     0.782    0.791      0.782    0.769
                     0.7     0.841    0.820      0.841    0.832
                     0.8     0.885    0.879      0.885    0.908
                     0.9     0.918    0.929      0.918    0.912

Note: N per group: number of subjects per group; k: number of items; Δ_TS = δ/σ_T: standardized effect size; C: Cronbach alpha; φ_TS: theoretical power (37); φ̃_TS: simulation-based empirical power


Discussion

We demonstrate, by deriving explicit power functions, that higher internal consistency or reliability of unidimensional parallel instrument items, measured by Cronbach alpha C, results in greater statistical power of several tests, regardless of whether comparisons are made within or between groups. In addition, the test-retest correlation of such items is shown to be the same as Cronbach alpha C. Due to this property, testing the significance of C can be equivalent to testing that of a correlation through the Fisher z-transformation. Furthermore, all of the power functions derived herein can even be applied to trials using a single-item instrument with measurement error, since the power function depends only on C, which can be estimated via test-retest correlations for single-item instruments as mentioned earlier. The demonstrations are made theoretically, and validations are made through simulation studies showing that the theoretical and empirical powers of the derived tests are virtually identical. Therefore, the sample size determination formulas (21), (30), and (38) are valid, and so are the determinations of the number of items (22), (31), and (39) in the different settings. In fact, for longitudinal studies aiming to compare within-group effects using tests such as T_PP (18) and T_BW (27), the fixed true score variance assumption is not critical, since the true scores μ_i in model (2) are cancelled by taking differences of Y_ij between pre- and post-intervention, which makes the variance of the pre-post differences depend only on the measurement error variance σ_e². For example, the variance equations (17) and (26) can be expressed in terms of only σ_e², a decreasing function of ρ, through equation (4): Var(δ̂_PP) = 2σ_e²/(Nk) and Var(δ̂_BW) = 4σ_e²/(Nk). In other words, both power functions φ_PP (20) and φ_BW (29) are increasing functions of C or ρ regardless of whether the total variance or the true score variance is assumed fixed.
In contrast, however, for cross-sectional studies aiming to compare between-group effects using T_TS (35), the fixed true score variance assumption is critical, since the variance equation (34) cannot be expressed in terms of σ_e² alone; furthermore, it can be shown that under a fixed total variance assumption (34) is an increasing function of ρ (see equation (9)), and so is the power function. In sum, the fixed true score variance assumption enables all of the power functions to be increasing functions of C or ρ in a unified fashion. For example, Leon et al. [20] used a real data set of HRSD ratings to empirically demonstrate that the statistical power of a two-sample between-group test increases with increased C, although they increased C by increasing the number of items k, not necessarily by increasing ρ for a fixed number of items. In most cases, item scores are designed to be binary or ordinal scores on a Likert scale. Therefore, the applicability of the derived power functions and sample size formulas to such cases could be in question, since the scores are not normally distributed. Furthermore, it is not easy to build a model like (2) for non-normal scores, particularly because the measurement error variances depend on the true construct value; for example, the variance of a binary score is a function of its mean. Perhaps construction of marginal models in the sense of generalized estimating equations [21] could be considered for derivation of power functions, even if this approach is beyond the scope of the present study. Nonetheless, we believe that our results should be applicable to non-normal scores by virtue of the central limit theorem. Another prominent limitation of our study is the very strong assumption of essentially τ-equivalent parallel items, which may not be realistic [8], albeit conceivable for a unidimensional construct.
Therefore, further development of power functions under relaxed conditions that reflect more realistic situations would be a valuable future study.
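As a miniature version of the validation simulations described above, the sketch below estimates empirical power of a two-sample comparison of scale scores (sums of k parallel items) and shows it rising with the inter-item correlation ρ, and hence with C(α). It assumes a simplified form of model (2) with the true score variance fixed at 1, so that σ_e^2 = (1 − ρ)/ρ; the sample size, effect size, and other defaults are illustrative, not the paper's.

```python
import math
import random

def simulate_power(k, rho, n=40, delta=0.5, sims=2000, seed=1):
    """Empirical power of a two-sided two-sample z-type test on scale
    scores under a simplified model (2): item = true score + error,
    true score variance fixed at 1, inter-item correlation rho > 0."""
    rng = random.Random(seed)
    sigma_e = math.sqrt((1.0 - rho) / rho)   # error SD given fixed true variance

    def scale_scores(shift):
        # each subject: true score t ~ N(shift, 1); scale score = sum of
        # k parallel items, each item = t + independent N(0, sigma_e^2) error
        return [sum(t + rng.gauss(0.0, sigma_e) for _ in range(k))
                for t in (rng.gauss(shift, 1.0) for _ in range(n))]

    hits = 0
    for _ in range(sims):
        a, b = scale_scores(0.0), scale_scores(delta)
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        z = (mb - ma) / math.sqrt((va + vb) / n)
        hits += abs(z) > 1.96                 # two-sided 5% level
    return hits / sims

# for a fixed number of items, power rises with rho (hence with alpha)
low = simulate_power(k=5, rho=0.2)
high = simulate_power(k=5, rho=0.8)
```

Under the fixed true score variance assumption, raising ρ shrinks the error contribution k·σ_e^2 to the scale score variance while leaving the signal k²·σ_T² untouched, which is exactly why the empirical power increases.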

Conclusion

Instruments with greater Cronbach alpha should be preferred in any type of research, cross-sectional or longitudinal, since they have smaller measurement error and thus yield greater statistical power. However, when items are parallel and target a unidimensional construct, the Cronbach alpha of an instrument should be enhanced by developing a set of highly correlated items, not by unduly increasing the number of items with inadequate inter-item correlations.
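For k parallel items with a common inter-item correlation ρ, Cronbach alpha takes the Spearman-Brown form C(α) = kρ / (1 + (k − 1)ρ). A small illustration of the conclusion's point, with hypothetical numbers: a shorter scale of strongly correlated items can beat a longer scale of weakly correlated ones.

```python
def cronbach_alpha(k, rho):
    """Cronbach alpha for k parallel items with common
    inter-item correlation rho (Spearman-Brown form)."""
    return k * rho / (1.0 + (k - 1) * rho)

# doubling a weakly correlated item pool vs. improving the items:
a_more_items = cronbach_alpha(10, 0.3)    # ten items, rho = 0.3
a_better_items = cronbach_alpha(5, 0.6)   # five items, rho = 0.6
```

Here the five-item scale with ρ = 0.6 attains a higher alpha than the ten-item scale with ρ = 0.3, illustrating that padding an instrument with weakly correlated items is an inefficient way to raise reliability.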
References (9 in total)

1. Feldt LS, Charter RA. Estimating the reliability of a test split into two parts of equal or unequal length. Psychol Methods. 2003.
2. Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry. 1960.
3. Charter RA. Statistical approaches to achieving sufficiently high test score reliabilities for research purposes. J Gen Psychol. 2008.
4. Bland JM, Altman DG. Cronbach's alpha. BMJ. 1997.
5. Zeger SL, Liang KY, Albert PS. Models for longitudinal data: a generalized estimating equation approach. Biometrics. 1988.
6. Novick MR, Lewis C. Coefficient alpha and the reliability of composite measurements. Psychometrika. 1967.
7. Leon AC, Marzuk PM, Portera L. More reliable outcome measures can reduce sample size requirements. Arch Gen Psychiatry. 1995.
8. Donner A, Birkett N, Buck C. Randomization by cluster. Sample size requirements and analysis. Am J Epidemiol. 1981.
9. Sijtsma K. On the Use, the Misuse, and the Very Limited Usefulness of Cronbach's Alpha. Psychometrika. 2008.

Related articles (25 in total)

1.  The association between Self-Reported Medication Adherence scores and systolic blood pressure control: a SPRINT baseline data study.

Authors:  William E Haley; Olivia N Gilbert; Robert F Riley; Jill C Newman; Christianne L Roumie; Jeffrey Whittle; Ian M Kronish; Leonardo Tamariz; Alan Wiggers; Donald E Morisky; Molly B Conroy; Eugene Kovalik; Nancy R Kressin; Paul Muntner; David C Goff
Journal:  J Am Soc Hypertens       Date:  2016-09-07

2.  Quality of Life in Palliative Care.

Authors:  Mellar P Davis; David Hui
Journal:  Expert Rev Qual Life Cancer Care       Date:  2017-11-08

3.  Pharmacists' knowledge, attitudes, beliefs, and barriers toward breast cancer health promotion: a cross-sectional study in the Palestinian territories.

Authors:  Ramzi Shawahna; Hiba Awawdeh
Journal:  BMC Health Serv Res       Date:  2021-05-06

4.  Barriers to Accessing Nighttime Supervisors: a National Survey of Internal Medicine Residents.

Authors:  Jillian S Catalanotti; Alec B O'Connor; Michael Kisielewski; Davoren A Chick; Kathlyn E Fletcher
Journal:  J Gen Intern Med       Date:  2021-01-28

5.  Normative data on regional sweat-sodium concentrations of professional male team-sport athletes.

Authors:  Mayur K Ranchordas; Nicholas B Tiller; Girish Ramchandani; Raj Jutley; Andrew Blow; Jonny Tye; Ben Drury
Journal:  J Int Soc Sports Nutr       Date:  2017-10-30

6.  Clubfoot treatment with Ponseti method-parental distress during plaster casting.

Authors:  Christian Walter; Saskia Sachsenmaier; Markus Wünschel; Martin Teufel; Marco Götze
Journal:  J Orthop Surg Res       Date:  2020-07-17

7.  Determinants of health-related quality of life among warfarin patients in Pakistan.

Authors:  Muhammad Shahid Iqbal; Fares M S Muthanna; Yaman Walid Kassab; Mohamed Azmi Hassali; Fahad I Al-Saikhan; Muhammad Zahid Iqbal; Abdul Haseeb; Muhammad Ahmed; Salah-Ud-Din Khan; Atta Abbas Naqvi; Md Ashraful Islam; Majid Ali
Journal:  PLoS One       Date:  2020-06-17

8.  Mental Health of Refugees and Migrants during the COVID-19 Pandemic: The Role of Experienced Discrimination and Daily Stressors.

Authors:  Eva Spiritus-Beerden; An Verelst; Ines Devlieger; Nina Langer Primdahl; Fábio Botelho Guedes; Antonio Chiarenza; Stephanie De Maesschalck; Natalie Durbeej; Rocío Garrido; Margarida Gaspar de Matos; Elisabeth Ioannidi; Rebecca Murphy; Rachid Oulahal; Fatumo Osman; Beatriz Padilla; Virginia Paloma; Amer Shehadeh; Gesine Sturm; Maria van den Muijsenbergh; Katerina Vasilikou; Charles Watters; Sara Willems; Morten Skovdal; Ilse Derluyn
Journal:  Int J Environ Res Public Health       Date:  2021-06-11

9.  Some recommendations for developing multidimensional computerized adaptive tests for patient-reported outcomes.

Authors:  Niels Smits; Muirne C S Paap; Jan R Böhnke
Journal:  Qual Life Res       Date:  2018-02-23

10.  Development of a Quality of Sexual Life Questionnaire for Breast Cancer Survivors in Mainland China.

Authors:  Li-Wei Jing; Chao Zhang; Feng Jin; Ai-Ping Wang
Journal:  Med Sci Monit       Date:  2018-06-16
