
Model Fit after Pairwise Maximum Likelihood.

M T Barendse, R Ligtvoet, M E Timmerman, F J Oort.

Abstract

Maximum likelihood factor analysis of discrete data within the structural equation modeling framework rests on the assumption that the observed discrete responses are manifestations of underlying continuous scores that are normally distributed. As maximizing the likelihood of multivariate response patterns is computationally very intensive, the sum of the log-likelihoods of the bivariate response patterns is maximized instead. Little is yet known about how to assess model fit when the analysis is based on such a pairwise maximum likelihood (PML) of two-way contingency tables. We propose new fit criteria for the PML method and conduct a simulation study to evaluate their performance in model selection. With large sample sizes (500 or more), PML performs as well as the robust weighted least squares analysis of polychoric correlations.


Keywords:  discrete data; fit statistics; pairwise maximum likelihood analysis; weighted least squares analysis

Year:  2016        PMID: 27148136      PMCID: PMC4838635          DOI: 10.3389/fpsyg.2016.00528

Source DB:  PubMed          Journal:  Front Psychol        ISSN: 1664-1078


1. Introduction

Tests and questionnaires usually consist of items with discrete ordinal response scales. In the factor analysis of discrete item responses, multivariate normally distributed scores are assumed to underlie the discrete item responses (e.g., Wirth and Edwards, 2007; Rhemtulla et al., 2012). Let X = (X_1, X_2, …, X_k) denote the vector of the k variables with discrete response scales, with realizations x_i ∈ {1, 2, …, m_i}, so that each item i has m_i response options. The observed score x_i on item i is related to the unobserved score X*_i on the underlying continuum through

x_i = c   if   τ_{c−1} < X*_i ≤ τ_c,

where τ_{c−1} and τ_c are the threshold parameters for category c of item i. An item with m_i categories has only m_i − 1 free thresholds, as τ_0 = −∞ and τ_{m_i} = ∞. Hereinafter, to simplify notation, we assume that the number of response options is equal across items, m_i = m for all i. As the underlying continuous variable is not observed, its mean and variance are not identified without further constraints. One can either fix the mean and variance (e.g., zero mean and unit variance), or fix two of the thresholds (e.g., at zero and unity). The latter is not possible with dichotomous items, because they are associated with just a single threshold.

Various estimation methods have been proposed for the factor analysis of (observed) discrete responses with (unobserved) underlying continuous scores. Here we discuss the weighted least squares method, the multivariate maximum likelihood method, and the bivariate maximum likelihood method. The weighted least squares (WLS) method was introduced as a two-step method. In the first step, the polychoric correlations between the observed variables are estimated. In the second step, the parameters of the structural equation model are estimated on the basis of the polychoric correlations.
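The threshold relation above can be sketched in a few lines. This is an illustrative helper (the function name `categorize` and its interface are not from the paper): category c is returned whenever τ_{c−1} < x* ≤ τ_c, with τ_0 = −∞ and τ_m = +∞ implicit.

```python
from bisect import bisect_left

def categorize(x_star, thresholds):
    """Map a continuous latent score x* to an ordinal category 1..m.

    An item with m categories has m - 1 finite thresholds; tau_0 = -inf and
    tau_m = +inf are implicit, so category c is returned when
    tau_{c-1} < x* <= tau_c.
    """
    # bisect_left finds the first threshold >= x*, so scores exactly on a
    # threshold fall in the lower category, matching the <= in the definition.
    return bisect_left(sorted(thresholds), x_star) + 1
```

For a three-point item with thresholds −0.6 and 0.6 (the values used later in the simulation study), a score of −0.9 falls in category 1, a score of 0.0 in category 2, and a score of 0.9 in category 3. A dichotomous item has the single threshold the text mentions.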
The general WLS fit function for discrete data, based on Browne (1984), who described the WLS fit for continuous data, is given by

F_WLS = (s − g)′ W^{−1} (s − g),

where s is a vector with the non-redundant elements of the k × k matrix of polychoric correlations and g is a vector with the corresponding elements of the k × k matrix of model-implied correlations. The weight matrix W is a positive definite matrix of order v × v, with v = k(k + 1)∕2. It contains consistent estimates of the asymptotic variances and covariances of the polychoric correlations (e.g., Jöreskog, 1990, 1994). Other authors also included the observed and model-implied threshold values in the s and g vectors, and the associated asymptotic covariances in matrix W, which resulted in two-step (see Lee et al., 1995) and three-step approaches (e.g., Muthén, 1984, 1989; Lee et al., 1990b). As the weight matrix can only be accurately estimated with large sample sizes (e.g., Rigdon and Ferguson, 1991; Muthén and Kaplan, 1992; Dolan, 1994), using the WLS function with the full weight matrix is often infeasible in practice. An alternative is to use the WLS function with a diagonal matrix W_D, containing only the diagonal elements of W, to obtain the parameter estimates. However, for inference one needs the full weight matrix, as implemented in the so-called robust WLS. The three-step robust WLS with mean-and-variance corrected chi-square and standard errors (WLSMV; Muthén et al., 1997; Asparouhov and Muthén, 2010), also referred to as RDWLS (see Katsikatsou et al., 2012), has been advocated because of its good performance in simulation studies (e.g., Beauducel and Herzberg, 2006; Barendse et al., 2015).

In the multivariate maximum likelihood estimation method (Lee et al., 1990a), the maximum likelihood estimator is used to estimate the variances, covariances, means, and thresholds of all X*_i simultaneously, in a single step.
The method is also known as the full information maximum likelihood (FIML) method, as one maximizes the likelihoods of the complete response patterns. This implies that one uses all information in the data, and does not have to rely on polychoric correlations, as in the WLS-related estimation methods. Let ρ denote the vector containing the correlations ρ_ij between all pairs of continuous variables X*_i and X*_j, with i, j = 1…k and i < j. The expected proportion π_r of response vector x_r, given correlations ρ and thresholds τ, is obtained by integrating the k-dimensional normal density f over the rectangle that the thresholds associate with pattern x_r:

π_r = ∫…∫ f(x*; ρ) dx*   over τ_{x_i−1} < x*_i ≤ τ_{x_i}, i = 1, …, k.

Let index r refer to a complete item response pattern (x_1, x_2, …, x_k), and let p_r denote the observed proportion of respondents with response pattern r in the sample. The log-likelihood

ℓ = Σ_r p_r log π_r

is maximized to obtain the estimates of the parameters ρ and τ. As maximizing this log-likelihood requires numerical evaluation of a high-dimensional integral over x* in order to obtain the probability of a response vector, Jöreskog and Moustaki (2001) already concluded that FIML is only feasible with a small number of variables (e.g., four or fewer). This seriously limits the application of FIML in practice.

In the bivariate maximum likelihood estimation method, high-dimensional numerical integration is avoided by considering bivariate information only. In this one-step method, the sum of the log-likelihoods of all possible bivariate response patterns is maximized, rather than that of the full multivariate response patterns. For two items i and j, the expected proportion π_{x_i x_j} of respondents with scores x_i, x_j is the double integral of the bivariate normal density over the rectangle (τ_{i,x_i−1}, τ_{i,x_i}] × (τ_{j,x_j−1}, τ_{j,x_j}], for threshold vectors τ_i = (τ_{i1}, τ_{i2}, …, τ_{i,m−1}) and τ_j = (τ_{j1}, τ_{j2}, …, τ_{j,m−1}). In order to obtain the likelihood estimates of the parameters ρ_ij, τ_i, and τ_j, instead of maximizing the multivariate likelihood, we maximize the sum of all bivariate log-likelihoods:

ℓ_P = Σ_{i<j} Σ_{x_i, x_j} p_{x_i x_j} log π_{x_i x_j},

where p_{x_i x_j} denotes the sample proportion of responses x_i and x_j.
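The bivariate rectangle probability π_{x_i x_j} can be sketched without any SEM software. This is an illustrative stand-alone computation, not the paper's implementation (the authors used Mx and R's "mvtnorm" package): the bivariate normal CDF is evaluated by one-dimensional quadrature over the conditioning variable, and each cell probability is the usual four-corner rectangle difference.

```python
import math

def std_norm_cdf(z):
    """Univariate standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bvn_cdf(a, b, rho, grid=2000, lo=-8.0):
    """P(X <= a, Y <= b) for a standard bivariate normal with correlation rho,
    via 1-D quadrature over the conditioning variable:
    P = integral over (-inf, a] of phi(x) * Phi((b - rho*x) / sqrt(1 - rho^2)) dx."""
    if a <= lo or b <= lo:
        return 0.0
    a, b = min(a, -lo), min(b, -lo)          # truncate +/-inf to +/-8 SD
    s = math.sqrt(1.0 - rho * rho)
    h = (a - lo) / grid
    total = 0.0
    for i in range(grid + 1):
        x = lo + i * h
        w = 0.5 if i in (0, grid) else 1.0   # trapezoidal weights
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        total += w * phi * std_norm_cdf((b - rho * x) / s)
    return total * h

def cell_probability(rho, tau_i, tau_j, ci, cj):
    """Model-implied proportion pi_{x_i x_j}: the bivariate normal mass of the
    rectangle (tau_{ci-1}, tau_ci] x (tau_{cj-1}, tau_cj], with tau_0 = -inf
    and tau_m = +inf appended to the m - 1 finite thresholds."""
    ti = [-math.inf] + list(tau_i) + [math.inf]
    tj = [-math.inf] + list(tau_j) + [math.inf]
    return (bvn_cdf(ti[ci], tj[cj], rho)
            - bvn_cdf(ti[ci - 1], tj[cj], rho)
            - bvn_cdf(ti[ci], tj[cj - 1], rho)
            + bvn_cdf(ti[ci - 1], tj[cj - 1], rho))
```

For two three-point items with thresholds (−0.6, 0.6), the nine cell probabilities sum to one for any admissible ρ, and the PML objective is obtained by weighting their logs with the observed cell proportions, summed over all item pairs.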
Jöreskog and Moustaki (2001) denoted this method the underlying bivariate normal method. They originally suggested using both the univariate and bivariate distributions. Based on the results of their simulation study, Katsikatsou et al. (2012) concluded that the univariate distributions have no additional value in the parameter estimation. The estimation method that relies on bivariate likelihoods only is referred to as the pairwise maximum likelihood (PML) method. The PML estimation method has the advantage over FIML that it is computationally feasible, but it has the disadvantage that it only uses the bivariate distributions of the observed variables, and thus does not utilize all available information. As an overall measure of fit, Jöreskog and Moustaki (2001) proposed to use the average of all bivariate likelihood ratio test statistics, but this statistic cannot be used as a goodness-of-fit test, as its distribution is unknown. Maydeu-Olivares (2006) and Maydeu-Olivares and Joe (2006) introduced a family of fit statistics for testing composite null hypotheses in multidimensional contingency tables. As the PML method has been recognized as a special case of the maximum composite likelihood method (Varin, 2008; Varin et al., 2011), this family can be used to obtain residual-based fit statistics (Maydeu-Olivares, 2006; Maydeu-Olivares and Joe, 2006) and standard errors (Xi, 2011) for the PML estimation method. In a simulation study, Xi (2011) found the composite likelihood fit statistic and standard error estimates of the bivariate maximum likelihood estimation method to be appropriate, when compared to a full information expectation maximization algorithm. However, these test statistics are not yet readily available, as they have not yet been implemented in a computer program. In the present paper, we propose three new fit statistics.
We investigate these test statistics in a simulation study and compare them with the overall goodness-of-fit statistic that is associated with robust WLS estimation. The new fit statistics have been made available in the open-source SEM software lavaan (see Appendix in Supplementary Materials; Rosseel, 2012).

2. Methods

To evaluate the three new fit statistics (explained in Section 2.2) for the PML estimation method, we conduct a simulation study in which we vary sample size (200, 500, and 1,000) and the number of response options (2, 3, and 4) in a fully crossed design, yielding nine conditions. With 1,000 replications per condition, we obtain 9,000 datasets that are analyzed using both the PML and robust WLS estimation methods.
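The fully crossed design can be written out directly; this small sketch (variable names are illustrative) enumerates the nine conditions and the resulting dataset count.

```python
from itertools import product

# Fully crossed simulation design: 3 sample sizes x 3 response scales.
sample_sizes = (200, 500, 1000)
num_categories = (2, 3, 4)
replications = 1000

conditions = list(product(sample_sizes, num_categories))  # 9 (N, m) pairs
total_datasets = len(conditions) * replications           # 9,000 datasets
```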

2.1. Data generation

We partly replicate the simulation study conducted by Katsikatsou et al. (2012). They generated item scores on six items according to a two-factor model, with factor loadings Λ, common factor variances and covariances Φ, and residual variances Θ taken from Katsikatsou et al. (2012). Continuous item scores are drawn from a multivariate normal distribution with covariance matrix ΛΦΛ′ + Θ and zero means. For each sample size (200, 500, and 1,000), we generate 1,000 datasets of continuous scores. These scores are categorized into two categories (threshold 0, yielding expected proportions 0.50 and 0.50), three categories (thresholds −0.6 and 0.6, yielding expected proportions of 0.27, 0.45, and 0.27), and four categories (thresholds −1.2, 0, and 1.2, yielding expected proportions 0.11, 0.39, 0.39, and 0.11; in line with Katsikatsou et al., 2012).
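The generation scheme can be sketched as follows. The loading and factor-correlation values below are illustrative placeholders (the paper's exact Λ and Φ values are not reproduced here); the residual variances I − diag(ΛΦΛ′) and the three-point thresholds −0.6 and 0.6 are as described in the text.

```python
import numpy as np

# Hypothetical loading pattern for a 6-item, 2-factor structure: the paper's
# exact values are not shown here, so illustrative loadings of 0.7 are used.
Lam = np.zeros((6, 2))
Lam[:3, 0] = 0.7
Lam[3:, 1] = 0.7
Phi = np.array([[1.0, 0.3], [0.3, 1.0]])      # unit factor variances, illustrative correlation

Sigma = Lam @ Phi @ Lam.T
np.fill_diagonal(Sigma, 1.0)                   # Theta = I - diag(Lam Phi Lam') => unit item variances

rng = np.random.default_rng(1)
x_star = rng.multivariate_normal(np.zeros(6), Sigma, size=500)

# Three-point scales: thresholds -0.6 and 0.6 give expected proportions of
# roughly 0.27, 0.45, and 0.27 per category.
x = np.digitize(x_star, bins=[-0.6, 0.6]) + 1  # ordinal categories 1, 2, 3
```

With these thresholds, roughly 45% of the generated item scores fall in the middle category, matching the expected proportions stated above.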

2.2. Model fit statistics

In the PML method, model parameters are estimated by maximizing the sum of the log-likelihoods of all bivariate response patterns, for all pairs of items. As the distribution of this sum is not known, we propose three measures of fit that are based on likelihood ratios: C_F, C_M, and C_P. The C_F and C_M fit statistics compare the model-implied proportions of response patterns with, respectively, the observed proportions of full response patterns (signified by subscript F) and the expected proportions under the assumption of multivariate normality (signified by subscript M). The C_P fit statistic compares the model-implied proportions of pairs of item responses to the observed proportions of pairs of item responses (signified by subscript P).

Specifically, C_F compares the log-likelihood of the model-implied proportions of the multivariate response patterns with that of the observed proportions of response patterns. Multiplied by two times the sample size N, we obtain

C_F = 2N Σ_r p_r log(p_r ∕ π_r),

which is asymptotically chi-square distributed with degrees of freedom equal to the difference between the number of possible response patterns and the number of model parameters to be estimated minus one (Agresti, 2002, pp. 590–591),

df_F = m^k − n − 1,

where n is the number of parameters to be estimated. As the number of possible response patterns m^k is usually much larger than sample size N, most response patterns will not be observed at all, yielding many empty cells in the multivariate m^k table, thereby causing bias in the C_F statistic. As a possible solution, Jöreskog and Moustaki (2001) considered only the number of response patterns that is actually observed, and calculated degrees of freedom as

df*_F = u − n − 1,

where u denotes the number of observed response patterns.
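The C_F computation can be sketched directly from the definitions above; the function below is an illustrative helper (its name and interface are not from the paper). Note that 2N Σ_r p_r log(p_r ∕ π_r) reduces to 2 Σ_r c_r log((c_r ∕ N) ∕ π_r), summing over observed patterns only, since unobserved patterns contribute zero.

```python
import math
from collections import Counter

def c_f_statistic(patterns, model_probs, n_params, m, k):
    """Likelihood-ratio statistic C_F = 2N * sum_r p_r * log(p_r / pi_r).

    patterns    : list of observed response-pattern tuples, one per respondent
    model_probs : dict mapping pattern -> model-implied probability pi_r
    Returns (C_F, df based on all m**k possible patterns, df* based on the
    u observed patterns, as in Joreskog and Moustaki, 2001).
    """
    N = len(patterns)
    counts = Counter(patterns)
    # Unobserved patterns (p_r = 0) contribute nothing to the sum.
    c_f = 2.0 * sum(c * math.log((c / N) / model_probs[r])
                    for r, c in counts.items())
    df = m**k - n_params - 1          # df_F: all possible response patterns
    df_star = len(counts) - n_params - 1  # df*_F: observed patterns only
    return c_f, df, df_star
```

When the model-implied probabilities exactly match the observed proportions, C_F is zero, and the sparseness problem discussed in the Results is visible here as the gap between m**k and the number of keys in `counts`.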
The fit statistic C_M compares the log-likelihood of the model-implied proportions of response patterns of the model of interest with that of the model that only assumes an underlying multivariate normal distribution (without any further restrictions):

C_M = C_F1 − C_F0,

where C_F1 is C_F for Model 1, the model of interest, and C_F0 is C_F for Model 0, the model that assumes underlying multivariate normality and that has all polychoric correlations ρ_ij and all thresholds τ as its parameters. Statistic C_M asymptotically has a chi-square distribution with degrees of freedom equal to the difference in the numbers of parameters of Models 0 and 1,

df_M = k(k − 1)∕2 + k(m − 1) − n_1,

where k(k − 1)∕2 is the number of polychoric correlations, k(m − 1) is the number of thresholds, and n_1 is the number of parameters of the model of interest. If the bias in C_F1 and C_F0 caused by empty cells in the m^k table cancels out in C_M, then C_M may outperform C_F.

The fit statistic C_P is based on pairs of responses only, comparing the observed and model-implied proportions of those pairs. For items i and j (Agresti, 2002),

C_P^{ij} = 2N Σ_{x_i, x_j} p_{x_i x_j} log(p_{x_i x_j} ∕ π_{x_i x_j}),

which has an asymptotic chi-square distribution with degrees of freedom equal to the available information (which is m² − 1) minus the number of parameters [i.e., 2(m − 1) thresholds and 1 correlation],

df_P = (m² − 1) − 2(m − 1) − 1.

To test the overall goodness of fit of the model, we consider all C_P^{ij} and select C_P = max(C_P^{ij}). As there are k(k − 1)∕2 possible pairs of items, this C_P should be applied with a Bonferroni-adjusted level of significance α*, with

α* = α ∕ [k(k − 1)∕2],

to keep the family-wise error rate at α. The hypothesis of overall goodness of fit is tested at α and rejected when C_P is significant at α*. Notice that with dichotomous items, m = 2 gives df_P = 0, so that the hypothesis of an underlying bivariate normal distribution cannot be tested. So, statistic C_P can only be applied when there are more than two response options.
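The pairwise pieces of C_P follow the same likelihood-ratio recipe on a single m × m table; the helpers below are illustrative sketches of the formulas above (names are not from the paper). For the six-item design used here, k = 6 gives 15 pairs, so the Bonferroni-adjusted level is 0.05 ∕ 15 ≈ 0.33%.

```python
import math

def pairwise_g2(counts, probs):
    """C_P^{ij} = 2N * sum over the m x m cells of p * log(p / pi) for one
    item pair, given observed cell counts and model-implied probabilities."""
    N = sum(sum(row) for row in counts)
    return 2.0 * sum(c * math.log((c / N) / p)
                     for crow, prow in zip(counts, probs)
                     for c, p in zip(crow, prow) if c > 0)

def df_pair(m):
    # (m^2 - 1) pieces of information minus 2(m - 1) thresholds and 1 correlation.
    return (m**2 - 1) - (2 * (m - 1) + 1)

def bonferroni_alpha(alpha, k):
    # k(k-1)/2 item pairs are tested; the largest C_P^{ij} is judged at alpha*.
    return alpha / (k * (k - 1) // 2)
```

As the text notes, `df_pair(2)` is zero: with dichotomous items the bivariate normality hypothesis is untestable, while three- and four-point items give 3 and 8 degrees of freedom per pair, matching Table 3.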
We will compare the performance of these statistics with the chi-square measure of overall goodness of fit that is associated with robust WLS estimation, which we will refer to as C_W. To account for the violation of distributional assumptions, this C_W statistic is subject to a scaling correction (Muthén et al., 1997; Satorra and Bentler, 2001; Asparouhov and Muthén, 2010). Here we will use the mean-and-variance corrected chi-square statistic (Asparouhov and Muthén, 2010).

2.3. Analysis

We fit three models to each of the 9,000 datasets: a baseline model, a one-factor model, and a two-factor model. The baseline model includes all polychoric correlations and thresholds. If the baseline model does not fit, then we must reject the hypothesis of an underlying multivariate normal distribution. The one-factor model has a free 6 × 1 matrix Λ, and the 1 × 1 matrix Φ is fixed at unity. The two-factor model corresponds to the data generation model and has a 6 × 2 matrix Λ with a pattern of free factor loadings that corresponds with the data generation model, and a 2 × 2 symmetric matrix Φ with diagonal elements fixed at unity and a free off-diagonal element. In both the one-factor and two-factor models, Θ is a 6 × 6 diagonal matrix equal to I − diag(ΛΦΛ′). We use two estimation methods: PML and robust WLS. Model fit is evaluated with measures C_F, C_M, and C_P after PML estimation, and with measure C_W after robust WLS estimation. The computer program Mx (Neale et al., 2002) is used for PML estimation, and the computer program Mplus 6.11 (Muthén and Muthén, 2010) for robust WLS estimation. The computer program R is used to calculate the fit measures C_F, C_M, and C_P (using the "mvtnorm" package; R version 2.12.0; R Development Core Team, 2010). The performance of the four fit measures is evaluated by calculating the proportions of model rejection in each of the conditions. The baseline model and the two-factor model should fit; when testing at a 5% level of significance, these two models should be rejected in 5% of all cases. The one-factor model should not fit and should always be rejected.
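The rejection-rate confidence intervals reported in the tables are not derived in the text, but a standard normal-approximation binomial interval over the 1,000 replications reproduces the tabled values; the sketch below makes that assumption explicit.

```python
import math

def rejection_rate_ci(rr, n_reps=1000, z=1.96):
    """Normal-approximation 95% CI for a rejection rate estimated from
    n_reps simulated datasets: rr +/- z * sqrt(rr * (1 - rr) / n_reps)."""
    half = z * math.sqrt(rr * (1.0 - rr) / n_reps)
    return rr - half, rr + half
```

For example, a rejection rate of 0.135 yields an interval of approximately (0.114, 0.156), matching the first row of Table 1.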

3. Results

Before presenting the results of the different methods for the evaluation of model fit, we briefly comment on the accuracy and efficiency of parameter estimation through PML. Accuracy is evaluated by calculating the absolute differences between the parameter estimates and the population values; the standard deviations of the estimates indicate their efficiency. Across all conditions, the average absolute difference of the factor loadings is 0.001 and the average standard deviation is 0.052. The average absolute difference of the correlation between the latent variables across all conditions is 0.002, and the standard deviation across all conditions is 0.069. Notably, PML shows slightly higher accuracy than robust WLS in terms of the estimates of the factor loadings and the correlation, with average absolute differences of 0.001 for PML and 0.003 for robust WLS; the efficiency is about the same. Katsikatsou et al. (2012) already reported on the accuracy and efficiency of the parameter estimates in the case of four response options, and our results are consistent with theirs.

3.1. The C_F fit statistic

Table 1 gives the results of fit evaluation with the C_F statistic for the baseline model, the one-factor model, and the two-factor model. For each condition, the means and standard deviations of the fit statistic are calculated across 1,000 replications. Rejection rates and 95% confidence intervals are given twice: once with degrees of freedom based on the number of possible response patterns (df_F) and once with degrees of freedom based on the number of observed response patterns (df*_F). Means and standard deviations of df*_F are given as well.
Table 1. The C_F fit statistic for the baseline, one-factor, and two-factor models.

                                   C_F with df_F                        C_F with df*_F
N     Scale      M(C_F)    SD(C_F)   df_F   RR     95% CI             M(df*_F)  SD(df*_F)  RR     95% CI

BASELINE MODEL
200   2-point     48.079     8.584     42   0.135  [0.114, 0.156]       25.099     2.827   0.838  [0.815, 0.861]
200   3-point    292.774    21.491    701   0.000  --                   91.058     5.436   1.000  --
200   4-point    495.966    29.052   4062   0.000  --                  114.035     5.493   1.000  --
500   2-point     47.921    10.052     42   0.154  [0.132, 0.176]       36.733     1.918   0.356  [0.326, 0.386]
500   3-point    396.944    25.581    701   0.000  --                  173.271     7.140   1.000  --
500   4-point    757.367    37.709   4062   0.000  --                  249.046     8.657   1.000  --
1000  2-point     44.524     9.815     42   0.088  [0.070, 0.106]       40.354     0.786   0.139  [0.118, 0.160]
1000  3-point    470.109    27.018    701   0.000  --                  243.532     8.069   1.000  --
1000  4-point    982.595    41.825   4062   0.000  --                  389.423    10.847   1.000  --

ONE-FACTOR MODEL
200   2-point     90.359    15.587     51   0.919  [0.902, 0.936]       34.099     2.827   0.999  [0.997, 1.000]
200   3-point    363.119    25.975    710   0.000  --                  100.058     5.436   1.000  --
200   4-point    580.380    33.606   4071   0.000  --                  123.035     5.493   1.000  --
500   2-point    140.112    23.840     51   1.000  --                   45.733     1.918   1.000  --
500   3-point    561.390    34.551    710   0.000  --                  182.271     7.140   1.000  --
500   4-point    957.356    46.486   4071   0.000  --                  258.046     8.657   1.000  --
1000  2-point    221.741    31.113     51   1.000  --                   49.354     0.786   1.000  --
1000  3-point    790.216    45.708    710   0.633  [0.603, 0.663]      252.532     8.069   1.000  --
1000  4-point   1374.973    59.976   4071   0.000  --                  396.423    10.847   1.000  --

TWO-FACTOR MODEL
200   2-point     55.817     9.096     49   0.120  [0.100, 0.140]       32.099     2.827   0.796  [0.771, 0.821]
200   3-point    300.436    21.524    708   0.000  --                   98.058     5.436   1.000  --
200   4-point    503.774    29.109   4069   0.000  --                  121.035     5.493   1.000  --
500   2-point     55.420    10.588     49   0.160  [0.137, 0.183]       43.733     1.918   0.332  [0.303, 0.361]
500   3-point    404.481    25.652    708   0.000  --                  180.271     7.140   1.000  --
500   4-point    765.212    37.649   4069   0.000  --                  256.046     8.657   1.000  --
1000  2-point     52.086    10.773     49   0.096  [0.078, 0.114]       47.354     0.786   0.135  [0.114, 0.156]
1000  3-point    477.726    27.308    708   0.000  --                  250.532     8.069   1.000  --
1000  4-point    990.351    41.933   4069   0.000  --                  398.423    10.847   1.000  --

Means (M) and standard deviations (SD) of the fit statistic, rejection rates (RR) at a 5% level of significance, and 95% confidence intervals (CI) of the rejection rates are calculated across the 1000 simulated datasets.

The fit of the baseline model is a test of the assumption of underlying multivariate normality, so we would expect rejection rates that equal the level of significance (5%). The overall rejection rates with degrees of freedom based on the number of possible response patterns (df_F) in conditions with two-point response scales are too high (13.5%, 15.4%, 8.8%). With three-point and four-point response scales, df_F is very large, so that the baseline model never gets rejected. The same is true for the two-factor model, which should fit the data but is rejected too often in the conditions with two-point response scales (12.0%, 16.0%, 9.6%) and never rejected in the other conditions. The one-factor model is not correct and should be rejected, which is the case in conditions with two-point response scales but not in conditions with three-point and four-point scales. We attribute these bad results in conditions with three-point and four-point scales to the large numbers of empty cells in the multivariate contingency tables. With three-point and four-point response scales, the numbers of possible response patterns are 729 and 4,096, whereas the total numbers of observations are only 200, 500, or 1,000, rendering the C_F statistic unsuitable. The overall rejection rates of the baseline model with degrees of freedom based on the number of observed response patterns (i.e., df*_F) are consistently much too high in all conditions, showing that the use of df*_F is not justified.

3.2. The C_M fit statistic

Table 2 gives the results of fit evaluation with the C_M statistic for the one-factor model and the two-factor model. The one-factor model is almost always rejected; the lowest rejection rate (0.987) occurs in the condition with sample size 200 and two-point response scales. The rejection rates for the two-factor model should be about equal to the level of significance (5%), but vary from 6.8% to 9.6%.
Table 2. The C_M fit statistic for the one-factor and two-factor models.

N     Scale      M(C_M)    SD(C_M)   df   RR     95% CI

ONE-FACTOR MODEL
200   2-point     42.280    13.885    9   0.987  [0.980, 0.994]
200   3-point     70.344    18.109    9   1.000  --
200   4-point     84.415    21.178    9   1.000  --
500   2-point     92.191    20.965    9   1.000  --
500   3-point    164.446    28.851    9   1.000  --
500   4-point    199.989    31.577    9   1.000  --
1000  2-point    177.217    28.776    9   1.000  --
1000  3-point    320.107    40.680    9   1.000  --
1000  4-point    392.379    43.946    9   1.000  --

TWO-FACTOR MODEL
200   2-point      7.738     4.260    7   0.091  [0.073, 0.109]
200   3-point      7.661     4.099    7   0.072  [0.056, 0.088]
200   4-point      7.808     4.173    7   0.083  [0.066, 0.100]
500   2-point      7.499     3.870    7   0.068  [0.052, 0.084]
500   3-point      7.537     4.170    7   0.081  [0.064, 0.098]
500   4-point      7.845     4.387    7   0.096  [0.078, 0.114]
1000  2-point      7.561     4.143    7   0.085  [0.068, 0.102]
1000  3-point      7.617     4.164    7   0.082  [0.065, 0.099]
1000  4-point      7.756     3.958    7   0.080  [0.063, 0.097]

Means (M) and standard deviations (SD) of the fit statistic, rejection rates (RR) at a 5% level of significance, and 95% confidence intervals (CI) of the rejection rates are calculated across the 1000 simulated datasets.

Overall, we consider the C_M results satisfactory. Apparently, the sparseness of the data and the (almost) empty cells that invalidate the use of the C_F statistic do not affect the C_M statistic much.

3.3. The C_P fit statistic

The C_P results are given in Table 3. As explained above, the C_P statistic cannot be used with two-point response scales. For all other conditions, Table 3 gives the means, standard deviations, and rejection rates of the highest C_P^{ij} among the 15 bivariate tests that are conducted on each dataset. To guard against inflation of the family-wise error rate, the level of significance is adjusted to 5% ∕ 15 ≈ 0.33%.
Table 3. The C_P fit statistic for the baseline, one-factor, and two-factor models.

N     Scale      M(C_P)    SD(C_P)   df   RR     95% CI

BASELINE MODEL
200   3-point      8.653     2.783    3   0.050  [0.036, 0.064]
200   4-point     16.044     3.656    8   0.046  [0.033, 0.059]
500   3-point      8.393     2.893    3   0.055  [0.041, 0.069]
500   4-point     15.953     3.740    8   0.050  [0.036, 0.064]
1000  3-point      8.419     2.846    3   0.049  [0.036, 0.062]
1000  4-point     16.141     3.640    8   0.044  [0.031, 0.057]

ONE-FACTOR MODEL
200   3-point     16.577     5.307    3   0.670  [0.641, 0.699]
200   4-point     24.338     6.159    8   0.539  [0.508, 0.570]
500   3-point     31.554     9.173    3   0.996  [0.992, 1.000]
500   4-point     42.918    10.273    8   0.995  [0.991, 0.999]
1000  3-point     58.083    12.343    3   1.000  --
1000  4-point     76.562    14.507    8   1.000  --

TWO-FACTOR MODEL
200   3-point      8.918     2.789    3   0.057  [0.043, 0.071]
200   4-point     16.306     3.675    8   0.052  [0.038, 0.066]
500   3-point      8.626     2.908    3   0.060  [0.045, 0.075]
500   4-point     16.190     3.744    8   0.049  [0.036, 0.062]
1000  3-point      8.672     2.844    3   0.054  [0.040, 0.068]
1000  4-point     16.409     3.659    8   0.054  [0.040, 0.068]

Means (M) and standard deviations (SD) of the fit statistic, rejection rates (RR) at a 5% level of significance, and 95% confidence intervals (CI) of the rejection rates are calculated across the 1000 simulated datasets.

The rejection rates for the baseline model vary between 4.4% and 5.5%, and for the two-factor model between 4.9% and 6.0%, which is reasonably close to the significance level of 5%. The one-factor model is almost always rejected in conditions with sample sizes of 500 and 1,000. However, in the small-sample conditions the rejection rates are only 67.0% and 53.9%.

3.4. The C_W fit statistic

For the purpose of comparison, Table 4 gives the C_W results after analyzing all datasets with the robust WLS method of estimation. The one-factor model is almost always rejected. The rejection rates for the two-factor model vary between 3.9% and 6.4%.
Table 4. The C_W fit statistic for the one-factor and two-factor models.

N     Scale      M(C_W)    SD(C_W)   df   RR     95% CI

ONE-FACTOR MODEL
200   2-point     46.014    14.775    9   0.994  [0.989, 0.999]
200   3-point     73.982    19.193    9   1.000  --
200   4-point     88.832    22.784    9   1.000  --
500   2-point    102.986    23.243    9   1.000  --
500   3-point    177.773    30.827    9   1.000  --
500   4-point    213.134    34.428    9   1.000  --
1000  2-point    201.828    33.761    9   1.000  --
1000  3-point    348.523    43.996    9   1.000  --
1000  4-point    421.149    47.477    9   1.000  --

TWO-FACTOR MODEL
200   2-point      7.032     3.780    7   0.044  [0.031, 0.057]
200   3-point      6.933     3.557    7   0.041  [0.029, 0.053]
200   4-point      6.992     3.686    7   0.060  [0.045, 0.075]
500   2-point      7.028     3.557    7   0.053  [0.039, 0.067]
500   3-point      6.804     3.604    7   0.039  [0.027, 0.051]
500   4-point      7.186     3.530    7   0.044  [0.031, 0.057]
1000  2-point      7.114     3.967    7   0.064  [0.049, 0.079]
1000  3-point      7.056     3.803    7   0.054  [0.040, 0.068]
1000  4-point      7.012     3.548    7   0.046  [0.033, 0.059]

Means (M) and standard deviations (SD) of the fit statistic, rejection rates (RR) at a 5% level of significance, and 95% confidence intervals (CI) of the rejection rates are calculated across the 1000 simulated datasets.

The C_W results for the two-factor model are somewhat better (closer to the 5% rejection rate) than the C_M results. The C_W results are similar to the C_P results, except for the rejection rates of the one-factor model in the small sample size conditions, where the C_W statistic seems to have more power.

4. Discussion

We proposed three new statistics for the overall goodness of fit of models that are fitted through the pairwise maximum likelihood (PML) method. With the C_F statistic we test the difference between the model-implied proportions of multivariate response patterns and the observed proportions of multivariate response patterns. With the C_M statistic we test the difference between the model-implied proportions of multivariate response patterns and the proportions of response patterns that are implied by the assumption of underlying multivariate normally distributed continuous variables. With the C_P statistic we test the difference between model-implied proportions of bivariate response patterns and observed proportions of bivariate response patterns. The C_F statistic appeared unsuitable for the evaluation of model fit. The performance of the C_M statistic was good, although the rejection rates for the two-factor model were consistently a little too high (varying between 6.8% and 9.6% instead of 5%). The C_P statistic showed the best results, with rejection rates close to the expected values (around 5% for models that should fit, and close to 100% for models that should not fit), except for the relatively small sample size of 200, at which the rejection rates for the wrong one-factor model were substantially too low. For all fit statistics, we only report results of testing at the 5% level of significance, as the results at the 1% level were very similar. As an aside, we note that in the condition with four response options and sample size 500, we have reported the results of a second drawing of 1,000 datasets. The first drawing produced, by chance, unexpectedly low C_P rejection rates for the baseline model (i.e., 0.032, with a confidence interval of 0.021–0.043) and the one-factor model (i.e., 0.035, with a confidence interval of 0.025–0.046) that did not seem representative. No other statistics were affected.
The performance of the PML fit statistics is only partly dependent on sample size. The C_F statistic is not suitable at any sample size, as we observe the negative consequences of very large contingency tables affected by sparseness of data (e.g., Agresti and Yang, 1987; Reiser and VandenBerg, 1994; Reiser and Lin, 1999; Jöreskog and Moustaki, 2001; Bartholomew and Leung, 2002). The alternative way of calculating degrees of freedom of Jöreskog and Moustaki (2001), on the basis of the number of observed response patterns instead of the number of possible response patterns, also appeared unsuitable. In practice, one can deal with sparseness by, for example, combining cells, reducing the number of categories, or eliminating the most offending variables (see Agresti and Yang, 1987; Jöreskog and Moustaki, 2001), but it was not possible to implement such remedies in this simulation study. The C_M statistic seems much less affected by sparseness of data. The C_P statistic uses bivariate tables only, but its power to reject the one-factor model is mediocre when the sample size is small. Still, the C_P rejection rates for the correct models are not affected by small sample size. In our simulation study we also varied the number of response options, but this manipulation did not affect the results of the C_M and C_P fit statistics much. We compared the results of the PML fit statistics with the results of robust weighted least squares (WLS) with the adjusted chi-square statistic C_W. The performance of C_W was very similar to the performance of C_P, and in small sample conditions C_W outperformed C_P in rejecting the one-factor model. Still, robust WLS estimation is very different from PML estimation. Robust WLS is a multiple-step method that relies on the estimated polychoric correlations. The model-implied correlations are then fitted to fixed polychoric correlations, so there is no direct relation between the model-implied correlations and the observed discrete responses.
For this reason, we expected PML to perform better than robust WLS. However, in the present simulation study of six variables measuring two common factors, robust WLS did at least as well as PML. We do not yet know how robust WLS and PML compare in larger datasets, with more variables and more complex models. As WLS relies on a multiple-step procedure in which summary statistics are calculated first, we would expect the single-step PML procedure to outperform the WLS procedure there. PML may also show advantages over WLS in the case of incomplete data. Finally, we think that the PML method is a feasible alternative to FIML in the case of larger datasets. Overall, the PML method seems a promising method that can be used to estimate all kinds of structural equation models, such as exploratory factor analysis models, multigroup models, and longitudinal models (Moustaki, 2003; Vasdekis et al., 2012). We used Mx to apply the PML method, but the PML estimates can also be obtained with OpenMx (Boker et al., 2011) and lavaan (Rosseel, 2012). To facilitate their use, the C_F, C_M, and C_P statistics have been implemented in lavaan (see Appendix in Supplementary Materials; Rosseel, 2012).

Author contributions

All authors meet the criteria for authorship. All authors contributed substantially to the conception and design of the work, and drafting and finalizing the paper. MB, RL, and FO designed and programmed the simulation study.

Conflict of interest statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References (8 in total)

1. Bartholomew, D. J., & Leung, S. O. (2002). A goodness of fit test for sparse 2^p contingency tables. Br J Math Stat Psychol.
2. Moustaki, I. (2003). A general class of latent variable models for ordinal manifest variables with covariate effects on the manifest and latent variables. Br J Math Stat Psychol.
3. Jöreskog, K. G., & Moustaki, I. (2001). Factor analysis of ordinal variables: a comparison of three approaches. Multivariate Behav Res.
4. Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: current approaches and future directions. Psychol Methods.
5. Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychol Methods.
6. Lee, S. Y., Poon, W. Y., & Bentler, P. M. (1995). A two-stage estimation of structural equation models with continuous and polytomous variables. Br J Math Stat Psychol.
7. Browne, M. W. (1984). Asymptotically distribution-free methods for the analysis of covariance structures. Br J Math Stat Psychol.
8. Boker, S., Neale, M., Maes, H., Wilde, M., Spiegel, M., Brick, T., Spies, J., Estabrook, R., Kenny, S., Bates, T., Mehta, P., & Fox, J. (2011). OpenMx: an open source extended structural equation modeling framework. Psychometrika.
