Modeling Not-Reached Items in Cognitive Diagnostic Assessments.

Lidan Liang1,2,3, Jing Lu1, Jiwei Zhang4, Ningzhong Shi1.   

Abstract

In cognitive diagnostic assessments with time limits, not-reached items (i.e., continuous nonresponses at the end of tests) frequently occur because examinees drop out of the test due to insufficient time. Oftentimes, the not-reached items are related to examinees' specific cognitive attributes or knowledge structures. Thus, the underlying missing data mechanism of not-reached items is non-ignorable. In this study, a missing data model for not-reached items in cognitive diagnosis assessments was proposed. A sequential model with linear restrictions on item parameters for missing indicators was adopted; meanwhile, the deterministic inputs, noisy "and" gate model was used to model the responses. The higher-order structure was used to capture the correlation between higher-order ability parameters and dropping-out propensity parameters. A Bayesian Markov chain Monte Carlo method was used to estimate the model parameters. The simulation results showed that the proposed model improved diagnostic feedback results and produced accurate item parameters when the missing data mechanism was non-ignorable. The applicability of our model was demonstrated using a dataset from the Program for International Student Assessment 2018 computer-based mathematics cognitive test.
Copyright © 2022 Liang, Lu, Zhang and Shi.

Keywords:  Bayesian analysis; cognitive diagnosis assessments; missing data mechanism; not-reached items; sequential model

Year:  2022        PMID: 35769736      PMCID: PMC9236559          DOI: 10.3389/fpsyg.2022.889673

Source DB:  PubMed          Journal:  Front Psychol        ISSN: 1664-1078


Introduction

In educational and psychological assessments, examinees often do not reach the end of the test, which may be due to test fatigue or insufficient time. The percentage of not-reached items in large-scale cognitive testing varies across individuals, items, and countries. According to the 2006 Program for International Student Assessment (PISA) study, an average of 4% of items are not reached (OECD, 2009). In the PISA 2015 (OECD, 2018) computer-based mathematics cognitive dataset, the percentage of not-reached items in Chinese Taipei is approximately 3%, and the percentage of not-reached items for the science cluster in a Canadian sample is 2% (Pohl et al., 2019). According to the PISA 2018 (OECD, 2021) computer-based mathematics cognitive data, the proportion of nonresponses for each item ranges from 0 to 17.3% in some countries, and the maximum percentage of not-reached items is as high as 5%. Thus, the missing proportion at the item level is relatively high. In addition, the percentage of nonresponses per nation (OECD countries) ranges from 4% to 15% according to the PISA 2006 study (OECD, 2009). Even though the overall proportion of item nonresponses is small, the rate of not-reached responses for a single item or a specific examinee may be large. Previous literature on missing data in the item response theory (IRT) framework has shown that simply ignoring nonresponses or treating them as incorrect leads to biased estimates of item and person parameters (Lord, 1974, 1983; Ludlow and O'Leary, 1999; Huisman, 2000). Rubin's (1976) missing data mechanisms are worth reviewing for statistical inference. The complete data comprise the observed data and the unobservable missing data, and there are three types of missing data mechanisms (Rubin, 1976; Little and Rubin, 2002): missing completely at random (MCAR), missing at random (MAR), and not missing at random (NMAR). MCAR means that the probability of missingness is independent of both the observed and the missing data.
MAR means that the probability of missingness depends only on the observed data. NMAR means that the probability of missingness depends on the unobserved missing data itself, and it is therefore not ignorable. In general, the MCAR and MAR mechanisms do not affect the parameter estimates of interest or the follow-up inferences, so missing data can be ignored under these two mechanisms. However, Rose et al. (2010, 2017) showed that the proportion of examinees' correct scores based on the observed item responses was negatively correlated with the item nonresponse rate, which suggests that simple questions are easy to answer, whereas numerous difficult items may be omitted. Item nonresponses may depend on the examinee's ability and the difficulty of the items, and therefore the ignorable missing data mechanism assumption (MCAR or MAR) becomes highly questionable. This has led to the development of measurement models that consider the NMAR mechanism. Specifically, several scholars have proposed multidimensional IRT (MIRT) models to handle missing responses (e.g., Holman and Glas, 2005; Glas and Pimentel, 2008; Pohl et al., 2019; Lu and Wang, 2020). For example, Glas and Pimentel (2008) used a combination of two IRT models to model not-reached items in speeded tests within the IRT framework. Subsequently, Rose et al. (2010) proposed latent regression models and multiple-group IRT models for non-ignorable missing data. Debeer et al. (2017) developed two item response tree models to handle not-reached items in various application scenarios.
Recently, cognitive diagnosis (von Davier, 2008, 2014, 2018; Xu and Zhang, 2016; Zhan et al., 2018; Zhang et al., 2020) has received considerable attention from researchers because cognitive diagnostic tests enable the evaluation of respondents' mastery of skills or attributes and allow diagnostic feedback for teachers or clinicians, which in turn aids decision-making regarding remedial guidance or targeted interventions. In addition, cognitive diagnostic tests improve on traditional tests. General educational examinations only provide test or ability scores in large-scale testing. However, from a single score we can neither conclude that an examinee has mastered the knowledge nor understand why the examinee answered questions incorrectly. Moreover, it is impossible to infer differences in knowledge states and cognitive structures between individuals with the same score. Thus, the information provided by traditional IRT is not suited to the needs of individual learning and development. To date, numerous cognitive diagnostic models (CDMs) have been developed, such as the deterministic inputs, noisy "and" gate (DINA) model (de la Torre and Douglas, 2004; de la Torre, 2009); the noisy inputs, deterministic "and" gate (NIDA) model (Maris, 1999); the deterministic inputs, noisy "or" gate (DINO) model (Templin and Henson, 2006); the log-linear CDM (Henson et al., 2009); and the generalized DINA model (de la Torre, 2011). Subsequently, a higher-order DINA (HO-DINA) model (de la Torre and Douglas, 2004) was proposed to link the latent attributes via a higher-order ability. Furthermore, Ma (2021) proposed a higher-order CDM with polytomous attributes for dichotomous response data. Numerous studies have focused on item nonresponses in IRT models (Finch, 2008; Glas and Pimentel, 2008; Debeer et al., 2017). However, only a few studies have discussed missing data in cognitive assessments.
Ömür Sünbül (2018) limited the missing data mechanisms to MCAR and MAR in the DINA model and investigated different imputation approaches for dealing with item nonresponses, such as coding item responses as incorrect and using person mean imputation, two-way imputation, and expectation-maximization algorithm imputation. Heller et al. (2015) argued that CDMs may have underlying relationships with knowledge space theory (KST), which has been explored in several previous studies (e.g., Doignon and Falmagne, 1999; Falmagne and Doignon, 2011). Furthermore, de Chiusole et al. (2015) and Anselmi et al. (2016) developed models for KST that consider different missing data mechanisms (i.e., MCAR, MAR, and NMAR). However, in their work, missing response data may not have been handled effectively, which may have biased the results. Shan and Wang (2020) introduced latent missing propensities for examinees in the DINA model. They also included a potential category parameter, which affects the tendency to omit items. However, they did not provide a detailed explanation of the category parameters. Moreover, their model did not distinguish among types of item nonresponses. Conflating different types of missing data produces inaccurate attribute profile estimates, which in turn result in incorrect diagnostic classifications. To the best of our knowledge, no model has been developed to date that describes not-reached items in cognitive diagnosis. Thus, a missing data model for not-reached items is proposed to fill this gap in cognitive diagnostic assessments. Specifically, a higher-order DINA model is used to model the responses, and an IRT model, namely a sequential model with linear restrictions on the item parameters (Glas and Pimentel, 2008), is used to describe the missing indicators. The model components are connected by bivariate normal distributions between examinees' latent ability parameters and missing propensity parameters and between the item intercept and interaction parameters.
The rest of this paper is organized as follows. First, an IRT model is introduced as the missing indicator model for not-reached items, a higher-order DINA model is used for the observed responses, and the correlation between the person parameters is specified. Second, a Markov chain Monte Carlo (MCMC) algorithm (Patz and Junker, 1999; Chen et al., 2000) is developed to estimate the parameters of the proposed model, and simulation studies are conducted to assess the performance of the proposed model under different simulation conditions. Third, a real dataset from the PISA 2018 (OECD, 2021) computer-based mathematics assessment is analyzed. Concluding remarks and future perspectives are provided thereafter.

Model Construction

A two-dimensional data matrix with elements Y_ij is considered, where examinees are indexed by i = 1,…,N and items by j = 1,…,J. If the ith examinee answers the jth item, the response is observed and Y_ij equals the observation y_ij; otherwise, it is missing. For convenience, the superscript "d" is used to mark the missing-data indicators and the related parameters.

Missing Data Model for Not-Reached Items

Glas and Pimentel (2008) proposed a sequential model with a linear restriction on the item parameters to model not-reached items. Specifically, the missing indicator matrix D has elements d_ij, where d_ij = 1 indicates that the ith examinee drops out at the jth item (the item is not reached) and d_ij = 0 otherwise. Because of the small overall proportion of not-reached responses, the appropriate model must have few parameters to be estimable (Lord, 1983). The one-parameter logistic model (1PLM; Rasch, 1960) is adopted for the missing indicators, so the dropping-out probability of examinee i on item j is

P(d_ij = 1 | θ_i^d, η_j) = exp(θ_i^d − η_j) / (1 + exp(θ_i^d − η_j)),

where η_j is the so-called item difficulty parameter for item j and θ_i^d denotes the ith examinee's dropping-out propensity. The linear restriction on the item parameters is

η_j = η_0 + η_1 (j − J),

so that η_J = η_0 when j = J; that is, η_0 is the difficulty threshold of the last item, and η_1 models a uniform change in the dropping-out probability as a function of item position in the test. Usually, the parameter η_1 is negative, and hence examinees are more likely to drop out at items in later positions of the test.
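As a concrete illustration, the dropping-out process can be simulated directly. The sketch below is ours, not the authors' code: the exact form of the linear restriction, η_j = η_0 + η_1(j − J), is our reading of the description (it makes η_0 the threshold of the last item, with a negative η_1 raising the dropout probability toward the end of the test), and the function names are invented for illustration.

```python
import numpy as np

def dropout_probability(theta_d, j, J, eta0, eta1):
    """1PLM probability that an examinee with dropping-out propensity
    theta_d drops out at item j, under the linear restriction
    eta_j = eta0 + eta1 * (j - J), so that eta_J = eta0 at the last item."""
    eta_j = eta0 + eta1 * (j - J)
    return 1.0 / (1.0 + np.exp(-(theta_d - eta_j)))

def simulate_not_reached(theta_d, J, eta0, eta1, rng):
    """Generate a missing-indicator vector d for one examinee: once the
    examinee drops out at item j, items j..J are all not reached."""
    d = np.zeros(J, dtype=int)
    for j in range(1, J + 1):
        if rng.random() < dropout_probability(theta_d, j, J, eta0, eta1):
            d[j - 1:] = 1  # item j and all subsequent items are missing
            break
    return d

rng = np.random.default_rng(0)
d = simulate_not_reached(theta_d=0.0, J=30, eta0=1.0, eta1=-0.32, rng=rng)
```

With the paper's medium condition (η_0 = 1, η_1 = −0.32), the dropout probability rises monotonically with item position, so the missingness concentrates at the end of the test, as a not-reached mechanism should.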

Higher-Order Deterministic Inputs, Noisy “And” Gate Model

The DINA model describes the probability of an item response as a function of the latent attributes. The probability that the ith examinee answers item j correctly is

P(Y_ij = 1 | α_i) = (1 − s_j)^{η_ij} g_j^{1 − η_ij}, with η_ij = ∏_{k=1}^{K} α_ik^{q_jk},  (4)

where s_j and g_j are the slipping and guessing probabilities of the jth item, respectively; 1 − s_j − g_j = IDI_j is the jth item discrimination index (de la Torre, 2008); and α_ik is the kth attribute of the ith examinee, with α_ik = 1 if examinee i masters attribute k and α_ik = 0 otherwise. The Q matrix (Tatsuoka, 1983) is a J × K matrix, with q_jk = 1 denoting that attribute k is required for answering the jth item correctly and q_jk = 0 denoting that it is not. Equation (4) can be reparameterized as the reparameterized DINA model (DeCarlo, 2011):

logit P(Y_ij = 1 | α_i) = β_j + δ_j η_ij,

where β_j and δ_j are the item intercept and interaction parameters, respectively, with β_j = logit(g_j) and δ_j = logit(1 − s_j) − β_j; they are assumed to follow a bivariate normal distribution, (β_j, δ_j)′ ∼ N((μ_β, μ_δ)′, Σ_I). The higher-order structure is very flexible because it reduces the number of model parameters and provides higher-order abilities and more accurate attribute structures. Because the attributes in a test are often correlated, the higher-order structure (de la Torre and Douglas, 2004; Zhan et al., 2018) for the attributes is expressed as

P(α_ik = 1 | θ_i^h) = exp(λ_k + γ_k θ_i^h) / (1 + exp(λ_k + γ_k θ_i^h)),

where P(α_ik = 1 | θ_i^h) is the probability that the ith examinee masters the kth attribute, θ_i^h is the higher-order ability of examinee i, and γ_k and λ_k are the slope and intercept parameters of attribute k, respectively. The slope parameter γ_k is restricted to be positive because a knowledge attribute is mastered better as the ability θ_i^h increases.
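A minimal sketch of the two building blocks just described, the DINA response probability and the higher-order attribute structure, may make the notation concrete. This is an illustrative implementation of the standard formulas, not the authors' code; the function names are ours.

```python
import numpy as np

def dina_prob(alpha, q, s, g):
    """DINA model: P(Y_ij = 1 | alpha_i) = (1 - s_j)^eta * g_j^(1 - eta),
    where the ideal response eta = prod_k alpha_ik^q_jk equals 1 only if
    the examinee masters every attribute the item requires."""
    eta = np.prod(np.asarray(alpha) ** np.asarray(q))
    return (1 - s) ** eta * g ** (1 - eta)

def higher_order_prob(theta_h, lam_k, gamma_k):
    """Higher-order structure: P(alpha_ik = 1 | theta_h) with attribute
    intercept lam_k and positive attribute slope gamma_k."""
    return 1.0 / (1.0 + np.exp(-(lam_k + gamma_k * theta_h)))

# An examinee mastering attributes 1 and 2 answers an item requiring
# exactly those two attributes with probability 1 - s_j:
p = dina_prob([1, 1, 0], [1, 1, 0], s=0.1, g=0.2)  # 0.9
```

If any required attribute is missing, the ideal response drops to 0 and the success probability falls to the guessing parameter g_j, which is the "noisy and" behavior the model's name refers to.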

Missing Mechanism Models

If the observation probability p(y | d, β, δ, α) does not depend on θ^d, and θ^h and θ^d are independent, then the missing data are ignorable; in this situation, the model is treated as a MAR model. Let p(y | d, β, δ, α) be the measurement model for the observed data, let p(d | η, θ^d) be the measurement model for the missing data indicators, and let p(θ^h) and p(θ^d) be the densities of θ^h and θ^d, respectively. To model non-ignorable missing data, it is assumed that θ^h and θ^d follow a bivariate normal distribution N(μ_P, Σ_P); thus, the two models describe the two missing mechanisms (i.e., MAR and NMAR). Next, we introduce the two missing data models for the not-reached items.

Missing at Random Model

The MAR model is regarded as a model that ignores the missing data process, because the latent variables θ_i^h and θ_i^d are independent. Its likelihood therefore factors into a part for the item responses and a part for the missing indicators:

L_MAR ∝ ∏_{i=1}^{N} [ p(y_i | d_i, β, δ, α_i) p(α_i | θ_i^h) p(θ_i^h) ] × [ p(d_i | η, θ_i^d) p(θ_i^d) ].

In other words, the model for the missing data process can be ignored when estimating the item response model.

Not Missing at Random Model

The NMAR model is often called the non-ignorable model; in this case, θ_i^h and θ_i^d are correlated, and a covariance matrix is used to describe the relationship between the latent higher-order ability parameters and the missing propensity parameters. The likelihood function of the NMAR model can be written as

L_NMAR ∝ ∏_{i=1}^{N} p(y_i | d_i, β, δ, α_i) p(d_i | η, θ_i^d) p(α_i | θ_i^h) p(θ_i^h, θ_i^d),

where the person parameters are assumed to follow a bivariate normal distribution with mean vector (0, 0)′ and covariance matrix

Σ_P = ( σ²_{θ^h}  σ_{θ^h θ^d} ; σ_{θ^h θ^d}  σ²_{θ^d} ).

Model Identifications

In Equations (2) and (9), the linear parts of the 1PLM and the HO-DINA model are θ_i^d − η_j and λ_k + γ_k θ_i^h, respectively. To eliminate the trade-off between the dropping-out propensity θ_i^d and the difficulty threshold parameter, and between the higher-order ability θ_i^h and the attribute intercept λ_k, the population means of the person parameters are set to zero, that is, μ_P = (0, 0)′. In addition, σ²_{θ^h} is fixed at 1 to eliminate the scale trade-off between θ_i^h and γ_k (Lord and Novick, 1968; Fox, 2010). Beyond these identification constraints, two local independence assumptions are made: the α_ik values are conditionally independent given θ_i^h, and the Y_ij values are conditionally independent given α_i.

Bayesian Model Assessment

In the Bayesian framework, two common model evaluation criteria, the deviance information criterion (DIC; Spiegelhalter et al., 2002) and the logarithm of the pseudo-marginal likelihood (LPML; Geisser and Eddy, 1979; Ibrahim et al., 2001), are used to compare the missing mechanism models on the basis of the MCMC samples. Let Ω denote all model parameters and define the deviance

Dev(Y, D, Ω) = −2 log f(Y, D | Ω).

On the basis of the posterior distribution of Dev(Y, D, Ω), the DIC is defined as

DIC = Dev̄ + p_D,

where Dev̄ = (1/R) Σ_{r=1}^{R} Dev(Y, D, Ω^{(r)}) is the posterior mean deviance and a Bayesian measure of fit, r = 1,…,R indexes the iterations of the algorithm, and p_D = Dev̄ − Dev(Y, D, Ω̄), with Ω̄ the posterior mean of Ω, is the effective number of parameters and a Bayesian measure of complexity. A smaller DIC indicates a better model fit. The conditional predictive ordinate (CPO) index of the two models was also computed,

CPO_ij = ( (1/R) Σ_{r=1}^{R} 1 / f(Y_ij, D_ij | Ω^{(r)}) )^{−1},

where in practice a constant such as Q = max_r {−log f(Y, D | Ω^{(r)})} can be subtracted before exponentiating for numerical stability. The summary statistic for the CPO_ij is the sum of their logarithms, termed the LPML:

LPML = Σ_i Σ_j log CPO_ij,

where the model with a larger LPML fits the data better.
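Given stored MCMC output, both criteria are simple to compute. The sketch below is ours (the function names are invented): it assumes you have the deviance at each draw plus its value at the posterior mean of the parameters, and the per-observation likelihoods at each draw; the CPO is computed directly as a harmonic mean, without the max-shift stabilization mentioned above.

```python
import numpy as np

def dic(dev_draws, dev_at_posterior_mean):
    """DIC = Dev_bar + p_D, where Dev_bar is the posterior mean deviance
    over the R retained draws and p_D = Dev_bar - Dev(posterior mean) is
    the effective number of parameters."""
    dev_bar = np.mean(dev_draws)
    p_d = dev_bar - dev_at_posterior_mean
    return dev_bar + p_d

def lpml(like_draws):
    """LPML = sum of log CPOs. like_draws is an R x n array whose (r, i)
    entry is f(obs_i | Omega^(r)); each CPO_i is the harmonic mean of the
    likelihood draws for observation i."""
    like_draws = np.asarray(like_draws, dtype=float)
    cpo = 1.0 / np.mean(1.0 / like_draws, axis=0)  # harmonic mean per obs
    return np.sum(np.log(cpo))
```

A smaller DIC and a larger LPML both favor a model, matching the selection rule applied to the two missing mechanism models later in the paper.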

Simulation Studies

Three simulation studies were conducted to evaluate different aspects of the proposed model. Simulation study I assessed whether the MCMC algorithm could successfully recover the parameters of the proposed model under different numbers of examinees and items. Simulation study II investigated parameter recovery for different numbers of attributes with the same examinees and items. Simulation study III was intended to show the differences in model parameter estimates between the NMAR and MAR models under different dropping-out proportions and correlations among the person parameters.

Data Generation

In the three simulation studies, the item parameters were sampled from the following distribution: (β_j, δ_j)′ ∼ MVN((μ_β, μ_δ)′, Σ_I), with μ_β = −2.197, μ_δ = 4.394, and Σ_I = (1 −0.8; −0.8 1). These values were used in the Shan and Wang (2020) study. The dropping-out proportions across three levels (i.e., low, medium, and high) were varied by setting different combinations of η_0 and η_1: the dropping-out proportion was 3.8% (low) when η_0 = 1, η_1 = −0.7; 12% (medium) when η_0 = 1, η_1 = −0.32; and 25% (high) when η_0 = 1, η_1 = −0.18. The attribute intercept parameters were λ = (−1, −0.5, 0, 0.5, 1), and the attribute slope parameters were γ_k = 1.5 for all attributes, consistent with the study by Shan and Wang (2020). Three Q matrices with different numbers of attributes (Figure 1) were considered; they were taken from the Xu and Shang (2018) and Shan and Wang (2020) studies.
FIGURE 1

K-by-J Q matrices in simulation studies, where black means “1” and white means “0.” K is the number of attributes and J is the number of items.

The person parameters θ_i^h and θ_i^d were simulated from a bivariate normal distribution with zero means, σ²_{θ^h} = 1, and σ²_{θ^d} = 0.25. Three levels of correlation between θ^h and θ^d were considered: 0 (uncorrelated), −0.5 (medium), and −0.8 (high). The missing data due to dropped items were then generated from the missing data model, with three levels of dropping-out proportions: 3.8% (low), 12% (medium), and 25% (high).
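The generating distributions above can be sketched in a few lines. This is our illustrative code, not the authors' generation script: the item-parameter values and the θ^d variance of 0.25 are taken from the text, while the seed, the variable names, and the choice of the medium correlation condition are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2022)
N, J = 500, 30

# Item intercept/interaction parameters (values from Shan & Wang, 2020):
mu_item = np.array([-2.197, 4.394])
sigma_item = np.array([[1.0, -0.8],
                       [-0.8, 1.0]])
beta, delta = rng.multivariate_normal(mu_item, sigma_item, size=J).T

# Person parameters: higher-order ability theta_h (variance fixed at 1)
# and dropping-out propensity theta_d (variance 0.25); the medium
# condition rho = -0.5 gives covariance rho * 1 * 0.5 = -0.25.
rho = -0.5
sigma_person = np.array([[1.0, rho * 0.5],
                         [rho * 0.5, 0.25]])
theta_h, theta_d = rng.multivariate_normal(np.zeros(2), sigma_person, size=N).T
```

The negative correlation means that less able examinees have a higher dropping-out propensity, which is what makes the not-reached mechanism non-ignorable in the NMAR conditions.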

Model Calibration

The priors of η_0 and η_1 were η_0 ∼ N(0, 2) and η_1 ∼ N(0, 2), respectively. The priors of the item parameters β_j and δ_j were assumed to be bivariate normal, (β_j, δ_j)′ ∼ N((μ_β, μ_δ)′, Σ_I), and the priors of the person parameters were assumed to be bivariate normal, (θ_i^h, θ_i^d)′ ∼ N((0, 0)′, Σ_P). The priors of the higher-order structure parameters were λ_k ∼ N(0, 4) and γ_k ∼ N(0, 4)I(γ_k > 0); the priors for the person covariance matrix elements were σ_{θ^h θ^d} ∼ U(−1, 1) and σ²_{θ^d} ∼ Inv-Gamma(2, 2); the prior of the item covariance matrix was Σ_I ∼ Inv-Wishart(Σ_I0^{−1}, v); and the hyperpriors were specified as Σ_I0 = (1 0; 0 1), v = 2, k = 1, μ_β ∼ N(−2.197, 2), and μ_δ ∼ N(4.394, 2)I(μ_δ > 0). The hyperpriors specified above were on a logit scale for β_j and δ_j and were consistent with those reported by Zhan et al. (2018). The mean guessing effect was set at 0.1, which roughly equals a logit value of −2.197 for μ_β. A standard deviation of √2 on the logit scale for μ_β implies that the simulated mean guessing effect ranges from 0.026 to 0.314. In addition, the mean slipping effect was also set at 0.1, which implies that μ_δ is approximately 4.394 on the logit scale; the simulated mean slipping effect ranges from 0.007 to 0.653 under a standard deviation of √2 on the logit scale for δ_j. The initial values of the model parameters were as follows: β_j = 0 and δ_j = 0 for j = 1,…,J; θ_i^h = 0 and θ_i^d = 0 for i = 1,…,N; σ_{θ^h θ^d} = 0, σ²_{θ^d} = 1, η_0 = 0, η_1 = 0, μ_β = 0, μ_δ = 0, Σ_I = (1 0; 0 1), and μ_P = (0, 0)′. In addition, λ_k = 0 and γ_k = 1 for k = 1,…,K, and the elements α_ik (i = 1,…,N, k = 1,…,K) of the initial attribute matrix were sampled randomly from {0, 1}. The proposal variances were chosen to give Metropolis acceptance rates between 25% and 40%. The Markov chain length was set at 10,000 so that the potential scale reduction factor (PSRF; Brooks and Gelman, 1998) was less than 1.1 for all parameters, which implied proper chain convergence. The first 5,000 iterations were treated as burn-in.
The final parameter estimates were obtained as the average of the post-burn-in iterations. In terms of evaluation criteria, the bias and root mean squared error (RMSE) were used to assess the accuracy of the parameter estimates:

Bias(η) = (1/R) Σ_{r=1}^{R} (η̂^{(r)} − η),  RMSE(η) = √[ (1/R) Σ_{r=1}^{R} (η̂^{(r)} − η)² ],

where η is the true value of the parameter and η̂^{(r)} is the estimate in the rth replication. There were R = 30 replications for each simulation condition. The recovery of the attributes was evaluated using the attribute correct classification rate (ACCR) and the pattern correct classification rate (PCCR):

ACCR_k = (1/(NR)) Σ_{r=1}^{R} Σ_{i=1}^{N} I(α̂_ik^{(r)} = α_ik),  PCCR = (1/(NR)) Σ_{r=1}^{R} Σ_{i=1}^{N} I(α̂_i^{(r)} = α_i),

where I(·) is the indicator function, equal to 1 if its condition holds and 0 otherwise.
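The four evaluation criteria reduce to a few lines of code. The sketch below is ours (the function names are invented), with replications stacked along the first axis of the input arrays.

```python
import numpy as np

def bias_rmse(estimates, true_value):
    """Bias and RMSE of one parameter over R replications."""
    est = np.asarray(estimates, dtype=float)
    bias = np.mean(est - true_value)
    rmse = np.sqrt(np.mean((est - true_value) ** 2))
    return bias, rmse

def accr(alpha_hat, alpha_true):
    """Attribute correct classification rate for a single attribute:
    share of (replication, examinee) cells classified correctly."""
    return np.mean(np.asarray(alpha_hat) == np.asarray(alpha_true))

def pccr(alpha_hat, alpha_true):
    """Pattern correct classification rate: share of examinees whose whole
    K-dimensional attribute profile is recovered exactly (last axis = K)."""
    match = np.asarray(alpha_hat) == np.asarray(alpha_true)
    return np.mean(np.all(match, axis=-1))
```

Because the PCCR requires the entire profile to match, it is always no larger than the smallest ACCR, which is visible in the tables that follow.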

Simulation Study I

In simulation study I, different numbers of examinees and items were considered for estimating the model parameters under a fixed number of five attributes. Three conditions were considered in this simulation: (a) 500 examinees and 30 items, (b) 1,000 examinees and 30 items, and (c) 500 examinees and 20 items. The correlation between θ^h and θ^d was −0.3, and the dropping-out proportion was medium. Table 1 presents the bias and RMSE of the ability parameters and item parameters, as well as the attribute parameter estimates. For 30 items and 5 attributes (see the first four columns of Table 1), the item parameter estimates improve when the number of examinees increases from 500 to 1,000: the bias and RMSE of δ and μ_β decrease, and the RMSE of β, μ_δ, and the item covariance matrix elements are reduced. For 500 examinees and 5 attributes (see the middle four columns of Table 1), the person parameter estimates improve when the number of items increases from 20 to 30, and θ^h and θ^d are estimated more accurately. The ACCRs and PCCRs are presented in Table 2. The attributes and attribute patterns could be recovered satisfactorily with a larger sample and a longer test. The ACCRs and PCCRs decrease when the number of examinees or the test length decreases (see the first three columns of Table 2), and the changes are particularly marked when the test length is reduced. Figure 2 shows the PSRF of several item and attribute parameters under 500 examinees and 30 items. The item intercept parameter β, the interaction parameter δ, the attribute slope parameter γ, and the attribute intercept parameter λ converge within 5,000 iterations, and the convergence of β and δ is noticeably faster than that of λ and γ.
TABLE 1

Bias and RMSE of the parameter estimates in simulation studies I and II.

             N = 1,000          N = 500            N = 500            N = 500
             J = 30, K = 5      J = 30, K = 5      J = 20, K = 5      J = 20, K = 3
Parameter    Bias     RMSE      Bias     RMSE      Bias     RMSE      Bias     RMSE
β            0.009    0.167     −0.002   0.198     −0.134   0.272     −0.020   0.282
δ            −0.001   0.274     −0.051   0.339     0.072    0.345     0.017    0.351
μβ           −0.111   0.203     −0.120   0.215     −0.296   0.374     −0.192   0.268
μδ           0.035    0.181     −0.017   0.196     0.236    0.356     0.191    0.313
λ1           0.078    0.137     0.063    0.179     0.066    0.191     −0.109   0.181
λ2           0.029    0.100     −0.133   0.193     −0.149   0.199     −0.030   0.130
λ3           0.052    0.104     −0.058   0.143     −0.127   0.202     −0.204   0.245
λ4           0.040    0.106     −0.069   0.145     −0.121   0.178     —        —
λ5           0.201    0.239     −0.089   0.188     −0.181   0.246     —        —
γ1           0.129    0.249     0.296    0.457     0.222    0.403     −0.179   0.451
γ2           0.034    0.189     0.065    0.268     −0.288   0.360     −0.156   0.545
γ3           −0.063   0.182     −0.002   0.252     0.359    0.527     −0.301   0.626
γ4           −0.027   0.180     −0.202   0.298     −0.139   0.276     —        —
γ5           0.039    0.206     0.153    0.326     −0.083   0.282     —        —
σβ²          −0.152   0.281     −0.051   0.282     −0.035   0.374     −0.353   0.429
σβδ          0.093    0.244     −0.027   0.280     −0.118   0.415     0.131    0.318
σδ²          −0.103   0.282     0.066    0.340     0.315    0.611     0.132    0.457
η0           −0.051   0.086     −0.014   0.097     0.053    0.112     −0.130   0.161
η1           −0.004   0.013     0.005    0.017     0.008    0.019     −0.013   0.022
σθhθd        −0.056   0.077     −0.046   0.091     0.001    0.083     0.057    0.105
σθd²         −0.001   0.081     0.008    0.094     0.018    0.101     −0.029   0.075
θh           0.071    0.625     −0.043   0.594     −0.044   0.612     −0.044   0.701
θd           −0.039   0.480     0.006    0.475     0.006    0.468     0.006    0.479

The boldfaced values indicate that much smaller Bias and RMSE are obtained from the model.

TABLE 2

ACCRs and PCCRs in simulation studies I and II.

             N = 1,000          N = 500            N = 500            N = 500
             J = 30, K = 5      J = 30, K = 5      J = 20, K = 5      J = 20, K = 3
ACCR 1       0.968              0.966              0.922              0.985
ACCR 2       0.980              0.976              0.966              0.993
ACCR 3       0.984              0.985              0.960              0.982
ACCR 4       0.986              0.977              0.984              —
ACCR 5       0.986              0.981              0.954              —
PCCR         0.910              0.898              0.811              0.961


FIGURE 2

The trace plots of PSRF values for simulation study I.


Simulation Study II

This simulation study was conducted to investigate parameter recovery for different numbers of attributes with a fixed 500 examinees and 20 items. The correlation between θ^h and θ^d was set at −0.3, and the dropping-out proportion was medium. The last four columns of Table 1 show the results of simulation study II. The RMSE of the item and person parameter estimates with K = 5 attributes are smaller than those with K = 3 attributes, whereas the attribute slope and intercept parameters are recovered more satisfactorily with K = 3 than with K = 5. The last two columns of Table 2 show the ACCRs and PCCRs for simulation study II. The ACCRs with K = 3 are higher than those with K = 5, improving from 0.957 to 0.987 on average. Moreover, the PCCRs are markedly higher when the number of attributes decreases: the PCCR with K = 5 is 0.811, and the PCCR with K = 3 is 0.961.

Simulation Study III

The purpose of this simulation study was to investigate parameter recovery under the NMAR model, the MAR model, and the HO-DINA model (which ignores the not-reached items) under different simulation conditions. The data were generated from the proposed model with the NMAR mechanism. A total of 500 examinees answered 30 items measuring 5 attributes. Three dropping-out proportions (i.e., 3.8% [low], 12% [medium], and 25% [high]) and three correlations between θ^h and θ^d (i.e., 0 [uncorrelated], −0.5 [medium], and −0.8 [high]) were manipulated, yielding 3 × 3 simulation conditions. Table 3 shows the bias and RMSE of the parameters of the three models with the low dropping-out proportion under different correlations between θ^h and θ^d. The parameter estimates from the three models are similar when the correlation between θ^h and θ^d is 0. When the correlation increases in magnitude, the bias and RMSE of η1, β, Σ_I, and γ in the NMAR model are much smaller than those in the MAR and HO-DINA models. Moreover, for the low dropping-out proportion, as the correlation increases, the bias of the person parameters changes very little across the three models, whereas the RMSE of the person parameters in the MAR and HO-DINA models increases significantly. As expected, the NMAR model yields more accurate parameter estimates than the other two models. Furthermore, the parameter estimates of the MAR and HO-DINA models are similar under all simulation conditions because θ^h and θ^d are uncorrelated in both models, which ignore the not-reached items. Table 4 shows the bias and RMSE of the parameters of the three models with the medium dropping-out proportion under different correlations between θ^h and θ^d. Similar parameter estimates are obtained from the three models when the correlation is 0.
When the correlation between θ^h and θ^d increases, both the bias and the RMSE of the person parameters are lower in the NMAR model than in the MAR and HO-DINA models, and the other results are similar to those with the low dropping-out proportion. Table 5 shows the bias and RMSE of the parameters of the three models with the high dropping-out proportion under different correlations between θ^h and θ^d; the differences between the models are even more pronounced there. Figure 3 shows the bias of the estimates of the item mean vector and the item covariance matrix elements in the NMAR and MAR models under different dropping-out proportions and correlations between θ^h and θ^d. The estimates are more accurate in the NMAR model than in the MAR model when the correlation is increased. Moreover, the bias of the NMAR model parameters approaches 0 as the correlation between θ^h and θ^d increases, whereas the bias of the MAR model parameters is significantly larger. Figure 4 shows the RMSE of the estimates of the item mean vector and the item covariance matrix elements in the NMAR and MAR models under different dropping-out proportions and correlations between θ^h and θ^d. The RMSE of the item mean vector in the NMAR model improves slightly over that in the MAR model, while the RMSE of the item covariance matrix elements shows significant improvements, and these estimates are precise when the correlation is high. Figure 5 shows the ACCRs and PCCRs under the nine simulation conditions; detailed results are provided in Supplementary Table 1. The ACCRs and PCCRs of the NMAR model improve significantly when the missing proportion or the correlation between θ^h and θ^d is high.
This indicates that the MAR model cannot recover the attribute patterns effectively when the missing data mechanism is indeed non-ignorable. Table 6 shows the model selection results. The differences in DIC and LPML are not obvious when the correlation between θ^h and θ^d is 0. Under all nine simulation conditions, the DICs of the NMAR model are smaller than those of the MAR model, and the LPMLs of the NMAR model are larger. Thus, the DIC and LPML indices are able to select the true model accurately.
TABLE 3

Bias and RMSE of parameter estimates of three models with low dropping-out proportion under different correlations between θ^h and θ^d in simulation study III.

                    ρ = 0                          ρ = −0.5                       ρ = −0.8
Parameter           NMAR     MAR      HO-DINA      NMAR     MAR      HO-DINA      NMAR     MAR      HO-DINA
η0       Bias       0.003    0.001    —            0.036    −0.001   —            0.015    −0.019   —
         RMSE       0.123    0.125    —            0.155    0.174    —            0.134    0.162    —
η1       Bias       0.005    0.004    —            −0.004   −0.109   —            −0.003   −0.107   —
         RMSE       0.055    0.055    —            0.065    0.137    —            0.059    0.131    —
β        Bias       −0.018   −0.016   −0.015       −0.003   0.124    0.121        −0.029   0.093    0.093
         RMSE       0.234    0.233    0.234        0.239    0.299    0.297        0.237    0.285    0.286
δ        Bias       0.039    0.047    0.045        0.022    −0.017   −0.015       0.063    0.021    0.021
         RMSE       0.336    0.345    0.346        0.341    0.369    0.369        0.346    0.369    0.369
μβ       Bias       −0.136   −0.117   −0.115       −0.120   0.006    0.004        −0.146   −0.022   −0.022
         RMSE       0.228    0.217    0.218        0.218    0.201    0.201        0.235    0.204    0.204
μδ       Bias       0.073    0.067    0.064        0.054    0.016    0.017        0.095    0.052    0.052
         RMSE       0.216    0.228    0.229        0.205    0.255    0.255        0.223    0.259    0.263
σβ²      Bias       −0.052   −0.053   −0.056       −0.067   0.074    0.075        −0.055   0.096    0.096
         RMSE       0.290    0.290    0.289        0.291    0.322    0.322        0.291    0.331    0.332
σβδ      Bias       0.008    −0.005   −0.003       0.051    −0.275   −0.276       0.028    −0.316   −0.314
         RMSE       0.286    0.299    0.296        0.281    0.446    0.445        0.289    0.479    0.478
σδ²      Bias       0.054    0.225    0.222        −0.021   0.656    0.657        0.004    0.703    0.700
         RMSE       0.358    0.447    0.443        0.333    0.812    0.811        0.355    0.856    0.855
λ1       Bias       0.039    0.017    0.017        0.098    0.298    0.285        0.056    0.220    0.224
         RMSE       0.168    0.172    0.173        0.193    0.370    0.363        0.181    0.331    0.330
λ2       Bias       −0.096   −0.111   −0.108       −0.103   −0.051   −0.055       −0.096   −0.049   −0.048
         RMSE       0.168    0.180    0.178        0.168    0.160    0.163        0.166    0.159    0.159
λ3       Bias       −0.051   −0.053   −0.052       −0.127   −0.003   −0.011       −0.091   0.030    0.033
         RMSE       0.147    0.149    0.150        0.188    0.163    0.162        0.167    0.169    0.168
λ4       Bias       −0.089   −0.084   −0.083       −0.068   0.023    0.018        −0.080   0.002    0.003
         RMSE       0.162    0.161    0.161        0.152    0.149    0.150        0.153    0.141    0.141
λ5       Bias       −0.102   −0.076   −0.081       −0.142   0.019    0.017        −0.135   0.006    0.007
         RMSE       0.194    0.186    0.190        0.214    0.185    0.187        0.210    0.181    0.180
γ1       Bias       0.122    0.173    0.179        0.178    0.294    0.263        0.277    0.501    0.520
         RMSE       0.346    0.371    0.374        0.387    0.472    0.433        0.451    0.698    0.710
γ2       Bias       −0.004   0.044    0.035        −0.117   0.246    0.245        −0.084   0.246    0.247
         RMSE       0.276    0.284    0.275        0.281    0.380    0.381        0.271    0.372    0.377
γ3       Bias       0.080    0.104    0.111        0.126    0.474    0.477        0.141    0.485    0.494
         RMSE       0.301    0.312    0.313        0.323    0.577    0.583        0.332    0.594    0.603
γ4       Bias       −0.103   −0.077   −0.078       −0.114   0.025    0.021        −0.178   −0.037   −0.035
         RMSE       0.267    0.263    0.264        0.274    0.252    0.256        0.287    0.235    0.233
γ5       Bias       −0.052   0.005    −0.006       −0.075   0.114    0.115        −0.039   0.137    0.132
         RMSE       0.289    0.286    0.290        0.284    0.307    0.310        0.280    0.313    0.309
θd       Bias       −0.002   −0.002   —            0.011    0.011    —            0.017    0.018    —
         RMSE       0.499    0.492    —            0.454    0.667    —            0.377    0.668    —
θh       Bias       −0.044   −0.046   −0.046       −0.044   −0.044   −0.047       −0.044   −0.046   −0.045
         RMSE       0.581    0.581    0.580        0.582    0.591    0.592        0.578    0.591    0.591
σθd²     Bias       −0.002   0.007    —            0.013    1.022    —            0.015    1.023    —
         RMSE       0.089    0.088    —            0.095    1.160    —            0.081    1.097    —
σθhθd    Bias       0.011    —        —            0.015    —        —            0.010    —        —
         RMSE       0.131    —        —            0.113    —        —            0.082    —        —

NMAR means not missing at random model, MAR means missing at random model, HO-DINA means higher-order DINA model. The boldfaced values indicate that much smaller Bias and RMSE are obtained from the model.
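The Bias and RMSE entries in Tables 3–5 are the usual Monte Carlo summaries of a point estimate across simulation replications. For reference, a minimal sketch (array names are assumed):

```python
import numpy as np

def bias_rmse(estimates, truth):
    """Monte Carlo bias and RMSE of one parameter across R replications.

    estimates: array (R,) of point estimates from R simulated datasets.
    truth: the generating (true) value of the parameter.
    """
    estimates = np.asarray(estimates, dtype=float)
    err = estimates - truth
    bias = float(np.mean(err))                 # average signed error
    rmse = float(np.sqrt(np.mean(err ** 2)))   # root mean squared error
    return bias, rmse
```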

TABLE 4

Bias and RMSE of parameter estimates of three models with medium dropping-out proportion under different correlations between θh and θd in simulation study III.

| Parameter | Statistic | ρ=0 NMAR | ρ=0 MAR | ρ=0 HO-DINA | ρ=−0.5 NMAR | ρ=−0.5 MAR | ρ=−0.5 HO-DINA | ρ=−0.8 NMAR | ρ=−0.8 MAR | ρ=−0.8 HO-DINA |
|---|---|---|---|---|---|---|---|---|---|---|
| η0 | Bias | 0.014 | 0.009 | — | −0.006 | −0.159 | — | −0.033 | −0.181 | — |
| η0 | RMSE | 0.133 | 0.131 | — | 0.131 | 0.216 | — | 0.123 | 0.226 | — |
| η1 | Bias | 0.001 | 0.001 | — | −0.001 | −0.039 | — | −0.011 | −0.048 | — |
| η1 | RMSE | −0.002 | −0.003 | — | −0.001 | −0.024 | — | −0.001 | −0.019 | — |
| β | Bias | −0.022 | −0.019 | −0.019 | −0.028 | 0.114 | 0.113 | −0.021 | 0.119 | 0.118 |
| β | RMSE | 0.249 | 0.248 | 0.249 | 0.265 | 0.323 | 0.322 | 0.249 | 0.309 | 0.309 |
| δ | Bias | 0.071 | 0.082 | 0.081 | 0.042 | −0.002 | −0.001 | 0.052 | 0.005 | 0.007 |
| δ | RMSE | 0.365 | 0.378 | 0.377 | 0.360 | 0.401 | 0.400 | 0.357 | 0.389 | 0.391 |
| μβ | Bias | −0.137 | −0.121 | −0.120 | −0.146 | −0.001 | 0.001 | −0.134 | 0.008 | 0.004 |
| μβ | RMSE | 0.229 | 0.226 | 0.223 | 0.238 | 0.206 | 0.204 | 0.229 | 0.207 | 0.202 |
| μδ | Bias | 0.102 | 0.103 | 0.102 | 0.077 | 0.029 | 0.026 | 0.080 | 0.032 | 0.037 |
| μδ | RMSE | 0.232 | 0.250 | 0.247 | 0.224 | 0.266 | 0.264 | 0.226 | 0.269 | 0.268 |
| σβ² | Bias | −0.031 | −0.031 | −0.033 | −0.032 | 0.105 | 0.108 | −0.046 | 0.095 | 0.095 |
| σβ² | RMSE | 0.308 | 0.307 | 0.306 | 0.299 | 0.341 | 0.342 | 0.299 | 0.338 | 0.338 |
| σβδ | Bias | −0.015 | −0.024 | −0.023 | −0.015 | −0.344 | −0.346 | 0.029 | −0.304 | −0.304 |
| σβδ | RMSE | 0.310 | 0.319 | 0.319 | 0.296 | 0.504 | 0.505 | 0.286 | 0.471 | 0.471 |
| σδ² | Bias | 0.107 | 0.277 | 0.274 | 0.075 | 0.764 | 0.765 | 0.016 | 0.710 | 0.712 |
| σδ² | RMSE | 0.393 | 0.490 | 0.488 | 0.361 | 0.919 | 0.919 | 0.340 | 0.864 | 0.866 |
| λ1 | Bias | 0.047 | 0.026 | 0.028 | 0.109 | 0.349 | 0.344 | 0.070 | 0.267 | 0.268 |
| λ1 | RMSE | 0.170 | 0.173 | 0.172 | 0.195 | 0.414 | 0.410 | 0.187 | 0.375 | 0.372 |
| λ2 | Bias | −0.104 | −0.116 | −0.114 | −0.106 | −0.055 | −0.052 | −0.110 | −0.051 | −0.048 |
| λ2 | RMSE | 0.174 | 0.184 | 0.182 | 0.171 | 0.163 | 0.163 | 0.173 | 0.158 | 0.157 |
| λ3 | Bias | −0.044 | −0.047 | −0.044 | −0.112 | 0.025 | 0.032 | −0.097 | 0.027 | 0.026 |
| λ3 | RMSE | 0.146 | 0.148 | 0.150 | 0.180 | 0.171 | 0.180 | 0.168 | 0.165 | 0.161 |
| λ4 | Bias | −0.091 | −0.086 | −0.083 | −0.064 | 0.034 | 0.034 | −0.081 | 0.011 | 0.009 |
| λ4 | RMSE | 0.165 | 0.162 | 0.162 | 0.152 | 0.154 | 0.155 | 0.156 | 0.144 | 0.144 |
| λ5 | Bias | −0.107 | −0.083 | −0.082 | −0.153 | 0.003 | 0.005 | −0.138 | 0.033 | 0.037 |
| λ5 | RMSE | 0.197 | 0.194 | 0.192 | 0.221 | 0.182 | 0.181 | 0.214 | 0.190 | 0.191 |
| γ1 | Bias | 0.119 | 0.183 | 0.168 | 0.113 | 0.301 | 0.285 | 0.236 | 0.723 | 0.712 |
| γ1 | RMSE | 0.170 | 0.173 | 0.172 | 0.195 | 0.414 | 0.410 | 0.187 | 0.375 | 0.372 |
| γ2 | Bias | −0.006 | 0.032 | 0.029 | −0.110 | 0.267 | 0.269 | −0.098 | 0.233 | 0.232 |
| γ2 | RMSE | 0.268 | 0.277 | 0.271 | 0.280 | 0.393 | 0.398 | 0.274 | 0.365 | 0.367 |
| γ3 | Bias | 0.096 | 0.104 | 0.124 | 0.127 | 0.504 | 0.516 | 0.127 | 0.473 | 0.472 |
| γ3 | RMSE | 0.313 | 0.307 | 0.322 | 0.332 | 0.611 | 0.632 | 0.323 | 0.580 | 0.578 |
| γ4 | Bias | −0.122 | −0.093 | −0.089 | −0.091 | 0.056 | 0.046 | −0.176 | −0.046 | −0.054 |
| γ4 | RMSE | 0.277 | 0.269 | 0.264 | 0.267 | 0.265 | 0.263 | 0.295 | 0.240 | 0.237 |
| γ5 | Bias | −0.059 | −0.006 | −0.011 | −0.079 | 0.087 | 0.084 | −0.045 | 0.152 | 0.161 |
| γ5 | RMSE | 0.284 | 0.298 | 0.285 | 0.285 | 0.293 | 0.289 | 0.284 | 0.325 | 0.333 |
| θd | Bias | −0.002 | −0.002 | — | 0.011 | 0.013 | — | 0.017 | 0.019 | — |
| θd | RMSE | 0.484 | 0.483 | — | 0.443 | 0.577 | — | 0.379 | 0.586 | — |
| θh | Bias | −0.044 | −0.045 | −0.045 | −0.044 | −0.047 | −0.045 | −0.044 | −0.045 | −0.045 |
| θh | RMSE | 0.585 | 0.583 | 0.583 | 0.581 | 0.593 | 0.592 | 0.574 | 0.592 | 0.593 |
| σθd² | Bias | 0.008 | 0.011 | — | 0.013 | 0.598 | — | 0.051 | 0.648 | — |
| σθd² | RMSE | 0.001 | 0.001 | — | 0.017 | 0.494 | — | 0.029 | 0.411 | — |
| σθhθd | Bias | −0.003 | — | — | 0.008 | — | — | 0.001 | — | — |
| σθhθd | RMSE | 0.023 | — | — | 0.005 | — | — | 0.008 | — | — |

Dashes (—) mark parameters that a model does not estimate. In the published table, boldfaced values indicated the model achieving much smaller Bias and RMSE.

TABLE 5

Bias and RMSE of parameter estimates of three models with high dropping-out proportion under different correlations between θh and θd in simulation study III.

| Parameter | Statistic | ρ=0 NMAR | ρ=0 MAR | ρ=0 HO-DINA | ρ=−0.5 NMAR | ρ=−0.5 MAR | ρ=−0.5 HO-DINA | ρ=−0.8 NMAR | ρ=−0.8 MAR | ρ=−0.8 HO-DINA |
|---|---|---|---|---|---|---|---|---|---|---|
| η0 | Bias | −0.013 | −0.019 | — | 0.016 | −0.221 | — | 0.014 | −0.174 | — |
| η0 | RMSE | 0.134 | 0.132 | — | 0.146 | 0.271 | — | 0.130 | 0.237 | — |
| η1 | Bias | −0.002 | −0.003 | — | −0.001 | −0.024 | — | −0.001 | −0.019 | — |
| η1 | RMSE | 0.012 | 0.011 | — | 0.013 | 0.028 | — | 0.011 | 0.024 | — |
| β | Bias | −0.025 | −0.021 | −0.021 | −0.027 | 0.175 | 0.177 | −0.010 | 0.187 | 0.185 |
| β | RMSE | 0.275 | 0.274 | 0.273 | 0.284 | 0.383 | 0.384 | 0.267 | 0.373 | 0.371 |
| δ | Bias | 0.058 | 0.071 | 0.069 | 0.060 | −0.001 | −0.003 | 0.043 | −0.021 | −0.019 |
| δ | RMSE | 0.392 | 0.405 | 0.404 | 0.392 | 0.441 | 0.443 | 0.378 | 0.427 | 0.425 |
| μβ | Bias | −0.142 | −0.120 | −0.124 | −0.144 | 0.056 | 0.061 | −0.126 | 0.071 | 0.067 |
| μβ | RMSE | 0.235 | 0.225 | 0.228 | 0.240 | 0.217 | 0.218 | 0.227 | 0.216 | 0.215 |
| μδ | Bias | 0.091 | 0.089 | 0.092 | 0.093 | 0.035 | 0.028 | 0.075 | 0.011 | 0.015 |
| μδ | RMSE | 0.234 | 0.252 | 0.253 | 0.236 | 0.271 | 0.272 | 0.219 | 0.263 | 0.216 |
| σβ² | Bias | −0.032 | −0.033 | −0.032 | −0.012 | 0.084 | 0.086 | −0.025 | 0.084 | 0.085 |
| σβ² | RMSE | 0.302 | 0.302 | 0.302 | 0.313 | 0.339 | 0.339 | 0.304 | 0.332 | 0.334 |
| σβδ | Bias | −0.004 | −0.013 | −0.017 | −0.047 | −0.336 | −0.339 | −0.004 | −0.313 | −0.314 |
| σβδ | RMSE | 0.302 | 0.316 | 0.314 | 0.322 | 0.505 | 0.507 | 0.309 | 0.485 | 0.486 |
| σδ² | Bias | 0.083 | 0.271 | 0.271 | 0.118 | 0.802 | 0.806 | 0.037 | 0.738 | 0.740 |
| σδ² | RMSE | 0.383 | 0.491 | 0.489 | 0.405 | 0.960 | 0.965 | 0.378 | 0.898 | 0.901 |
| λ1 | Bias | 0.047 | 0.027 | 0.021 | 0.110 | 0.490 | 0.502 | 0.089 | 0.474 | 0.467 |
| λ1 | RMSE | 0.182 | 0.181 | 0.184 | 0.201 | 0.553 | 0.566 | 0.198 | 0.552 | 0.544 |
| λ2 | Bias | −0.110 | −0.120 | −0.122 | −0.099 | 0.013 | 0.015 | −0.102 | 0.006 | 0.003 |
| λ2 | RMSE | 0.182 | 0.190 | 0.191 | 0.170 | 0.173 | 0.174 | 0.174 | 0.164 | 0.163 |
| λ3 | Bias | −0.055 | −0.055 | −0.055 | −0.116 | 0.102 | 0.104 | 0.091 | 0.152 | 0.144 |
| λ3 | RMSE | 0.156 | 0.158 | 0.156 | 0.186 | 0.206 | 0.207 | 0.171 | 0.237 | 0.232 |
| λ4 | Bias | −0.096 | −0.089 | −0.092 | −0.074 | 0.098 | 0.098 | −0.076 | 0.085 | 0.084 |
| λ4 | RMSE | 0.170 | 0.167 | 0.168 | 0.160 | 0.188 | 0.188 | 0.159 | 0.178 | 0.177 |
| λ5 | Bias | −0.077 | −0.045 | −0.051 | −0.147 | 0.141 | 0.147 | −0.140 | 0.171 | 0.164 |
| λ5 | RMSE | 0.196 | 0.197 | 0.195 | 0.223 | 0.244 | 0.247 | 0.225 | 0.269 | 0.263 |
| γ1 | Bias | −0.133 | 0.174 | 0.186 | 0.147 | 0.672 | 0.720 | 0.251 | 1.029 | 0.995 |
| γ1 | RMSE | 0.374 | 0.390 | 0.400 | 0.375 | 0.892 | 0.952 | 0.439 | 1.294 | 1.249 |
| γ2 | Bias | −0.020 | 0.059 | 0.058 | −0.102 | 0.334 | 0.340 | −0.058 | 0.341 | 0.333 |
| γ2 | RMSE | 0.287 | 0.302 | 0.294 | 0.285 | 0.458 | 0.461 | 0.271 | 0.452 | 0.444 |
| γ3 | Bias | −0.091 | 0.117 | 0.117 | 0.111 | 0.537 | 0.532 | 0.143 | 0.591 | 0.584 |
| γ3 | RMSE | 0.328 | 0.330 | 0.328 | 0.332 | 0.648 | 0.644 | 0.341 | 0.700 | 0.693 |
| γ4 | Bias | −0.130 | −0.104 | −0.102 | −0.122 | 0.055 | 0.053 | −0.146 | 0.026 | 0.026 |
| γ4 | RMSE | 0.289 | 0.280 | 0.277 | 0.290 | 0.271 | 0.270 | 0.294 | 0.265 | 0.261 |
| γ5 | Bias | −0.014 | 0.046 | 0.039 | −0.078 | 0.147 | 0.152 | −0.050 | 0.226 | 0.213 |
| γ5 | RMSE | 0.304 | 0.321 | 0.314 | 0.289 | 0.330 | 0.334 | 0.296 | 0.386 | 0.376 |
| θd | Bias | −0.002 | −0.002 | — | 0.011 | 0.016 | — | 0.018 | 0.021 | — |
| θd | RMSE | 0.479 | 0.475 | — | 0.442 | 0.549 | — | 0.374 | 0.526 | — |
| θh | Bias | −0.044 | −0.045 | −0.046 | −0.043 | −0.046 | −0.045 | −0.044 | −0.045 | −0.046 |
| θh | RMSE | 0.595 | 0.594 | 0.594 | 0.590 | 0.607 | 0.608 | 0.584 | 0.607 | 0.607 |
| σθd² | Bias | 0.001 | 0.001 | — | 0.017 | 0.494 | — | 0.029 | 0.411 | — |
| σθd² | RMSE | 0.088 | 0.085 | — | 0.102 | 0.555 | — | 0.097 | 0.463 | — |
| σθhθd | Bias | 0.023 | — | — | 0.005 | — | — | 0.008 | — | — |
| σθhθd | RMSE | 0.102 | — | — | 0.092 | — | — | 0.085 | — | — |

Dashes (—) mark parameters that a model does not estimate. In the published table, boldfaced values indicated the model achieving much smaller Bias and RMSE.

FIGURE 3

Bias of parameter estimates in the mean item vector and the item covariance matrix elements under different dropping-out proportions and correlations between θh and θd in simulation study III. Note that Bias_NMAR is the bias of parameter estimates in the NMAR model, and Bias_MAR is the bias of parameter estimates in the MAR model.

FIGURE 4

RMSE of parameter estimates in the mean item vector and the item covariance matrix elements under different dropping-out proportions and correlations between θh and θd in simulation study III. Note that RMSE_NMAR is the RMSE of parameter estimates in the NMAR model, and RMSE_MAR is the RMSE of parameter estimates in the MAR model.

FIGURE 5

The ACCRs (attribute correct classification rates) and PCCRs (pattern correct classification rates) of the NMAR and MAR models under different correlations between θh and θd and different dropping-out proportions in simulation study III.
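The ACCR and PCCR summarized in Figure 5 are standard classification-rate measures, obtained by comparing estimated attribute profiles with the generating ones. A minimal sketch (array names are assumed):

```python
import numpy as np

def accr_pccr(alpha_true, alpha_hat):
    """Attribute correct classification rate (per attribute) and pattern
    correct classification rate (whole profile) over N examinees.

    alpha_true, alpha_hat: (N, K) 0/1 attribute matrices (true vs. estimated).
    """
    match = np.asarray(alpha_true) == np.asarray(alpha_hat)
    accr = match.mean(axis=0)                 # ACCR_k for each attribute k
    pccr = float(match.all(axis=1).mean())    # PCCR: entire pattern must match
    return accr, pccr
```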

TABLE 6

DICs and LPMLs of the NMAR and MAR models under different correlations between θh and θd and different dropping-out proportions in simulation study III.

| ρ | Index | Low: NMAR | Low: MAR | Medium: NMAR | Medium: MAR | High: NMAR | High: MAR |
|---|---|---|---|---|---|---|---|
| ρ=0 | DIC | 12139.3 | 12146.3 | 12283.9 | 12290.6 | 12084.8 | 12090.3 |
| ρ=0 | LPML | −6348.4 | −6352.7 | −6465.8 | −6468.3 | −6532.1 | −6539.9 |
| ρ=−0.5 | DIC | 12152.6 | 12541.4 | 12225.5 | 12653.3 | 12113.8 | 12570.5 |
| ρ=−0.5 | LPML | −6354.7 | −6592.1 | −6431.9 | −6660.7 | −6539.8 | −6747.6 |
| ρ=−0.8 | DIC | 12132.3 | 12517.4 | 12215.6 | 12672.1 | 12029.8 | 12461.9 |
| ρ=−0.8 | LPML | −6333.8 | −6579.2 | −6412.4 | −6663.1 | −6476.2 | −6681.6 |

Columns are grouped by low, medium, and high dropping-out proportion.

Real Data Analysis

This study analyzed a dataset from the computer-based PISA 2018 (OECD, 2021) mathematics cognitive test with nine items administered in Albania, which was also used in the study by Shan and Wang (2020). According to the PISA 2018 (OECD, 2021) mathematics assessment framework, four attributes belonging to mathematical content knowledge were assessed: change and relationship (α1), quantity (α2), space and shape (α3), and uncertainty and data (α4). Item responses were coded 0 (no credit), 1 (full credit), 6 (not reached), 7 (not applicable), 8 (invalid), and 9 (nonresponse). After removing examinees with codes 7 (not applicable) and 8 (invalid), 798 examinees remained. In addition, 224 examinees with code 9 were removed because this study focused on dropping-out missingness, leaving a final sample of 574. The overall not-reached proportion was about 2%, and the not-reached proportions at the item level ranged from 0.7% to 3.3%. The item IDs and the Q matrix are presented in Table 7.
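The screening steps above can be sketched as follows; this is an illustrative reimplementation with an assumed array layout (rows = examinees, columns = items), not the authors' code:

```python
import numpy as np

# PISA response codes: 0 no credit, 1 full credit, 6 not reached,
# 7 not applicable, 8 invalid, 9 nonresponse (omitted item).
def screen_pisa(R):
    """Drop examinees showing any code 7, 8, or 9, then split the remaining
    matrix into a scored 0/1 matrix (NaN where not reached) and the
    0/1 not-reached indicator matrix used by the missingness model."""
    R = np.asarray(R)
    keep = ~np.isin(R, [7, 8, 9]).any(axis=1)   # examinees free of codes 7/8/9
    kept = R[keep]
    not_reached = (kept == 6).astype(int)       # missing indicator d_ij
    scored = kept.astype(float)
    scored[kept == 6] = np.nan                  # code 6 -> missing response
    return scored, not_reached
```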
TABLE 7

The Q matrix in the real data.

| Attribute | CM033Q01 | CM474Q01 | CM155Q01 | CM155Q04 | CM411Q01 | CM411Q02 | CM803Q01 | CM442Q02 | CM034Q01 |
|---|---|---|---|---|---|---|---|---|---|
| α1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
| α2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| α3 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| α4 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
The DIC and LPML of the NMAR model in the real data were 5,760.28 and −3,040.03, respectively, and the DIC and LPML of the MAR model were 6,521.21 and −3,213.94, respectively. Both model fit indices indicated that the NMAR model fits the real data better than the MAR model, so the NMAR model was adopted for this dataset.

Tables 8 and 9 show the estimated values and standard deviations of the item, person, and attribute parameters. The correlation coefficient of the person parameters is negative (i.e., −0.516), which indicates that examinees with higher abilities are less likely to drop out of the test. The estimated attribute slope parameters are positive, which implies that each knowledge attribute is more likely to be mastered as the higher-order ability θh increases. The item mean parameter μβ is estimated to be −1.749, corresponding to a mean guessing probability of approximately 0.15. In addition, only β for item CM033Q01 is positive while the β values of all other items are negative, which implies that the guessing probability of CM033Q01 is higher than 0.5 and that of every other item is lower than 0.5. All δ values are positive, which satisfies g < 1 − s, as expected. Supplementary Figure 1 shows the proportions of attribute patterns for examinees with not-reached items; unsurprisingly, the most prevalent pattern among these examinees is (0000).
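The link between β, δ and the guessing/slipping probabilities discussed above can be made explicit. The logit reparameterization below (guessing g = logistic(β), non-slipping 1 − s = logistic(β + δ)) is the form consistent with the quantities quoted in the text, e.g., a mean β of −1.749 yields a mean guessing probability near 0.15, and a positive β implies guessing above 0.5:

```python
import math

def dina_probs(beta, delta):
    """Guessing and non-slipping probabilities under the logit
    reparameterization assumed here: logit(g_j) = beta_j and
    logit(1 - s_j) = beta_j + delta_j, so delta_j > 0 guarantees
    g_j < 1 - s_j."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    return logistic(beta), logistic(beta + delta)
```

With the Table 9 estimates for CM033Q01 (β = 0.350, δ = 2.433), the item's guessing probability comes out above 0.5, matching the interpretation in the text.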
TABLE 8

Estimates and standard errors of the parameters for the real data.

| Statistic | σθhθd | σθd² | μβ | μδ | σβ² | σβδ | σδ² | λ1 | λ2 | λ3 | λ4 | γ1 | γ2 | γ3 | γ4 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Est. | −0.224 | 0.159 | −1.749 | 2.380 | 3.058 | −0.887 | 1.257 | 1.505 | 2.081 | 1.851 | 2.184 | 3.957 | 3.645 | 3.921 | 3.585 |
| SD | 0.149 | 0.040 | 0.379 | 0.292 | 2.108 | 1.241 | 0.979 | 0.399 | 0.427 | 0.443 | 0.382 | 0.441 | 0.432 | 0.446 | 0.482 |

Est. is the estimated value, SD is the standard deviation.

TABLE 9

Estimates and standard errors of the item parameters for the real data.

| Parameter | Statistic | CM033Q01 | CM474Q01 | CM155Q01 | CM155Q04 | CM411Q01 | CM411Q02 | CM803Q01 | CM442Q02 | CM034Q01 |
|---|---|---|---|---|---|---|---|---|---|---|
| βj | Est. | 0.350 | −0.251 | −0.239 | −1.213 | −1.522 | −1.296 | −4.061 | −4.325 | −2.424 |
| βj | SD | 0.132 | 0.125 | 0.152 | 0.167 | 0.223 | 0.151 | 0.687 | 0.776 | 0.250 |
| δj | Est. | 2.433 | 1.418 | 3.265 | 1.559 | 2.541 | 0.781 | 3.485 | 3.218 | 2.326 |
| δj | SD | 0.520 | 0.225 | 0.561 | 0.280 | 0.396 | 0.323 | 0.755 | 0.801 | 0.371 |

Est. is the estimated value, SD is the standard deviation.


Conclusion

Not-reached items occur frequently in cognitive diagnosis assessments, and modeling this missing data can help researchers understand examinees' attributes, skills, and knowledge structures. Earlier studies dealing with item nonresponses in cognitive diagnosis models used imputation approaches, which may lead to biased parameter estimates. Shan and Wang (2020) introduced latent missing propensities of examinees into a cognitive diagnosis model governed by latent categorical variables. However, their model did not distinguish among types of item nonresponses, which could result in inaccurate inferences regarding cognitive attributes and patterns. In this study, a missing data model for not-reached items in cognitive diagnosis assessments was proposed. A DINA model was used as the response model, and a one-parameter logistic model (1PLM) was used as the missing-indicator model. The two models were connected by two bivariate normal distributions, one for the person parameters and one for the item parameters. This new model was able to provide more fine-grained attribute profiles or knowledge structures as diagnostic feedback for examinees. Simulation studies were conducted to evaluate the performance of the MCMC algorithm for the proposed model. The results showed that not-reached items provide useful information for further understanding examinees' knowledge structures. Additionally, the HO-DINA component explained examinees' cognitive processes, so precise parameter estimates were obtained from the proposed NMAR model. Comparing parameter recovery under the two missing-data mechanisms revealed that the bias and RMSE of the person parameters decreased markedly under the proposed NMAR model when the missing proportion and the correlation between the ability and dropout-propensity parameters were high. Moreover, considerable differences in the ACCRs and PCCRs between the NMAR and MAR models were found.
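As an illustration of the generating structure just described (a higher-order attribute model, DINA responses, and a sequential dropout process tied to a propensity correlated with ability), the sketch below simulates data under such an NMAR setup. Every numeric setting (Q, ρ, λ, β, δ, and the per-item dropout intercepts b) is an assumed demo value, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)
logistic = lambda x: 1.0 / (1.0 + np.exp(-x))

N, J, K = 200, 9, 4                                   # examinees, items, attributes
Q = rng.integers(0, 2, size=(J, K))                   # hypothetical Q matrix

# Correlated person parameters: ability theta_h and dropout propensity theta_d
rho = -0.5
theta = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=N)
theta_h, theta_d = theta[:, 0], theta[:, 1]

# Higher-order attribute model: P(alpha_k = 1) = logistic(lam_k * theta_h - gam_k)
lam, gam = np.full(K, 1.5), np.zeros(K)
alpha = (rng.random((N, K)) < logistic(theta_h[:, None] * lam - gam)).astype(int)

# DINA: eta_ij = 1 iff examinee i masters every attribute item j requires;
# response probability on the logit scale is beta_j + delta_j * eta_ij
eta = (alpha[:, None, :] >= Q[None, :, :]).all(axis=2).astype(int)
beta, delta = np.full(J, -1.5), np.full(J, 2.5)
Y = (rng.random((N, J)) < logistic(beta + delta * eta)).astype(int)

# Sequential dropout: at each item the examinee drops out with probability
# logistic(theta_d - b_j); all subsequent items are then not reached
b = np.full(J, 2.0)
D = np.zeros((N, J), dtype=int)                       # not-reached indicators
for i in range(N):
    for j in range(J):
        if rng.random() < logistic(theta_d[i] - b[j]):
            D[i, j:] = 1                              # not reached from here on
            break
Y = np.where(D == 1, -1, Y)                           # -1 marks not-reached cells
```

The monotone pattern of D (once an examinee drops out, every later item is missing) is what makes the nonresponses "not reached" rather than scattered omissions.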
With regard to model selection, the proposed NMAR model fitted the data better than the MAR model when the missing data mechanism was non-ignorable, and it was successfully applied to the PISA 2018 computer-based mathematics data. Several limitations warrant mentioning, alongside future research avenues. First, this study modeled only not-reached items; examinees may also skip items within a test, which is another type of missing data that needs further exploration. Second, missing data mechanisms in cognitive assessments may depend on individual factors such as sex, culture, and race; different training and problem-solving strategies, as well as school locations, may also affect the pattern of nonresponses. Future studies can extend our model to account for these factors. Third, future studies could incorporate additional sources of process data, such as response times, to explore the missing data mechanisms.

Data Availability Statement

Publicly available datasets were analyzed in this study. The data can be found here: https://www.oecd.org/PISA/.

Author Contributions

LL completed the writing of the article. JL provided the original thoughts. LL and JL provided key technical support. JZ, JL, and NS completed the article revisions. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Similar Articles (13 in total)

1.  Modeling missing data in knowledge space theory.

Authors:  Debora de Chiusole; Luca Stefanutti; Pasquale Anselmi; Egidio Robusto
Journal:  Psychol Methods       Date:  2015-12

2.  Modelling non-ignorable missing-data mechanisms with item response theory models.

Authors:  Rebecca Holman; Cees A W Glas
Journal:  Br J Math Stat Psychol       Date:  2005-05       Impact factor: 3.380

3.  Measurement of psychological disorders using cognitive diagnosis models.

Authors:  Jonathan L Templin; Robert A Henson
Journal:  Psychol Methods       Date:  2006-09

4.  Identifiability of Diagnostic Classification Models.

Authors:  Gongjun Xu; Stephanie Zhang
Journal:  Psychometrika       Date:  2015-07-09       Impact factor: 2.500

5.  On the Link between Cognitive Diagnostic Models and Knowledge Space Theory.

Authors:  Jürgen Heller; Luca Stefanutti; Pasquale Anselmi; Egidio Robusto
Journal:  Psychometrika       Date:  2015-04-03       Impact factor: 2.500

6.  Using Response Times to Model Not-Reached Items due to Time Limits.

Authors:  Steffi Pohl; Esther Ulitzsch; Matthias von Davier
Journal:  Psychometrika       Date:  2019-05-03       Impact factor: 2.500

7.  Modeling Omitted and Not-Reached Items in IRT Models.

Authors:  Norman Rose; Matthias von Davier; Benjamin Nagengast
Journal:  Psychometrika       Date:  2016-11-15       Impact factor: 2.500

8.  An Upgrading Procedure for Adaptive Assessment of Knowledge.

Authors:  Pasquale Anselmi; Egidio Robusto; Luca Stefanutti; Debora de Chiusole
Journal:  Psychometrika       Date:  2016-04-12       Impact factor: 2.500

9.  A Higher-Order Cognitive Diagnosis Model with Ordinal Attributes for Dichotomous Response Data.

Authors:  Wenchao Ma
Journal:  Multivariate Behav Res       Date:  2021-01-12       Impact factor: 5.923

10.  Cognitive Diagnosis Modeling Incorporating Item-Level Missing Data Mechanism.

Authors:  Na Shan; Xiaofei Wang
Journal:  Front Psychol       Date:  2020-11-30
