Zhaoyuan Zhang1, Jiwei Zhang2, Jing Lu1, Jian Tao1.
Abstract
With the increasing demand for precise test feedback, cognitive diagnosis models have attracted growing attention for finely classifying whether students have mastered particular skills. The purpose of this paper is to propose a highly effective Pólya-Gamma Gibbs sampling algorithm (Polson et al., 2013) based on auxiliary variables to estimate the deterministic inputs, noisy "and" gate (DINA) model, which has been widely used in cognitive diagnosis research. The new algorithm avoids the Metropolis-Hastings algorithm's tedious adjustment of tuning parameters to achieve an appropriate acceptance probability. Four simulation studies are conducted, and a detailed analysis of fraction subtraction data is carried out to further illustrate the proposed methodology.
Keywords: Bayesian estimation; DINA model; Metropolis-Hastings algorithm; Pólya-Gamma Gibbs sampling algorithm; cognitive diagnosis models; potential scale reduction factor
Year: 2020 PMID: 32210894 PMCID: PMC7076190 DOI: 10.3389/fpsyg.2020.00384
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
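The Pólya-Gamma augmentation the abstract refers to replaces the logistic likelihood with a conditionally Gaussian one, so every conditional in the Gibbs sampler is a standard distribution and no Metropolis-Hastings tuning is needed. A minimal sketch for a single logistic intercept, not the paper's full DINA sampler; the truncated sum-of-gammas sampler, the prior variance `tau2`, and all data values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pg1(c, n_terms=100):
    """Approximate PG(1, c) draws via the truncated sum-of-gammas
    representation (Polson et al., 2013):
      omega = (1 / (2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + (c / (2*pi))^2),
    with g_k ~ Gamma(1, 1).  Exact samplers exist; this is a sketch."""
    c = np.atleast_1d(np.asarray(c, dtype=float))
    k = np.arange(1, n_terms + 1)
    g = rng.gamma(1.0, 1.0, size=(c.size, n_terms))
    denom = (k - 0.5) ** 2 + (c[:, None] / (2 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

def gibbs_beta(y, beta, tau2=100.0):
    """One Gibbs update for a logistic intercept beta ~ N(0, tau2),
    given y_i ~ Bernoulli(logit^{-1}(beta)): augment with omega_i,
    then draw beta from its Gaussian full conditional."""
    omega = sample_pg1(np.full(y.size, abs(beta)))
    prec = omega.sum() + 1.0 / tau2          # posterior precision
    mean = (y - 0.5).sum() / prec            # kappa_i = y_i - 1/2
    return rng.normal(mean, 1.0 / np.sqrt(prec))

y = rng.binomial(1, 0.73, size=400)          # true beta = logit(0.73), about 1.0
beta, draws = 0.0, []
for t in range(500):
    beta = gibbs_beta(y, beta)
    if t >= 100:                             # discard burn-in
        draws.append(beta)
print(round(float(np.mean(draws)), 2))       # close to the logit of the sample mean
```

Every step draws from a known distribution (Pólya-Gamma, then Gaussian), which is the sense in which the proposed sampler sidesteps acceptance-rate tuning.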
The Q matrix design in simulation study 1.
| Item | Skill 1 | Skill 2 | Skill 3 | Skill 4 | Skill 5 | Item | Skill 1 | Skill 2 | Skill 3 | Skill 4 | Skill 5 |
| 1 | 1 | 0 | 0 | 0 | 0 | 16 | 0 | 1 | 0 | 1 | 0 |
| 2 | 0 | 1 | 0 | 0 | 0 | 17 | 0 | 1 | 0 | 0 | 1 |
| 3 | 0 | 0 | 1 | 0 | 0 | 18 | 0 | 0 | 1 | 1 | 0 |
| 4 | 0 | 0 | 0 | 1 | 0 | 19 | 0 | 0 | 1 | 0 | 1 |
| 5 | 0 | 0 | 0 | 0 | 1 | 20 | 0 | 0 | 0 | 1 | 1 |
| 6 | 1 | 0 | 0 | 0 | 0 | 21 | 1 | 1 | 1 | 0 | 0 |
| 7 | 0 | 1 | 0 | 0 | 0 | 22 | 1 | 1 | 0 | 1 | 0 |
| 8 | 0 | 0 | 1 | 0 | 0 | 23 | 1 | 1 | 0 | 0 | 1 |
| 9 | 0 | 0 | 0 | 1 | 0 | 24 | 1 | 0 | 1 | 1 | 0 |
| 10 | 0 | 0 | 0 | 0 | 1 | 25 | 1 | 0 | 1 | 0 | 1 |
| 11 | 1 | 1 | 0 | 0 | 0 | 26 | 1 | 0 | 0 | 1 | 1 |
| 12 | 1 | 0 | 1 | 0 | 0 | 27 | 0 | 1 | 1 | 1 | 0 |
| 13 | 1 | 0 | 0 | 1 | 0 | 28 | 0 | 1 | 1 | 0 | 1 |
| 14 | 1 | 0 | 0 | 0 | 1 | 29 | 0 | 1 | 0 | 1 | 1 |
| 15 | 0 | 1 | 1 | 0 | 0 | 30 | 0 | 0 | 0 | 1 | 1 |
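A Q matrix like the one above drives the DINA model's ideal response, η_j = ∏_k α_k^{q_jk}: an examinee answers item j correctly with probability 1 − s_j when all required skills are mastered, and with the guessing probability g_j otherwise. A small sketch with hypothetical guessing and slipping values:

```python
import numpy as np

q = np.array([[1, 1, 0, 0, 0],    # item requiring skills 1 and 2 (item 11 above)
              [1, 1, 1, 0, 0]])   # item requiring skills 1-3 (item 21 above)
alpha = np.array([1, 1, 0, 0, 0]) # examinee has mastered skills 1 and 2
g, s = 0.2, 0.1                   # illustrative guessing / slipping values

# Ideal response eta_j = prod_k alpha_k^{q_jk}:
# 1 only if every skill the item requires is mastered.
eta = np.all(alpha >= q, axis=1).astype(int)
# P(X_j = 1 | alpha) = (1 - s)^eta * g^(1 - eta)
p = (1 - s) ** eta * g ** (1 - eta)
print(eta.tolist(), p.tolist())   # [1, 0] [0.9, 0.2]
```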
Figure 1. Trace plots of arbitrarily selected item and class membership probability parameters.
Figure 2. The Bias of the intercept, interaction, and latent class parameters under four different noise levels. The Q matrix denotes the skills required for each item along the x axis, where a black square = "1" and a white square = "0." α indicates whether an examinee belonging to the cth latent class has mastered the kth skill, where a black square = "1" (skill present) and a white square = "0" (skill absent). Note that the Bias values are estimated from 25 replications.
Figure 3. The MSE of the intercept, interaction, and class membership probability parameters under four different diagnosticity cases. The Q matrix denotes the skills required for each item along the x axis, where a black square = "1" and a white square = "0." α indicates whether an examinee belonging to the cth latent class has mastered the kth skill, where a black square = "1" (skill present) and a white square = "0" (skill absent). Note that the MSE values are estimated from 25 replications.
The average Bias and MSE for the intercept, interaction, and class membership probability parameters.
| 0.0023 | 0.0046 | 0.0042 | −0.0039 | |
| −0.1077 | −0.1016 | −0.0235 | −0.0248 | |
| −0.0000 | −0.0000 | −0.0000 | −0.0000 | |
| 0.0048 | 0.0163 | 0.0139 | 0.0088 | |
| 0.0141 | 0.0254 | 0.0172 | 0.0198 | |
| 0.0000 | 0.0000 | 0.0000 | 0.0000 | |
| 0.0089 | 0.0089 | 0.0023 | −0.0020 | |
| −0.0890 | 0.0588 | −0.0003 | −0.0041 | |
| −0.0000 | 0.0000 | −0.0000 | 0.0000 | |
| 0.004 | 0.0117 | 0.0078 | 0.0041 | |
| 0.0107 | 0.0239 | 0.0159 | 0.0181 | |
| 0.0000 | 0.0000 | 0.0000 | 0.0000 | |
Note that Bias and MSE denote the average Bias and MSE for the corresponding parameters.
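The Bias and MSE reported throughout are the usual Monte Carlo summaries over replications: Bias = mean(θ̂ − θ) and MSE = mean((θ̂ − θ)²), averaged across parameters. A sketch with hypothetical true values and 25 simulated replications:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([-2.0, 3.5, 0.25])              # hypothetical true values
est = theta + rng.normal(0, 0.1, size=(25, 3))   # estimates from 25 replications

bias = (est - theta).mean(axis=0)                # per-parameter Bias
mse = ((est - theta) ** 2).mean(axis=0)          # per-parameter MSE
print(bias.round(3), mse.round(4))               # small Bias, MSE near 0.01
```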
Figure 4. Trace plots of the PSRF values for simulation study 2.
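The PSRF (potential scale reduction factor) tracked in Figures 4 and 5 is the Gelman-Rubin convergence diagnostic: it compares within-chain and between-chain variance, and values near 1 indicate the chains have mixed. A sketch of the basic (unsplit) statistic on simulated chains:

```python
import numpy as np

def psrf(chains):
    """Potential scale reduction factor (Gelman-Rubin) for an
    (m, n) array of m parallel chains with n draws each."""
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)    # between-chain variance
    var_hat = (n - 1) / n * W + B / n          # pooled variance estimate
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Four chains from the same distribution -> PSRF near 1 (converged).
good = rng.normal(0.0, 1.0, size=(4, 2000))
# Chains centred at different values -> PSRF well above 1 (not converged).
bad = good + np.arange(4)[:, None]
print(round(float(psrf(good)), 3), round(float(psrf(bad)), 2))
```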
Evaluating the accuracy of parameter estimation using the two algorithms in simulation study 2.
| 0.0023 | 0.0048 | 0.0016 | 0.0069 | 0.0021 | 0.0081 | |
| −0.1077 | 0.0141 | −0.1042 | 0.0152 | −0.1087 | 0.0174 | |
| −0.0000 | 0.0000 | −0.0007 | 0.0005 | −0.0004 | 0.0011 | |
Note that Bias and MSE denote the average Bias and MSE for the corresponding parameters.
Evaluating the accuracy of the parameter estimates based on different prior distributions in simulation study 3.
| Type I | Bias | 0.0024 | −0.1044 | −0.0000 |
| MSE | 0.0047 | 0.0134 | 0.0000 | |
| Type II | Bias | 0.0026 | −0.1059 | −0.0000 |
| MSE | 0.0047 | 0.0138 | 0.0000 | |
| Type III | Bias | 0.0022 | −0.1068 | −0.0000 |
| MSE | 0.0048 | 0.0140 | 0.0000 | |
| Type IV | Bias | 0.0023 | −0.1077 | −0.0000 |
| MSE | 0.0048 | 0.0141 | 0.0000 |
Note that Bias and MSE denote the average Bias and MSE for the corresponding parameters.
Evaluating the accuracy of the attribute and class membership probability parameter estimates using the PGGSA and Gibbs algorithms in simulation study 4.
| LNL | PGGSA | 0.8740 | 0.9693 | −0.0000 | 0.0000 |
| Gibbs | 0.8722 | 0.9688 | −0.0000 | 0.0000 | |
| HNL | PGGSA | 0.5643 | 0.8696 | −0.0000 | 0.0000 |
| Gibbs | 0.5697 | 0.8718 | −0.0000 | 0.0000 | |
| SHG | PGGSA | 0.7480 | 0.9336 | −0.0000 | 0.0000 |
| Gibbs | 0.7429 | 0.9308 | −0.0000 | 0.0000 | |
| GHS | PGGSA | 0.8436 | 0.9310 | −0.0000 | 0.0000 |
| Gibbs | 0.8484 | 0.9338 | −0.0000 | 0.0000 | |
Note that CMP denotes the class membership probability; Bias and MSE denote the average Bias and MSE for the class membership probability parameters.
Figure 5. Trace plots of the PSRF values for the real data.
The Q matrix design and the MCMC estimates of the item intercept and interaction parameters.
| 1 | 1 | 0 | 0 | 0 | 0 | −2.3274 | 0.0277 | [−2.4998, −1.9766] | 3.3884 | 0.0662 | [2.8484, 3.8721] |
| 2 | 1 | 1 | 1 | 1 | 0 | −1.2990 | 0.0225 | [−1.5639, −1.0087] | 3.4200 | 0.0947 | [2.8714, 4.0615] |
| 3 | 1 | 0 | 0 | 0 | 0 | −1.2247 | 0.0276 | [−1.5357, −1.0000] | 4.2999 | 0.0294 | [3.9575, 4.4999] |
| 4 | 1 | 1 | 1 | 1 | 1 | −1.8944 | 0.0358 | [−2.2841, −1.5472] | 3.8815 | 0.1217 | [3.2857, 4.4977] |
| 5 | 0 | 0 | 1 | 0 | 0 | −1.7971 | 0.1042 | [−2.4667, −1.2948] | 2.9899 | 0.1145 | [2.5007, 3.6131] |
| 6 | 1 | 1 | 1 | 1 | 0 | −2.3961 | 0.0113 | [−2.4999, −2.1653] | 3.7058 | 0.0817 | [3.1377, 4.2461] |
| 7 | 1 | 1 | 1 | 1 | 0 | −2.1109 | 0.0322 | [−2.4999, −1.8117] | 4.3549 | 0.0223 | [4.0401, 4.4998] |
| 8 | 1 | 1 | 0 | 0 | 0 | −1.3433 | 0.0409 | [−1.7158, −1.0005] | 4.1817 | 0.0558 | [3.7427, 4.4999] |
| 9 | 1 | 0 | 1 | 0 | 0 | −1.6266 | 0.0566 | [−2.0725, −1.1512] | 4.2735 | 0.0384 | [3.8794, 4.4998] |
| 10 | 1 | 0 | 1 | 1 | 1 | −1.5226 | 0.0246 | [−1.8180, −1.2110] | 4.1072 | 0.0796 | [3.5678, 4.4999] |
| 11 | 1 | 0 | 1 | 0 | 0 | −1.7813 | 0.0681 | [−2.3048, −1.2903] | 4.0454 | 0.0884 | [3.5121, 4.4999] |
| 12 | 1 | 0 | 1 | 1 | 0 | −2.3802 | 0.0119 | [−2.4998, −2.1534] | 4.2212 | 0.0481 | [3.7945, 4.4994] |
| 13 | 1 | 1 | 1 | 1 | 0 | −1.8221 | 0.0399 | [−2.2142, −1.4328] | 3.5878 | 0.1009 | [2.9818, 4.1937] |
| 14 | 1 | 1 | 1 | 1 | 1 | −2.4279 | 0.0058 | [−2.4999, −2.2647] | 3.8646 | 0.0982 | [3.3310, 4.4741] |
| 15 | 1 | 1 | 1 | 1 | 0 | −2.4298 | 0.0060 | [−2.4999, −2.2551] | 4.0033 | 0.0765 | [3.5339, 4.4946] |
Note that α denotes the examinee's skill mastery (attribute) pattern.
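The latent-class posterior probabilities reported for the Fraction Subtraction Test follow from Bayes' rule over all 2^K attribute patterns: P(α_c | x) ∝ π_c ∏_j P(x_j | α_c). A sketch for one examinee, using K = 3 skills for brevity and illustrative g, s, Q matrix, and mixing weights rather than the paper's estimates:

```python
import numpy as np
from itertools import product

K, g, s = 3, 0.2, 0.1                       # illustrative sizes and item parameters
q = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0], [1, 1, 1]])  # hypothetical Q matrix
x = np.array([1, 1, 1, 0])                  # observed responses for one examinee

patterns = np.array(list(product([0, 1], repeat=K)))  # all 2^K attribute patterns
pi = np.full(len(patterns), 1 / len(patterns))        # uniform class weights

# DINA likelihood of each response vector under each pattern.
eta = np.all(patterns[:, None, :] >= q[None, :, :], axis=2)  # (2^K, J) ideal responses
p = np.where(eta, 1 - s, g)                 # P(X_j = 1 | alpha_c)
lik = np.prod(np.where(x == 1, p, 1 - p), axis=1)

post = pi * lik / (pi * lik).sum()          # P(alpha_c | x) by Bayes' rule
best = patterns[np.argmax(post)]
print(best.tolist())                        # most probable pattern -> [1, 1, 0]
```

Here the examinee answers the three items needing skills 1 and/or 2 correctly but misses the item needing skill 3, so the posterior mode is the pattern mastering skills 1 and 2 only.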
The posterior probability distribution of the latent class parameters for the Fraction Subtraction Test.
| 0 | 0 | 0 | 0 | 0 | 1.909% | 0.0003 | [0.0000, 0.0542] |
| 1 | 0 | 0 | 0 | 0 | 0.766% | 0.0000 | [0.0000, 0.0208] |
| 0 | 1 | 0 | 0 | 0 | 1.743% | 0.0002 | [0.0000, 0.0504] |
| 0 | 0 | 1 | 0 | 0 | 1.299% | 0.0001 | [0.0000, 0.0367] |
| 0 | 0 | 0 | 1 | 0 | 2.001% | 0.0002 | [0.0000, 0.0533] |
| 0 | 0 | 0 | 0 | 1 | 1.790% | 0.0002 | [0.0000, 0.0540] |
| 1 | 1 | 0 | 0 | 0 | 0.677% | 0.0000 | [0.0000, 0.0190] |
| 1 | 0 | 1 | 0 | 0 | 1.898% | 0.0001 | [0.0000, 0.0443] |
| 1 | 0 | 0 | 1 | 0 | 0.756% | 0.0000 | [0.0000, 0.0203] |
| 1 | 0 | 0 | 0 | 1 | 0.822% | 0.0000 | [0.0000, 0.0222] |
| 0 | 1 | 1 | 0 | 0 | 1.162% | 0.0001 | [0.0000, 0.0339] |
| 0 | 1 | 0 | 1 | 0 | 1.808% | 0.0002 | [0.0000, 0.0507] |
| 0 | 1 | 0 | 0 | 1 | 1.943% | 0.0003 | [0.0000, 0.0567] |
| 0 | 0 | 1 | 1 | 0 | 1.242% | 0.0001 | [0.0000, 0.0330] |
| 0 | 0 | 1 | 0 | 1 | 1.165% | 0.0001 | [0.0000, 0.0328] |
| 0 | 0 | 0 | 1 | 1 | 1.778% | 0.0002 | [0.0000, 0.0486] |
| 1 | 1 | 1 | 0 | 0 | 10.146% | 0.0039 | [0.0002, 0.2029] |
| 1 | 1 | 0 | 1 | 0 | 0.709% | 0.0000 | [0.0000, 0.0198] |
| 1 | 1 | 0 | 0 | 1 | 0.764% | 0.0000 | [0.0000, 0.0205] |
| 1 | 0 | 1 | 1 | 0 | 0.546% | 0.0000 | [0.0000, 0.0140] |
| 1 | 0 | 1 | 0 | 1 | 1.782% | 0.0001 | [0.0000, 0.0419] |
| 1 | 0 | 0 | 1 | 1 | 0.751% | 0.0000 | [0.0000, 0.0201] |
| 0 | 1 | 1 | 1 | 0 | 1.326% | 0.0001 | [0.0000, 0.0370] |
| 0 | 1 | 1 | 0 | 1 | 1.181% | 0.0001 | [0.0000, 0.0357] |
| 0 | 1 | 0 | 1 | 1 | 1.675% | 0.0002 | [0.0000, 0.0473] |
| 0 | 0 | 1 | 1 | 1 | 1.167% | 0.0001 | [0.0000, 0.0335] |
| 1 | 1 | 1 | 1 | 0 | 9.680% | 0.0002 | [0.0667, 0.1264] |
| 1 | 1 | 1 | 0 | 1 | 11.119% | 0.0038 | [0.0001, 0.2078] |
| 1 | 1 | 0 | 1 | 1 | 0.688% | 0.0000 | [0.0000, 0.0195] |
| 1 | 0 | 1 | 1 | 1 | 0.429% | 0.0000 | [0.0000, 0.0119] |
| 0 | 1 | 1 | 1 | 1 | 1.119% | 0.0001 | [0.0000, 0.0320] |
| 1 | 1 | 1 | 1 | 1 | 34.142% | 0.0004 | [0.2998, 0.3844] |
Note that α denotes the examinee's skill mastery (attribute) pattern.