
Developing Multistage Tests Using D-Scoring Method.

Kyung Chris T Han, Dimiter M Dimitrov, Faisal Al-Mashary.

Abstract

The D-scoring method for scoring and equating tests with binary items, proposed by Dimitrov, offers some of the advantages of item response theory, such as item-level difficulty information and score computation that reflects item difficulties, while retaining the merits of classical test theory, such as the simplicity of number-correct score computation and relaxed sample-size requirements. Because of this unique combination of merits, the D-scoring method has seen quick adoption in the educational and psychological measurement field. Given that item-level difficulty information is available under the D-scoring method and item difficulties are reflected in test scores, it conceptually makes sense to use the D-scoring method with adaptive test designs such as multistage testing (MST). In this study, we developed and compared several versions of the MST mechanism using the D-scoring approach, and we proposed and implemented a new framework for conducting MST simulation under the D-scoring method. Our findings suggest that the score recovery performance under MST with D-scoring was promising, as it retained score comparability across different MST paths. We found that MST using the D-scoring method can achieve improvements in measurement precision and efficiency over linear tests that use the D-scoring method.
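As a rough illustration of the idea that "score computation reflects the item difficulties" (this is a minimal sketch, not the authors' exact formulation; the function name, the weighting scheme, and the example values are illustrative assumptions), a difficulty-weighted score for binary items can be computed as the sum of difficulty weights over correctly answered items, divided by the total weight:

```python
def d_score(responses, difficulties):
    """Difficulty-weighted proportion score for binary items (illustrative sketch).

    responses    : list of 0/1 item scores for one examinee
    difficulties : list of item difficulty weights in (0, 1),
                   e.g. delta_j = 1 - p_j, where p_j is the proportion
                   of examinees answering item j correctly
    Returns a score in [0, 1]: 0 if all items are wrong, 1 if all are correct.
    Harder items (larger weights) contribute more to the score.
    """
    total = sum(difficulties)
    if total == 0:
        raise ValueError("difficulty weights must not all be zero")
    return sum(u * d for u, d in zip(responses, difficulties)) / total

# Example: three items; missing the hardest item (weight 0.5) costs the most.
print(d_score([1, 1, 0], [0.2, 0.3, 0.5]))  # 0.5
```

Under a scheme like this, two examinees with the same number-correct score can receive different scaled scores depending on which items they answered correctly, which is what makes difficulty-aware routing in an MST design conceptually natural.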

Keywords:  item calibration; multistage testing; scoring; simulation; test construction

Year:  2019        PMID: 31488922      PMCID: PMC6713982          DOI: 10.1177/0013164419841428

Source DB:  PubMed          Journal:  Educ Psychol Meas        ISSN: 0013-1644            Impact factor:   2.821


Similar articles (4 in total)

1.  Reliability and true-score measures of binary items as a function of their Rasch difficulty parameter.

Authors:  Dimiter M Dimitrov
Journal:  J Appl Meas       Date:  2003

2.  A Note on the D-Scoring Method Adapted for Polytomous Test Items.

Authors:  Dimiter M Dimitrov; Yong Luo
Journal:  Educ Psychol Meas       Date:  2018-07-04       Impact factor: 2.821

3.  An Approach to Scoring and Equating Tests With Binary Items: Piloting With Large-Scale Assessments.

Authors:  Dimiter M Dimitrov
Journal:  Educ Psychol Meas       Date:  2016-02-16       Impact factor: 2.821

4.  [Review] Components of the item selection algorithm in computerized adaptive testing.

Authors:  Kyung Chris Tyek Han
Journal:  J Educ Eval Health Prof       Date:  2018-03-24
