
The Stabilizing Influences of Linking Set Size and Model-Data Fit in Sparse Rater-Mediated Assessment Networks.

Stefanie A. Wind; Eli Jones

Abstract

Previous research includes frequent admonitions regarding the importance of establishing connectivity in data collection designs prior to the application of Rasch models. However, the influence of characteristics of the linking sets used to establish connections among facets, such as locations on the latent variable, model-data fit, and sample size, has not been thoroughly explored. These considerations are particularly important in assessment systems that involve large proportions of missing data (i.e., sparse designs) and are associated with high-stakes decisions, such as teacher evaluations based on teaching observations. The purpose of this study is to explore the influence of characteristics of linking sets in sparsely connected rating designs on examinee, rater, and task estimates. A simulation design whose characteristics were intended to reflect practical large-scale assessment networks with sparse connections was used to examine how the latent-variable locations, model-data fit, and sample size of the linking sets affect the stability and model-data fit of estimates. Results suggested that parameter estimates for the examinee and task facets are quite robust to modifications in the size, model-data fit, and latent-variable location of the link. Parameter estimates for the rater facet, while still quite robust, are more sensitive to reductions in link size. The implications are discussed as they relate to research, theory, and practice.
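For context, the many-facet Rasch model (MFRM) underlying analyses of this kind models the log-odds of an examinee receiving rating category k rather than k-1 as an additive function of the examinee, rater, and task facets named in the abstract. The notation below follows common MFRM conventions and is not drawn from the article itself:

```latex
\log\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
  = \theta_n - \lambda_i - \delta_j - \tau_k
```

where \(\theta_n\) is the location (ability) of examinee \(n\), \(\lambda_i\) the severity of rater \(i\), \(\delta_j\) the difficulty of task \(j\), and \(\tau_k\) the threshold for rating category \(k\). Sparse designs leave many examinee-rater-task combinations unobserved, which is why linking sets are needed to place all facet estimates on a common scale.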

Keywords:  Rasch measurement; rater-mediated assessments; sparse designs; teacher evaluation

Year:  2017        PMID: 30147122      PMCID: PMC6096472          DOI: 10.1177/0013164417703733

Source DB:  PubMed          Journal:  Educ Psychol Meas        ISSN: 0013-1644            Impact factor:   2.821


  7 in total

1.  Many-facet Rasch analysis with crossed, nested, and mixed designs.

Authors:  R E Schumacker
Journal:  J Outcome Meas       Date:  1999

2.  Construction of measures from many-facet data.

Authors:  John M Linacre; Benjamin D Wright
Journal:  J Appl Meas       Date:  2002

3.  Correcting performance-rating errors in oral examinations.

Authors:  M R Raymond; L C Webb; W M Houston
Journal:  Eval Health Prof       Date:  1991-03       Impact factor: 2.651

4.  Instrument development tools and activities for measure validation using Rasch models: part II--validation activities.

Authors:  Edward W Wolfe; Everett V Smith
Journal:  J Appl Meas       Date:  2007

5.  Constructing rater and task banks for performance assessments.

Authors:  G Engelhard
Journal:  J Outcome Meas       Date:  1997

6.  Using item mean squares to evaluate fit to the Rasch model.

Authors:  R M Smith; R E Schumacker; M J Bush
Journal:  J Outcome Meas       Date:  1998

7.  Detection and validation of unscalable item score patterns using item response theory: an illustration with Harter's Self-Perception Profile for Children.

Authors:  Rob R Meijer; Iris J L Egberink; Wilco H M Emons; Klaas Sijtsma
Journal:  J Pers Assess       Date:  2008-05
  2 in total

1.  Detecting Rater Biases in Sparse Rater-Mediated Assessment Networks.

Authors:  Stefanie A Wind; Yuan Ge
Journal:  Educ Psychol Meas       Date:  2021-01-19       Impact factor: 3.088

2.  Determining the influence of different linking patterns on the stability of students' score adjustments produced using Video-based Examiner Score Comparison and Adjustment (VESCA).

Authors:  Peter Yeates; Gareth McCray; Alice Moult; Natalie Cope; Richard Fuller; Robert McKinley
Journal:  BMC Med Educ       Date:  2022-01-17       Impact factor: 2.463

