| Literature DB >> 30919664 |
Hyo Jeong Shin, Sophia Rabe-Hesketh, Mark Wilson.
Abstract
In this study, we extend and assess the trifactor model for multiple-ratings data, in which two different raters give independent scores for the same responses (e.g., GRE essays or a subset of PISA constructed responses). The trifactor model was extended to incorporate a cross-classified data structure (e.g., items and raters) instead of a strictly hierarchical structure. We present a set of simulations designed to reflect the incompleteness and imbalance of real-world assessments. Two sets of simulations investigate the effects of the rate of missingness in the data and of ignoring differences among raters. The use of the trifactor model is also illustrated with an empirical data analysis of a well-known international large-scale assessment.
Keywords: Programme for International Student Assessment (PISA); trifactor model; multiple-ratings data; rater effects; validity
Year: 2019 PMID: 30919664 DOI: 10.1080/00273171.2018.1530091
Source DB: PubMed Journal: Multivariate Behav Res ISSN: 0027-3171 Impact factor: 5.923