James Soland, Megan Kuhfeld.
Abstract
Researchers in the social sciences often obtain ratings of a construct of interest from multiple raters. While using multiple raters helps mitigate the subjectivity of any single person's responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, Bauer et al. (2013) developed the "trifactor model" to provide applied researchers with a straightforward way of estimating scores that are purged of variance that is idiosyncratic to particular raters. Although the model is intended to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications, and then compare model fit, bias, and convergence rates.
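As a rough sketch (notation ours, not taken from the abstract), the trifactor decomposition for the response of person $j$ on item $i$ as scored by rater $r$ can be written as

```latex
y_{ijr} = \lambda^{C}_{i}\,\eta^{C}_{j} \;+\; \lambda^{R}_{i}\,\eta^{R}_{jr} \;+\; \lambda^{I}_{i}\,\eta^{I}_{ji} \;+\; \varepsilon_{ijr},
```

where $\eta^{C}_{j}$ is the common factor representing the target construct, $\eta^{R}_{jr}$ are rater-perspective factors absorbing variance idiosyncratic to rater $r$, $\eta^{I}_{ji}$ are item-specific factors, and the factors are assumed mutually orthogonal. Scores on $\eta^{C}_{j}$ are then "purged" of rater-specific variance in the sense that it has been routed into the $\eta^{R}_{jr}$ terms.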
Keywords: item response theory; multiple raters; trifactor model
Year: 2021 PMID: 34898747 PMCID: PMC8655468 DOI: 10.1177/01466216211051728
Source DB: PubMed Journal: Appl Psychol Meas ISSN: 0146-6216