
A mixture model approach to indexing rater agreement.

Christof Schuster

Abstract

Raters are an important potential source of measurement error when assigning targets to categories. Therefore, psychologists have devoted considerable attention to quantifying the extent to which ratings agree with each other. Two main approaches to analysing rater agreement data can be distinguished. While the first approach focuses on the development of summary statistics that index rater agreement, the second models the association pattern among the observers' ratings. Within the modelling approach three groups of models can be distinguished: latent class models, simple quasi-symmetric agreement models, and mixture models. This paper discusses a class of mixture models that is defined by its characteristic of having a quasi-symmetric log-linear representation. This class of models has two interesting properties. First, the simple quasi-symmetric agreement models can be shown to be members of this class. Therefore, the results of a rater agreement analysis based on a simple quasi-symmetric agreement model may be interpreted in the mixture model framework. Second, since the mixture models readily provide a familiar measure of rater reliability, it is possible to obtain a model-based estimate of rater reliability from the simple quasi-symmetric agreement models. The suggested class of mixture models will be illustrated using data from a persuasive communication study in which three raters classified respondents on the basis of their elicited cognitive responses.
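For context on the summary-statistic approach the abstract contrasts with the modelling approach, the following is a minimal illustrative sketch of the classical chance-corrected agreement index (Cohen's kappa) for two raters. It is not the paper's mixture-model-based reliability estimate; the 3x3 table below is hypothetical.

```python
def cohens_kappa(table):
    """Cohen's kappa from a KxK table, where table[i][j] is the number of
    targets assigned category i by rater 1 and category j by rater 2."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # observed proportion of agreement (diagonal cells)
    p_obs = sum(table[i][i] for i in range(k)) / n
    # marginal category proportions for each rater
    row = [sum(table[i]) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    # agreement expected by chance under marginal independence
    p_exp = sum(row[i] * col[i] for i in range(k))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical cross-classification of 100 targets by two raters
table = [[30, 5, 2],
         [4, 25, 3],
         [1, 6, 24]]
print(round(cohens_kappa(table), 3))  # prints 0.684
```

Kappa corrects raw percentage agreement (here 0.79) for the agreement expected if both raters assigned categories independently according to their marginal rates; the modelling approaches discussed in the paper instead characterize the full association pattern in such a table.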


Year:  2002        PMID: 12473229     DOI: 10.1348/000711002760554598

Source DB:  PubMed          Journal:  Br J Math Stat Psychol        ISSN: 0007-1102            Impact factor:   3.380


Related articles: 3 in total

1.  Log-Linear Modeling of Agreement among Expert Exposure Assessors.

Authors:  Phillip R Hunt; Melissa C Friesen; Susan Sama; Louise Ryan; Donald Milton
Journal:  Ann Occup Hyg       Date:  2015-03-06

2.  Large-Sample Variance of Fleiss Generalized Kappa.

Authors:  Kilem L Gwet
Journal:  Educ Psychol Meas       Date:  2021-02-15       Impact factor: 3.088

3.  Measuring inter-rater reliability for nominal data - which coefficients and confidence intervals are appropriate?

Authors:  Antonia Zapf; Stefanie Castell; Lars Morawietz; André Karch
Journal:  BMC Med Res Methodol       Date:  2016-08-05       Impact factor: 4.615

