J F Reed.
Abstract
The measurement of intra-observer agreement when the data are categorical has been studied by several investigators since Cohen first proposed kappa (κ) as a chance-corrected coefficient of agreement for nominal scales. Subsequent procedures have been developed to assess the agreement of several raters using a dichotomous classification scheme, to assess majority agreement among several raters using a polytomous classification scheme, and to use kappa as an indicator of the quality of a measurement. Further developments include inference procedures for testing the homogeneity of k >= 2 independent kappa statistics. An executable FORTRAN program for testing the homogeneity of kappa statistics (κ_h) across multiple sites or studies is given. The FORTRAN program listing and/or executable programs are available from the author on request.
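The homogeneity test the abstract refers to can be sketched as follows. This is a minimal Python sketch, not the author's FORTRAN program; it assumes the standard inverse-variance-weighted chi-square approach (pool the kappas by inverse-variance weights, then sum the squared standardized deviations, referring the result to a chi-square distribution with k - 1 degrees of freedom). The function name and the sample values below are illustrative, not from the paper:

```python
def kappa_homogeneity(kappas, variances):
    """Chi-square test for homogeneity of k independent kappa statistics.

    kappas    : list of kappa estimates, one per site/study
    variances : estimated variance of each kappa

    Returns (pooled_kappa, chi2, df); the p-value follows from the
    chi-square distribution with df = k - 1 degrees of freedom.
    """
    # Inverse-variance weights: more precise estimates count more.
    weights = [1.0 / v for v in variances]
    # Weighted (pooled) kappa across sites.
    pooled = sum(w * k for w, k in zip(weights, kappas)) / sum(weights)
    # Sum of squared standardized deviations from the pooled value.
    chi2 = sum(w * (k - pooled) ** 2 for w, k in zip(weights, kappas))
    df = len(kappas) - 1
    return pooled, chi2, df

# Illustrative values for three sites (hypothetical data):
pooled, chi2, df = kappa_homogeneity([0.65, 0.70, 0.60],
                                     [0.004, 0.005, 0.006])
```

A large chi-square relative to its k - 1 degrees of freedom would indicate that the site-specific kappas are not homogeneous and should not be pooled.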
Year: 2000 PMID: 10927153 DOI: 10.1016/s0169-2607(00)00074-2
Source DB: PubMed Journal: Comput Methods Programs Biomed ISSN: 0169-2607 Impact factor: 5.428