Abstract
The limitations of the kappa statistic as a measure of inter-observer variation in categorical assessments are shown by means of a hypothetical example. An alternative method for assessing the inter-observer variation of categorical variables, the proportion of agreement, is applied to the same example, and reasons why it is preferable to the kappa statistic are given. Because this method allows measurement of the inherent difficulty of carrying out a particular assessment, it has wide applicability in the introduction of new technology. If a proportion-of-agreement study shows poor inter-observer agreement for a new method, the technology must either be improved or abandoned.
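The abstract does not reproduce the formulas or the hypothetical example, but the two statistics it contrasts are standard. Below is a minimal Python sketch, using an invented 2x2 inter-observer table (not the paper's own data), showing how a high proportion of agreement can coexist with a modest kappa when category prevalence is skewed; the function name and table values are illustrative assumptions.

```python
# Illustrative comparison of the proportion of agreement and Cohen's kappa
# for a hypothetical 2x2 inter-observer table (not the example from the paper).

def agreement_statistics(table):
    """Return (proportion of agreement, Cohen's kappa) for a square
    inter-observer contingency table given as a list of lists."""
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: proportion of cases where both observers agree.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance-expected agreement, computed from the marginal totals.
    row_totals = [sum(table[i]) for i in range(k)]
    col_totals = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_e = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Hypothetical table: rows are observer A's categories, columns observer B's.
# The skewed prevalence keeps observed agreement high while kappa stays modest.
table = [[85, 5],
         [5, 5]]
p_o, kappa = agreement_statistics(table)
print(f"Proportion of agreement: {p_o:.2f}")  # 0.90
print(f"Cohen's kappa:           {kappa:.2f}")  # ~0.44
```

In this sketch the two observers agree on 90% of cases, yet kappa is only about 0.44 because chance-expected agreement is high; this kind of divergence is the sort of limitation the abstract attributes to the kappa statistic.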
Year: 1991 PMID: 1670851 DOI: 10.1016/0140-6736(91)92169-3
Source DB: PubMed Journal: Lancet ISSN: 0140-6736 Impact factor: 79.321