Abstract
A meta-analysis of studies examining the interrater reliability of the standard practice of peer assessment of quality of care was conducted. Using the Medline, Health Planning and Administration, and SCISEARCH databases, the English-language literature from 1966 through 1991 was searched for studies of chance-corrected agreement among peer reviewers. The weighted mean kappa of 21 independent findings from 13 studies was .31. Comparison of this result with widely used standards suggests that the interrater reliability of peer assessment is quite limited and needs improvement. Research should be directed at modifying the peer review process to improve its reliability, or at identifying indexes of quality with sufficient validity and reliability that they can be employed without subsequent peer review.
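The chance-corrected agreement statistic pooled in this meta-analysis is the kappa coefficient. A minimal sketch of Cohen's kappa for two raters is shown below; the ratings and the 0/1 "quality of care" coding are hypothetical, chosen only to illustrate the calculation, not data from any reviewed study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    observed proportion of agreement and p_e is the proportion of
    agreement expected by chance from each rater's marginal rates."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of marginal probabilities per category.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical peer-review ratings (1 = acceptable care, 0 = substandard).
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 1, 1, 0, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

A kappa near the reported pooled value of .31 indicates agreement only modestly better than chance, which is the basis for the article's conclusion that peer-assessment reliability is limited.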
Year: 1994 PMID: 10132480 DOI: 10.1177/016327879401700101
Source DB: PubMed Journal: Eval Health Prof ISSN: 0163-2787 Impact factor: 2.651