Abstract
A review of the grading process used in an obstetrics and gynecology clerkship prompted an analysis of physician ratings of student clinical performance. The study assessed the following: 1) the degree to which raters distinguished among six categories of performance, 2) the concordance among raters in terms of evaluation criteria used, 3) the degree of inter-rater agreement, and 4) the relationship between the ratings and student performance on the National Board of Medical Examiners Subject Examination in obstetrics and gynecology. Data from physician ratings and examination scores of 82 students were analyzed. The physicians received no standardized training in using the evaluation form. Seven raters (three faculty, four residents [one per postgraduate year]) were randomly selected from each student's set of evaluations. Contrary to expectations, physicians made global ratings of the students using a single criterion and did not distinguish among the six evaluation categories. Each of the five groups of physicians used a different criterion to evaluate students. The ratings were inflated and suffered from low inter-rater reliability. First-year residents were more lenient in their grading tendencies than the other physician groups. The ratings showed a weak correlation with the examination scores, and the strength of this relationship varied with the physician group and performance category. These findings and supporting literature should remind clerkship directors to periodically check the quality of clinical performance ratings and to recognize the limitations of these ratings for grading purposes. Suggestions are presented for improving the student evaluation process.
Year: 1991 PMID: 2047054
Source DB: PubMed Journal: Obstet Gynecol ISSN: 0029-7844 Impact factor: 7.661