
Comparing checklists and databases with physicians' ratings as measures of students' history and physical-examination skills.

H M MacRae, N V Vu, B Graham, M Word-Sims, J A Colliver, R S Robbs.

Abstract

PURPOSE: To compare two methods of rating students' performances on history and physical examination: (1) by using checklists completed by standardized patients (SPs) and databases completed by students, and (2) by using ratings of students by three physicians for each SP-student encounter.
METHOD: Four cases were chosen for the study, and 30 students were examined per case. The students were all in their fourth year at the Southern Illinois University School of Medicine in the spring of 1991. Two of the cases had both checklists and databases, and the remaining two had databases only. Each SP-student encounter was videotaped and was viewed independently by three physicians unfamiliar with the contents of the checklists and databases. The physicians' pooled ratings were then compared with the checklist and database scores. Uncorrected and corrected correlations were obtained, with the generalizability coefficient used as the index of reliability.
RESULTS: Interrater generalizability of physicians' ratings was very good, ranging from .65 to .93 for overall ratings. Generalizability of physicians' ratings pooled across the four cases was .85. Checklist scores tended to correlate higher with physicians' ratings than did database scores: across the cases, correlation coefficients between physicians' ratings and checklist scores and database scores were .65 and .39, respectively.
CONCLUSION: The checklist scores correlated strongly with the physicians' ratings of history and physical-examination skills, providing some evidence of validity for their use. The checklist scores correlated much better with the physicians' ratings than did the database scores. Possible explanations for this finding are discussed.
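
Note on the statistics: the "uncorrected and corrected correlations" in METHOD most likely refer to the standard correction for attenuation, in which an observed correlation is divided by the square root of the product of the two measures' reliabilities (here, generalizability coefficients); that reading is an assumption, not stated in the abstract. The sketch below only illustrates that calculation. The variable names, reliability values, and simulated scores are hypothetical and are not data or code from the study.

import numpy as np

def disattenuated_r(x, y, rel_x, rel_y):
    # Pearson correlation between x and y, plus the same correlation
    # corrected for attenuation: r_obs / sqrt(rel_x * rel_y).
    # rel_x and rel_y are reliability estimates for the two measures
    # (e.g., generalizability coefficients). All inputs here are assumed
    # values for illustration only.
    r_obs = np.corrcoef(x, y)[0, 1]
    return r_obs, r_obs / np.sqrt(rel_x * rel_y)

# Hypothetical example: 30 students' checklist scores and pooled
# physician ratings for one case (simulated values, not study data).
rng = np.random.default_rng(0)
checklist = rng.normal(70, 10, 30)
ratings = 0.6 * checklist + rng.normal(0, 8, 30)

r_obs, r_corr = disattenuated_r(checklist, ratings, rel_x=0.70, rel_y=0.85)
print(f"uncorrected r = {r_obs:.2f}, corrected r = {r_corr:.2f}")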

Year:  1995        PMID: 7718064     DOI: 10.1097/00001888-199504000-00015

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


Related articles: 8 in total

1.  The risks of thoroughness: Reliability and validity of global ratings and checklists in an OSCE.

Authors:  J P Cunnington; A J Neville; G R Norman
Journal:  Adv Health Sci Educ Theory Pract       Date:  1996-01       Impact factor: 3.853

2.  Improving in-training evaluation programs.

Authors:  J Turnbull; J Gray; J MacFadyen
Journal:  J Gen Intern Med       Date:  1998-05       Impact factor: 5.128

3.  Assessing the Utility of a Quality-of-Care Assessment Tool Used in Assessing Comprehensive Care Services Provided by Community Health Workers in South Africa.

Authors:  Olukemi Babalola; Jane Goudge; Jonathan Levin; Celia Brown; Frances Griffiths
Journal:  Front Public Health       Date:  2022-05-16

4.  A Single-Blinded, Direct Observational Study of PGY-1 Interns and PGY-2 Residents in Evaluating their History-Taking and Physical-Examination Skills.

Authors:  Sandeep Sharma
Journal:  Perm J       Date:  2011

5.  Standardized patient outcomes trial (SPOT) in neurology.

Authors:  Joseph E Safdieh; Andrew L Lin; Juliet Aizer; Peter M Marzuk; Bernice Grafstein; Carol Storey-Johnson; Yoon Kang
Journal:  Med Educ Online       Date:  2011-01-14

6.  Birth, death, and resurrection of the physical examination: clinical and academic perspectives on bedside diagnosis. [Review]

Authors:  A J Peixoto
Journal:  Yale J Biol Med       Date:  2001 Jul-Aug

7.  Necessity of introducing postencounter note describing history and physical examination at clinical performance examination in Korea.

Authors:  Jonghoon Kim
Journal:  Korean J Med Educ       Date:  2014-06-01

8.  The Change of CPX Scores according to Repeated CPXs.

Authors:  Yoon Hee Lee; Jae Hyun Park; Jin Kyung Ko; Hyo Bin Yoo
Journal:  Korean J Med Educ       Date:  2011-09-30
