
Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter.

Sherri A Braksick1,2, Yunxia Wang1, Suzanne L Hunt3, William Cathcart-Rake4, Jon P Schrage5, Gary S Gronseth1.   

Abstract

PURPOSE: Class rank and clerkship grades impact a medical student's residency application. The variability and inter-rater reliability of assessment across multiple clinical sites within a single university system are unknown. We aimed to determine whether medical student assessment across medical school campuses is consistent when a standardized scoring rubric is used.
DESIGN/METHODS: Attending physicians who participate in the assignment of neurology clerkship grades at three separate clinical campuses of the same medical school observed 10 identical standardized patient encounters completed by third-year medical students during the 2017-2018 academic year. Scoring was completed using a standardized rubric. Descriptive analyses and inter-rater comparisons were performed; all evaluations for this study were completed in 2018.
RESULTS: Of 50 possible points for the patient encounter, the median score across all medical students and evaluators was 43 (IQR 40-45.5). Evaluator 1 assigned significantly lower overall scores than evaluators 2 and 3 (p = 0.0001 and p = 0.0006, respectively), whose overall assessments of the medical students were consistently similar (p = 0.46). Overall agreement between evaluators was good (ICC = 0.805, 95% CI 0.36-0.95) and consistency was excellent (ICC = 0.91, 95% CI 0.75-0.97).
CONCLUSIONS: Medical student evaluation across multiple clinical campus sites via observation of identical standardized patient encounters and use of a standardized scoring rubric generally demonstrated good inter-rater agreement and consistency, but the small variation seen may affect overall clerkship scores. © International Association of Medical Science Educators 2020.
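The agreement and consistency figures reported above are intraclass correlation coefficients from a two-way rating design (subjects crossed with raters). As a hedged illustration only (this is not the authors' code, and the rating matrix below is hypothetical, not the study data), single-rater absolute-agreement ICC(2,1) and consistency ICC(3,1) can be computed from the two-way ANOVA mean squares as follows:

```python
# Sketch of two-way single-rater ICC computation (hypothetical data, not the study's).

def icc_two_way(ratings):
    """ratings: list of rows (subjects), each a list of scores from k raters."""
    n = len(ratings)        # number of subjects (e.g., student encounters)
    k = len(ratings[0])     # number of raters (e.g., evaluators)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((ratings[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    # ICC(2,1): absolute agreement -- penalizes systematic rater differences
    agreement = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    # ICC(3,1): consistency -- ignores systematic rater differences
    consistency = (msr - mse) / (msr + (k - 1) * mse)
    return agreement, consistency

# Hypothetical scores: 5 encounters rated by 3 evaluators out of 50 points.
example = [[43, 45, 44], [40, 41, 42], [45, 46, 45], [38, 40, 41], [44, 45, 46]]
agreement, consistency = icc_two_way(example)
print(f"agreement ICC(2,1) = {agreement:.3f}, consistency ICC(3,1) = {consistency:.3f}")
```

Note the pattern the study reports falls out of these formulas: a rater who is systematically lower (like evaluator 1 here) lowers absolute agreement but not consistency, which is why the consistency ICC exceeds the agreement ICC.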


Keywords:  Medical education; Multi-campus medical school; Neurology; Standardized patient

Year:  2020        PMID: 34457681      PMCID: PMC8368357          DOI: 10.1007/s40670-020-00916-1

Source DB:  PubMed          Journal:  Med Sci Educ        ISSN: 2156-8650


Related articles: 6 in total

1.  Inter-rater reliability and generalizability of patient note scores using a scoring rubric based on the USMLE Step-2 CS format.

Authors:  Yoon Soo Park; Abbas Hyderi; Georges Bordage; Kuan Xing; Rachel Yudkowsky
Journal:  Adv Health Sci Educ Theory Pract       Date:  2016-01-12       Impact factor: 3.853

2.  Incorporating simulation technology into a neurology clerkship.

Authors:  David Matthew Ermak; Douglas W Bower; Jody Wood; Elizabeth H Sinz; Milind J Kothari
Journal:  J Am Osteopath Assoc       Date:  2013-08

3.  Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

Authors:  Yoon Soo Park; Abbas Hyderi; Nancy Heine; Win May; Andrew Nevins; Ming Lee; Georges Bordage; Rachel Yudkowsky
Journal:  Acad Med       Date:  2017-11       Impact factor: 6.893

4.  Neurology Education for Critical Care Fellows Using High-Fidelity Simulation.

Authors:  Sherri A Braksick; Kianoush Kashani; Sara Hocker
Journal:  Neurocrit Care       Date:  2017-02       Impact factor: 3.210

5.  An overview of the uses of standardized patients for teaching and evaluating clinical skills. AAMC. [Review]

Authors:  H S Barrows
Journal:  Acad Med       Date:  1993-06       Impact factor: 6.893

6.  Standardized patient outcomes trial (SPOT) in neurology.

Authors:  Joseph E Safdieh; Andrew L Lin; Juliet Aizer; Peter M Marzuk; Bernice Grafstein; Carol Storey-Johnson; Yoon Kang
Journal:  Med Educ Online       Date:  2011-01-14
