Andrew S Parsons1, Kelley Mark2, James R Martindale3, Megan J Bray4, Ryan P Smith5, Elizabeth Bradley3, Maryellen Gusic3,6. 1. Departments of Medicine and Public Health Sciences, University of Virginia School of Medicine, Charlottesville, Virginia, USA. Asp5c@virginia.edu. 2. University of Virginia School of Medicine, Charlottesville, Virginia, USA. 3. Office of Medical Education, Center for Medical Education Research and Scholarly Innovation, University of Virginia School of Medicine, Charlottesville, Virginia, USA. 4. Department of Obstetrics and Gynecology, University of Virginia School of Medicine, Charlottesville, Virginia, USA. 5. Department of Urology, University of Virginia School of Medicine, Charlottesville, Virginia, USA. 6. Departments of Biomedical Education and Data Science and Pediatrics, Lewis Katz School of Medicine, Temple University, Philadelphia, Pennsylvania, USA.
Abstract
BACKGROUND: Use of EPA-based entrustment-supervision ratings to determine a learner's readiness to assume patient care responsibilities is expanding.
OBJECTIVE: In this study, we investigated the correlation between narrative comments and supervision ratings assigned during ad hoc assessments of medical students' performance of EPA tasks.
DESIGN: Data from assessments completed for students enrolled in the clerkship phase over two academic years were used to extract a stratified random sample of 100 narrative comments for review by an expert panel.
PARTICIPANTS: A review panel, composed of faculty with specific expertise related to their roles within the EPA program, provided a "gold standard" supervision rating using the comments provided by the original assessor.
MAIN MEASURES: Interrater reliability (IRR) among members of the review panel and correlation coefficients (CC) between expert ratings and the supervision ratings from the original assessors.
KEY RESULTS: IRR among members of the expert panel ranged from .536 for comments associated with focused history taking to .833 for those associated with the complete physical exam. CCs (Kendall's W) between panel members' supervision ratings and the ratings provided by the original assessors for history taking, physical examination, and oral presentation comments were .668, .697, and .735, respectively. The expert panel's supervision ratings correlated most strongly with ratings from assessments done by master assessors, faculty trained to assess students across clinical contexts. Correlation between the supervision ratings provided with the narrative comments at the time of observation and the ratings assigned by the expert panel differed by clinical discipline, perhaps reflecting the value placed on, and the comfort level with, assessment of the task in a given specialty.
CONCLUSIONS: To realize the full educational and catalytic effect of EPA assessments, assessors must apply established performance expectations and provide high-quality narrative comments aligned with these criteria.