
Generalizability of Competency Assessment Scores Across and Within Clerkships: How Students, Assessors, and Clerkships Matter.

Nikki L Bibler Zaidi, Clarence D Kreiter, Peris R Castaneda, Jocelyn H Schiller, Jun Yang, Cyril M Grum, Maya M Hammoud, Larry D Gruppen, Sally A Santen.

Abstract

PURPOSE: Many factors influence the reliable assessment of medical students' competencies in the clerkships. The purpose of this study was to determine how many clerkship competency assessment scores were necessary to achieve an acceptable threshold of reliability.
METHOD: Clerkship student assessment data were collected during the 2015-2016 academic year as part of the medical school assessment program at the University of Michigan Medical School. Faculty and residents assigned competency assessment scores for third-year core clerkship students. Generalizability (G) and decision (D) studies were conducted using balanced, stratified, and random samples to examine the extent to which overall assessment scores could reliably differentiate between students' competency levels both within and across clerkships.
RESULTS: In the across-clerkship model, the residual error accounted for the largest proportion of variance (75%), whereas the variance attributed to the student and student-clerkship effects was much smaller (7% and 10.1%, respectively). D studies indicated that generalizability estimates for eight assessors within a clerkship varied across clerkships (G coefficients range = 0.000-0.795). Within clerkships, the number of assessors needed for optimal reliability varied from 4 to 17.
CONCLUSIONS: Minimal reliability was found in competency assessment scores for half of clerkships. The variability in reliability estimates across clerkships may be attributable to differences in scoring processes and assessor training. Other medical schools face similar variation in assessments of clerkship students; therefore, the authors hope this study will serve as a model for other institutions that wish to examine the reliability of their clerkship assessment scores.
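The D-study projections described above follow standard generalizability theory: reliability rises as assessor scores are averaged, because the error variance shrinks by the number of assessors. Below is a minimal sketch of that relationship for a simple student × assessor design, using illustrative variance proportions echoing the across-clerkship model reported in the abstract (student ≈ 0.07, residual ≈ 0.75). The function name and the exact numbers are assumptions for illustration, not the authors' analysis, which involved additional facets and stratified sampling.

```python
def g_coefficient(var_student: float, var_residual: float, n_assessors: int) -> float:
    """Relative G coefficient for a student x assessor D study:
    E(rho^2) = var_student / (var_student + var_residual / n_assessors).
    Averaging over more assessors divides the residual error variance by n."""
    return var_student / (var_student + var_residual / n_assessors)

# Illustrative proportions of total variance (student = 0.07, residual = 0.75),
# showing how projected reliability grows with the number of assessors.
for n in (1, 4, 8, 17):
    print(f"{n:2d} assessors -> G = {g_coefficient(0.07, 0.75, n):.3f}")
```

With these inputs, reliability stays well below common thresholds (e.g., 0.70) even at eight assessors, which is consistent with the abstract's finding that some clerkships needed as many as 17 assessors for optimal reliability.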

Year:  2018        PMID: 29697428     DOI: 10.1097/ACM.0000000000002262

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


  5 in total

1.  Clerkship Grading and the U.S. Economy: What Medical Education Can Learn From America's Economic History.

Authors:  Michael S Ryan; E Marshall Brooks; Komal Safdar; Sally A Santen
Journal:  Acad Med       Date:  2021-02-01       Impact factor: 6.893

2.  Clerkship Grading Committees: The Impact of Group Decision-Making for Clerkship Grading.

Authors:  Annabel K Frank; Patricia O'Sullivan; Lynnea M Mills; Virginie Muller-Juge; Karen E Hauer
Journal:  J Gen Intern Med       Date:  2019-05       Impact factor: 5.128

3.  Amplifying the Student Voice: Medical Student Perceptions of AΩA.

Authors:  Jeremy M Jones; Alexandra B Berman; Erik X Tan; Sarthak Mohanty; Michelle A Rose; Judy A Shea; Jennifer R Kogan
Journal:  J Gen Intern Med       Date:  2022-06-28       Impact factor: 5.128

4.  WBAs in UME-How Many Are Needed? A Reliability Analysis of 5 AAMC Core EPAs Implemented in the Internal Medicine Clerkship.

Authors:  Dana Dunne; Katherine Gielissen; Martin Slade; Yoon Soo Park; Michael Green
Journal:  J Gen Intern Med       Date:  2021-09-24       Impact factor: 6.473

5.  The Reliability of 2-Station Clerkship Objective Structured Clinical Examinations in Isolation and in Aggregate.

Authors:  Aaron W Bernard; Richard Feinn; Gabbriel Ceccolini; Robert Brown; Ilene Rosenberg; Walter Trymbulak; Christine VanCott
Journal:  J Med Educ Curric Dev       Date:  2019-07-22

Beijing Coyote Bioscience Co., Ltd. © 2022-2023.