James Johnston, Maury Pinsk
Abstract
INTRODUCTION: The University of Manitoba's ambulatory pediatric clerkship transitioned from single in-training evaluation reports (ITERs) to daily encounter cards (DECs). The impact of this change on the quality of student assessment was unknown. Using the validated Completed Clinical Evaluation Report Rating (CCERR) scale, we compared the assessment quality of the single ITER with that of the DEC-based system. METHODS: Block randomization was used to select from a cohort of ITER- and DEC-based assessments at equivalent points in clerkship training. Data were transcribed, anonymized, and scored by two blinded raters using the CCERR. RESULTS: Inter-rater reliability for total CCERR scores was substantial (> 0.6). The mean total CCERR score for the DEC cohort was significantly higher than for the ITER cohort (25.2 vs. 16.8, p < 0.001), as were the mean scores for each item (2.81 vs. 1.86, p < 0.05). Multivariate logistic regression supported the significant influence of assessment method on assessment quality. CONCLUSIONS: The average quality of student assessments improved with the transition from an ITER-based system to a DEC-based system. However, the fact that the DEC cohort achieved only average CCERR scores suggests an unmet need for faculty development. © International Association of Medical Science Educators 2019.
Keywords: Daily evaluation cards; Feedback; Medical students; Quality improvement; Student assessment
Year: 2019
PMID: 34457660 PMCID: PMC8368482 DOI: 10.1007/s40670-019-00855-6
Source DB: PubMed Journal: Med Sci Educ ISSN: 2156-8650