BACKGROUND: Assessment of professionalism in undergraduate medical education is challenging. One approach that has not been well studied in this context is performance-based examinations. PURPOSE: This study sought to investigate the reliability of standardized patients' ratings of students' professionalism in performance-based examinations. METHODS: Twenty students were observed on 4 simulated cases involving professional challenges; 9 raters evaluated each encounter on 21 professionalism items. Correlational and multivariate generalizability (G) analyses were conducted. RESULTS: G coefficients were .75, .53, and .68 for physicians, standardized patients (SPs), and lay raters, respectively. The composite G coefficient across all raters reached an acceptable level of .86. Results indicated SP raters were more variable than other rater types in the severity with which they rated students, although the rank ordering of students was consistent among SPs. CONCLUSIONS: SPs' ratings were less reliable and consistent than physician or lay ratings, although the SPs rank ordered students more consistently than the other rater types.
Authors: Matthew D McEvoy; William R Hand; Cory M Furse; Larry C Field; Carlee A Clark; Vivek K Moitra; Paul J Nietert; Michael F O'Connor; Mark E Nunnally Journal: Simul Healthc Date: 2014-10 Impact factor: 1.929
Authors: Matthew D McEvoy; Jeremy C Smalley; Paul J Nietert; Larry C Field; Cory M Furse; John W Blenko; Benjamin G Cobb; Jenna L Walters; Allen Pendarvis; Nishita S Dalal; John J Schaefer Journal: Simul Healthc Date: 2012-08 Impact factor: 1.929
Authors: Ann Blair Kennedy; Cindy Nessim Youssef Riyad; Ryan Ellis; Perry R Fleming; Mallorie Gainey; Kara Templeton; Anna Nourse; Virginia Hardaway; April Brown; Pam Evans; Nabil Natafgi Journal: J Particip Med Date: 2022-08-30
Authors: Kamran Sattar; Muhamad Saiful Bahri Yusoff; Wan Nor Arifin; Mohd Azhar Mohd Yasin; Mohd Zarawi Mat Nor Journal: Pak J Med Sci Date: 2021 Jul-Aug Impact factor: 1.088