PURPOSE: This study assessed a clinical performance evaluation tool for use in a simulator-based testing environment. METHOD: Twenty-three subjects were evaluated during five standardized encounters using a patient simulator (six emergency medicine students, seven house officers, ten chief resident-fellows). Performance in each 15-minute session was compared with performance on an identical number of oral objective-structured clinical examination (OSCE) sessions used as controls. Each was scored by a faculty rater using a scoring system previously validated for oral certification examinations in emergency medicine (eight skills rated 1-8; passing = 5.75). RESULTS: On both simulator exams and oral controls, chief resident-fellows earned (mean) "passing" scores [sim = 6.4 (95% CI: 6.0-6.8), oral = 6.4 (95% CI: 6.1-6.7)]; house officers earned "borderline" scores [sim = 5.6 (95% CI: 5.2-5.9), oral = 5.5 (95% CI: 5.0-5.9)]; and students earned "failing" scores [sim = 4.3 (95% CI: 3.8-4.7), oral = 4.5 (95% CI: 3.8-5.1)]. There were significant differences among mean scores for the three cohorts, for both oral and simulator test arms (p < .01). CONCLUSIONS: In this pilot, a standardized oral OSCE scoring system performed equally well in a simulator-based testing environment.
Authors: James A Gordon; Erik K Alexander; Steven W Lockley; Erin Flynn-Evans; Suresh K Venkatan; Christopher P Landrigan; Charles A Czeisler. Journal: Acad Med. Date: 2010-10. Impact factor: 6.893
Authors: Rosemarie Fernandez; Dennis Parker; James S Kalus; Douglas Miller; Scott Compton. Journal: Am J Pharm Educ. Date: 2007-06-15. Impact factor: 2.047
Authors: Mark J Bullard; Anthony J Weekes; Randolph J Cordle; Sean M Fox; Catherine M Wares; Alan C Heffner; Lisa D Howley; Deborah Navedo. Journal: AEM Educ Train. Date: 2018-12-21