Erin M Corsini (1), Kyle G Mitchell (1), Tom C Nguyen (2), Ara A Vaporciyan (1), Mara B Antonoff (3). 1. Department of Thoracic and Cardiovascular Surgery, University of Texas MD Anderson Cancer Center, Houston, Tex. 2. Department of Cardiothoracic and Vascular Surgery, University of Texas Medical School at Houston, Memorial Hermann Hospital-Heart and Vascular Institute, Houston, Tex. 3. Department of Thoracic and Cardiovascular Surgery, University of Texas MD Anderson Cancer Center, Houston, Tex. Electronic address: mbantonoff@mdanderson.org.
Abstract
OBJECTIVES: Although in-training examinations provide surrogate data on qualifying exam readiness, use of mock oral examinations (MOEs) in cardiothoracic surgery training before the American Board of Thoracic Surgery certifying oral exam is not uniform. Although MOEs are prioritized by some institutions, the development and execution of these labor-intensive, time-consuming exams may be a barrier to others. Therefore, we aimed to develop an MOE program and to assess its educational value.

METHODS: We developed an institutional MOE program that mimicked the certification examination and was serially administered to 10 cardiothoracic surgery trainees from 2014 to 2018. Biannual MOE scores were reviewed, along with certifying examination pass rates. MOE data were available for curriculum development and trainee performance evaluations.

RESULTS: MOEs were conducted twice each academic year, with 4 exams administered during each individual's training. MOE program development required a significant up-front time commitment; thereafter, each MOE required approximately 24 total faculty hours and 4 administrator hours. Pass rates for sequential MOEs demonstrated gradual improvement, and the corresponding certifying exam pass rate was 100% for these same individuals. MOE data were routinely used for curriculum refinement, as well as for individual trainee feedback.

CONCLUSIONS: Standardized MOEs are useful educational adjuncts for assessing trainees' knowledge and readiness for certification exams, but they require significant coordination and time to develop an accurate, rigorous simulation mechanism. Although we recognize that improvement across serial MOEs is likely related to repeated exposure as well as an expanding fund of knowledge, we believe these results justify the use of this assessment tool in training.
Authors: Amy G Fiedler; Dominic Emerson; Erin A Gillaspie; Joshua L Hermsen; Melissa M Levack; Daniel P McCarthy; Smita Sihag; Stephanie G Worrell; Mara B Antonoff
Journal: JTCVS Open
Date: 2020-07-25