
Simulation-based Assessment to Reliably Identify Key Resident Performance Attributes.

Richard H Blum, Sharon L Muret-Wagstaff, John R Boulet, Jeffrey B Cooper, Emil R Petrusa, Keith H Baker, Galina Davidyuk, Jennifer L Dearden, David M Feinstein, Stephanie B Jones, William R Kimball, John D Mitchell, Robert L Nadelberg, Sarah H Wiser, Meredith A Albrecht, Amanda K Anastasi, Ruma R Bose, Laura Y Chang, Deborah J Culley, Lauren J Fisher, Meera Grover, Suzanne B Klainer, Rikante Kveraga, Jeffrey P Martel, Shannon S McKenna, Rebecca D Minehart, John D Mitchell, Jeremi R Mountjoy, John B Pawlowski, Robert N Pilon, Douglas C Shook, David A Silver, Carol A Warfield, Katherine L Zaleski.

Abstract

BACKGROUND: Obtaining reliable and valid information on resident performance is critical to patient safety and training program improvement. The goals were to characterize important anesthesia resident performance gaps that are not typically evaluated, and to further validate scores from a multiscenario simulation-based assessment.
METHODS: Seven high-fidelity scenarios reflecting core anesthesiology skills were administered to 51 first-year residents (CA-1s) and 16 third-year residents (CA-3s) from three residency programs. Twenty trained attending anesthesiologists rated resident performances using a seven-point behaviorally anchored rating scale for five domains: (1) formulate a clear plan, (2) modify the plan under changing conditions, (3) communicate effectively, (4) identify performance improvement opportunities, and (5) recognize limits. A second rater assessed 10% of encounters. Scores and variances for each domain, each scenario, and the total were compared. Low domain ratings (1, 2) were examined in detail.
RESULTS: Interrater agreement was 0.76; reliability of the seven-scenario assessment was r = 0.70. CA-3s had a significantly higher average total score (4.9 ± 1.1 vs. 4.6 ± 1.1, P = 0.01, effect size = 0.33). CA-3s significantly outscored CA-1s for five of seven scenarios and domains 1, 2, and 3. CA-1s had a significantly higher proportion of worrisome ratings than CA-3s (chi-square = 24.1, P < 0.01, effect size = 1.50). Ninety-eight percent of residents rated the simulations more educational than an average day in the operating room.
CONCLUSIONS: Sensitivity of the assessment to CA-1 versus CA-3 performance differences for most scenarios and domains supports validity. No differences, by experience level, were detected for two domains associated with reflective practice. Smaller score variances for CA-3s likely reflect a training effect; however, worrisome performance scores for both CA-1s and CA-3s suggest room for improvement.


Year:  2018        PMID: 29369062     DOI: 10.1097/ALN.0000000000002091

Source DB:  PubMed          Journal:  Anesthesiology        ISSN: 0003-3022            Impact factor:   7.892


Related articles: 4 in total

1.  Critical Appraisal of Anesthesiology Educational Research for 2018.

Authors:  Lara Zisblatt; Ashley E Grantham; Dawn Dillman; Amy N DiLorenzo; Mark P MacEachern; Amy Miller Juve; Emily E Peoples; Fei Chen
Journal:  J Educ Perioper Med       Date:  2020-01-01

2.  Comparing Real-time Versus Delayed Video Assessments for Evaluating ACGME Sub-competency Milestones in Simulated Patient Care Environments.

Authors:  Robert Isaak; Marjorie Stiegler; Gene Hobbs; Susan M Martinelli; David Zvara; Harendra Arora; Fei Chen
Journal:  Cureus       Date:  2018-03-04

3.  Identifying patient safety competences among anesthesiology residents: systematic review. [Review]

Authors:  Fernanda Silva Hojas Pereira; Daniela Bianchi Garcia; Elaine Rossi Ribeiro
Journal:  Braz J Anesthesiol       Date:  2022-02-03

4.  General anesthesia for emergency cesarean delivery: simulation-based evaluation of residents.

Authors:  Júlio Alberto Rodrigues Maldonado Teixeira; Cláudia Alves; Conceição Martins; Joana Carvalhas; Margarida Pereira
Journal:  Braz J Anesthesiol       Date:  2021-04-30

Beijing Coyote Bioscience Co., Ltd. © 2022-2023.