Matthew D. McEvoy, William R. Hand, Cory M. Furse, Larry C. Field, Carlee A. Clark, Vivek K. Moitra, Paul J. Nietert, Michael F. O'Connor, Mark E. Nunnally. From the Department of Anesthesiology (M.D.M.), Vanderbilt University Medical Center, Nashville, TN; the Departments of Anesthesia and Perioperative Medicine (W.R.H., C.M.F., L.C.F., C.A.C.) and Public Health Sciences (P.J.N.), Medical University of South Carolina, Charleston, SC; the Department of Anesthesiology (V.K.M.), Columbia University Medical Center, New York, NY; and the Section of Critical Care Medicine (M.F.O.) and Department of Anesthesia and Critical Care (M.E.N.), University of Chicago, Chicago, IL.
Abstract
INTRODUCTION: Few valid and reliable grading checklists have been published for evaluating performance during simulated high-stakes perioperative event management. The purposes of this study were therefore to construct valid scoring checklists for a variety of perioperative emergencies and to determine the reliability of scores produced by these checklists during continuous video review.

METHODS: A group of anesthesiologists, intensivists, and educators created a set of simulation grading checklists for the assessment of the following scenarios: severe anaphylaxis, cerebrovascular accident, hyperkalemic arrest, malignant hyperthermia, and acute coronary syndrome. Checklist items were coded as critical or noncritical. Nonexpert raters evaluated 10 simulation videos in a random order, with each video being graded 4 times. A group of faculty experts also graded the videos to create a reference standard against which nonexpert ratings were compared. P < 0.05 was considered significant.

RESULTS: Team leaders in the simulation videos were scored by the expert panel as having performed 56.5% of all checklist items (range, 43.8%-84.0%) and 67.2% of the critical items (range, 30.0%-100%). Nonexpert raters agreed with the expert assessment 89.6% of the time (95% confidence interval, 87.2%-91.6%). No learning curve was found with repeated video assessment or checklist use. The κ values comparing nonexpert rater assessments with the reference standard averaged 0.76 (95% confidence interval, 0.71-0.81).

CONCLUSIONS: The findings indicate that the grading checklists described are valid and reliable and could be used in perioperative crisis management assessment.
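The abstract's reliability figures (raw percent agreement and Cohen's κ between nonexpert raters and the expert reference standard) come from standard two-rater agreement statistics on binary checklist items. A minimal sketch of those calculations, using hypothetical item-level scores rather than the study's data:

```python
# Illustrative two-rater agreement metrics for binary (performed / not
# performed) checklist items. Data below are hypothetical, not the study's.

def percent_agreement(a, b):
    """Fraction of items on which the two raters gave the same score."""
    assert len(a) == len(b) and len(a) > 0
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters scoring binary (0/1) items."""
    n = len(a)
    po = percent_agreement(a, b)            # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n       # each rater's "performed" rate
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (po - pe) / (1 - pe)             # undefined if pe == 1

# Hypothetical scores: 1 = item performed, 0 = item not performed
expert    = [1, 1, 0, 1, 0, 1, 1, 0]
nonexpert = [1, 0, 0, 1, 0, 1, 1, 1]

print(percent_agreement(expert, nonexpert))  # 0.75
print(cohens_kappa(expert, nonexpert))       # ≈ 0.467
```

κ corrects raw agreement for the agreement two raters would reach by chance, which is why the study's mean κ (0.76) sits below the raw 89.6% agreement figure.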