BACKGROUND AND OBJECTIVES: Demand is growing for objective assessment of manual skills and competencies in invasive procedures. The aim of this study was to validate an objective tool for assessing residents' skill in performing epidural anesthesia by use of a global assessment scale and a 3-scale, 27-stage checklist. We wished to demonstrate that this tool can differentiate operators with different levels of training. METHODS: Second-year anesthesia residents were recruited. Their previous experience was assessed by questionnaire. They were repeatedly videotaped performing epidural anesthesia over a 6-month period. Videotaping was done in a blinded manner that masked the identity and level of training of the residents. Three blinded, independent examiners evaluated each session by use of a specifically devised assessment tool, consisting of a global rating scale and a 3-scale, 27-stage checklist, to judge the skill level and grade the videotaped sessions. RESULTS: Twenty-one sessions by 6 residents were videotaped over 6 months. Interrater reliability for the different checklist and global-rating form items showed a moderate to high degree of agreement for most stages. Total scores demonstrated almost perfect agreement between examiners (kappa/ICC +/- SE = 0.90 +/- 0.03 and 0.83 +/- 0.13, respectively; P < .0001). To test whether higher total scores are associated with greater experience, a series of repeated-measures ANCOVAs was performed. In both the global-rating form and the checklist, a significant relation was found between total scores and the number of epidurals performed (checklist: P < .0001; global rating: P < .0001). CONCLUSIONS: The results of our study show that scores on a system consisting of a global-rating form and a task-specific checklist had a significant relation to the number of epidural insertions performed (i.e., experience). The interrater reliability of these assessment tools was very strong.
Evaluation of technical skills by an objective tool under direct observation, as opposed to a laboratory setting, may create a more reliable standard of assessment. Furthermore, residency programs could use these evaluations to identify deficiencies in teaching programs and trainees who require extra instruction.
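The interrater agreement reported above is expressed as Cohen's kappa (for categorical checklist items) alongside an ICC. As a purely illustrative sketch, not the authors' analysis, the kappa statistic for two raters scoring the same checklist items can be computed as follows; the example ratings are hypothetical:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical checklist ratings (1 = stage done correctly, 0 = not done)
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 3))  # 0.737
```

Values near 0.90, as in the study's total scores, fall in the "almost perfect" band of the commonly used Landis-Koch interpretation scale.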
Authors: David F Pepley; Hong-En Chen; Yichun Tang; Sanjib Das Adhikary; Scarlett R Miller; Jason Z Moore Journal: IEEE Trans Haptics Date: 2019-05-02 Impact factor: 2.487
Authors: Grace Lim; Robert G Krohner; David G Metro; Bedda L Rosario; Jong-Hyeon Jeong; Tetsuro Sakai Journal: Anesth Analg Date: 2016-05 Impact factor: 5.108
Authors: Julián Varas; Pablo Achurra; Felipe León; Richard Castillo; Natalia De La Fuente; Rajesh Aggarwal; Leticia Clede; María P Bravo; Marcia Corvetto; Rodrigo Montaña Journal: Ann Surg Innov Res Date: 2016-02-12
Authors: Karthikeyan Kallidaikurichi Srinivasan; Anthony Gallagher; Niall O'Brien; Vinod Sudir; Nick Barrett; Raymund O'Connor; Francesca Holt; Peter Lee; Brian O'Donnell; George Shorten Journal: BMJ Open Date: 2018-10-15 Impact factor: 2.692
Authors: Marco Scorzoni; Gian Luigi Gonnella; Emanuele Capogna; Matteo Velardo; Pietro Paolo Giuri; Mariano Ciancia; Giorgio Capogna; Gaetano Draisci Journal: Anesthesiol Res Pract Date: 2022-09-02