Nicholas Raison1,2, Kamran Ahmed1, Nicola Fossati3, Nicolò Buffi4, Alexandre Mottrie5, Prokar Dasgupta1, Henk Van Der Poel6. 1. MRC Centre for Transplantation, Faculty of Life Sciences and Medicine, King's College London, London, UK. 2. The London Clinic, London, UK. 3. IRCCS Ospedale San Raffaele, Milan, Italy. 4. Humanitas Research Hospital, Milan, Italy. 5. OLV Hospital, Aalst, Belgium. 6. Netherlands Cancer Institute, Amsterdam, The Netherlands.
Abstract
OBJECTIVES: To develop benchmark scores of competency for use within a competency-based virtual reality (VR) robotic training curriculum. SUBJECTS AND METHODS: This longitudinal, observational study analysed results from nine European Association of Urology hands-on training courses in VR simulation. In all, 223 participants, ranging from novice to expert robotic surgeons, completed 1565 exercises. Competency was set at 75% of the mean expert score. Benchmark scores were calculated for all general performance metrics generated by the simulator. Assessment exercises were selected by expert consensus and through learning-curve analysis; three basic skill and two advanced skill exercises were identified. RESULTS: Benchmark scores based on expert performance offered viable targets for novice and intermediate trainees in robotic surgery. Novice participants met the competency standards for most basic skill exercises; the advanced exercises, however, proved significantly more challenging. Intermediate participants performed better across the seven metrics but still did not achieve the benchmark standard in the more difficult exercises. CONCLUSION: Benchmark scores derived from expert performance offer relevant and challenging targets for trainees to achieve during VR simulation training. Objective feedback allows both participants and trainers to monitor educational progress and ensures that training remains effective. Furthermore, the well-defined goals set through benchmarking give trainees clear targets and enable the move to a more efficient competency-based curriculum.
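The benchmarking rule described in the methods — competency set at 75% of the mean expert score — can be sketched as a short calculation. This is an illustrative sketch only, not the study's actual analysis code; all scores and function names below are hypothetical, and it assumes a metric where higher scores are better (some simulator metrics, such as time or economy of motion, would need the comparison inverted).

```python
def benchmark(expert_scores):
    """Competency benchmark: 75% of the mean expert score (per the study's definition)."""
    return 0.75 * sum(expert_scores) / len(expert_scores)

def meets_competency(trainee_score, expert_scores):
    """True if the trainee's score reaches the benchmark (assumes higher = better)."""
    return trainee_score >= benchmark(expert_scores)

# Hypothetical expert scores on one simulator metric
experts = [92.0, 88.0, 95.0, 90.0]        # mean = 91.25
print(benchmark(experts))                  # 68.4375
print(meets_competency(70.0, experts))     # True
print(meets_competency(60.0, experts))     # False
```

In practice the same rule would be applied per metric and per exercise, yielding the set of benchmark scores reported in the study.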