BACKGROUND: A novel computer simulator is now commercially available for robotic surgery using the da Vinci® System (Intuitive Surgical, Sunnyvale, CA). Initial investigations into its utility have been limited by a lack of understanding of which of the many provided skills modules and metrics are useful for evaluation. In addition, construct validity testing has relied on medical students as the "novice" group, a clinically irrelevant cohort given the complexity of robotic surgery. This study systematically evaluated the simulator's skills tasks and metrics and established face, content, and construct validity using a relevant novice group.

METHODS: Expert surgeons deconstructed the task of performing robotic surgery into eight separate skills. The content of the 33 modules provided by the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA) was then evaluated against these deconstructed skills, and 8 of the 33 modules were determined to be unique. These eight tasks were used to evaluate the performance of 46 surgeons and trainees on the simulator (25 novices, 8 intermediates, and 13 experts). Novice surgeons were general surgery and urology residents or practicing surgeons with clinical experience in open and laparoscopic surgery but limited exposure to robotics. Performance was measured using 85 metrics across all eight tasks.

RESULTS: Face and content validity were confirmed using global rating scales. Of the 85 metrics provided by the simulator, 11 were found to be unique, and these were used for further analysis. Experts performed significantly better than novices in all eight tasks and on nearly every metric. Intermediates were inconsistently better than novices, with only four tasks showing a significant difference in performance. Intermediate and expert performance did not differ significantly.

CONCLUSION: This study systematically determined the important modules and metrics on the da Vinci Skills Simulator and used them to demonstrate face, content, and construct validity with clinically relevant novice, intermediate, and expert groups. These data will be used to develop proficiency-based training programs on the simulator and to investigate predictive validity.
Authors: Jason Y Lee; Phillip Mucksavage; David C Kerbl; Victor B Huynh; Mohamed Etafy; Elspeth M McDougall
Journal: J Urol
Date: 2012-01-20
Impact factor: 7.450