OBJECTIVE: In this paper, we evaluate the face, content and construct validity of the da Vinci Surgical Skills Simulator (dVSSS) across 3 surgical disciplines. METHODS: In total, 48 participants from urology, gynecology and general surgery participated in the study as novices (0 robotic cases performed), intermediates (1-74) or experts (≥75). Each participant completed 9 tasks (peg board level 2, match board level 2, needle targeting, ring and rail level 2, dots and needles level 1, suture sponge level 2, energy dissection level 1, ring walk level 3 and tubes). The Mimic Technologies software scored each task from 0 (worst) to 100 (best) using several predetermined metrics. Face and content validity were evaluated by a questionnaire administered after task completion. The Wilcoxon test was used for pairwise comparisons. RESULTS: The expert group comprised 6 attending surgeons. The intermediate group included 4 attending surgeons, 3 fellows and 5 residents. The novice group included 1 attending surgeon, 1 fellow, 13 residents, 13 medical students and 2 research assistants. The median numbers of robotic cases performed by experts and intermediates were 250 and 9, respectively. The median overall realism score (face validity) was 8/10. Experts rated the usefulness of the simulator as a training tool for residents (content validity) at 8.5/10. For construct validity, experts outperformed novices in all 9 tasks (p < 0.05). Intermediates outperformed novices in 7 of 9 tasks (p < 0.05); there were no significant differences in the energy dissection and ring walk tasks. Finally, experts scored significantly better than intermediates in only 3 of 9 tasks (match board, dots and needles, and energy dissection) (p < 0.05). CONCLUSIONS: This study supports the face, content and construct validity of the dVSSS across urology, gynecology and general surgery. A larger sample size and more complex tasks are needed to further differentiate intermediates from experts.
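The pairwise group comparisons described above can be sketched in Python. The scores below are synthetic placeholders, not the study's data, and SciPy's Mann-Whitney U test (equivalent to the Wilcoxon rank-sum test for independent groups) stands in for the rank-based pairwise comparison; group sizes are illustrative only.

```python
# Hedged sketch: pairwise Wilcoxon rank-sum (Mann-Whitney U) comparisons
# between skill groups on a simulator task scored 0 (worst) to 100 (best).
# All scores below are synthetic illustrations, NOT the study's data.
from scipy.stats import mannwhitneyu

novices = [35, 42, 50, 38, 45, 41, 48, 39]
intermediates = [62, 70, 66, 74, 68, 71]
experts = [85, 90, 88, 92, 86, 89]

# One-sided test that the more experienced group scores higher.
comparisons = {
    "experts vs novices": (experts, novices),
    "intermediates vs novices": (intermediates, novices),
    "experts vs intermediates": (experts, intermediates),
}
for name, (a, b) in comparisons.items():
    stat, p = mannwhitneyu(a, b, alternative="greater")
    print(f"{name}: U = {stat:.1f}, p = {p:.4f}")
```

With real data, one comparison would be run per task and per group pair, and p < 0.05 would be read as a significant performance difference, as in the construct-validity results above.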