BACKGROUND: Many surgical training programs are introducing virtual-reality laparoscopic simulators into their curricula. If a surgical simulator is to be used to determine when a trainee has reached an "expert" level of performance, its evaluation metrics must accurately reflect varying levels of skill. The ability of a metric to differentiate novice from expert performance is referred to as construct validity. The present study was undertaken to determine whether the LapMentor's metrics demonstrate construct validity.

METHODS: Medical students, residents, and faculty laparoscopic surgeons (n = 5-14 per group) performed 5 consecutive repetitions of 6 laparoscopic skills tasks: 30° Camera Manipulation, Eye-Hand Coordination, Clipping/Grasping, Cutting, Electrocautery, and Translocation of Objects. The LapMentor measured performance on 4 to 12 parameters per task. Mean performance for each parameter was compared between subject groups for the first and fifth repetitions. Pairwise comparisons among the 3 groups were made by post hoc t-tests with the Bonferroni correction. Significance was set at P < 0.05.

RESULTS: Of the 6 tasks evaluated, only the Eye-Hand Coordination task (3/12 parameters) and the Clipping/Grasping task (1/7 parameters) showed expert-level discrimination when performance was compared after completion of 1 repetition. Comparison of fifth-repetition performance (representing the plateau of the learning curves) demonstrated that the parameters Time and Score had expert-level discrimination on the Eye-Hand Coordination task, and Time on the Cutting task. The remaining LapMentor tasks did not differentiate level of expertise based on the built-in metrics on either repetition 1 or 5.

CONCLUSIONS: The majority of the LapMentor tasks' metrics were unable to differentiate between laparoscopic experts and less skilled subjects; performance on those tasks may therefore not accurately reflect a subject's true level of ability. Feedback to the manufacturer about these findings may encourage the development of evaluation parameters with greater sensitivity.
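To make the statistical procedure in METHODS concrete, the sketch below shows pairwise post hoc t-tests across three groups with a Bonferroni-adjusted significance threshold. The group names and score values are hypothetical illustrations, not data from the study; only the test structure (3 pairwise comparisons, alpha = 0.05, Bonferroni adjustment) follows the abstract.

```python
# Minimal sketch of the pairwise comparison scheme described in METHODS.
# All numbers below are invented for illustration; they are NOT study data.
from itertools import combinations

from scipy import stats

# Hypothetical task-completion times (seconds) for one LapMentor parameter.
groups = {
    "students":  [210, 195, 240, 225, 205, 230],
    "residents": [170, 185, 160, 175, 190],
    "faculty":   [120, 110, 135, 125, 115],
}

alpha = 0.05
pairs = list(combinations(groups, 2))   # 3 pairwise comparisons
bonferroni_alpha = alpha / len(pairs)   # Bonferroni-adjusted threshold

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < bonferroni_alpha else "n.s."
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f} ({verdict})")
```

Under this scheme, a parameter would be credited with expert-level discrimination only if the faculty group differed significantly from both less experienced groups at the adjusted threshold.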