Xuan Anh Nguyen1, Damir Ljuhar2, Maurizio Pacilli2, Ramesh Mark Nataraja2, Sunita Chauhan3. 1. Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia. 2. Department of Surgical Simulation, Monash Children's Hospital, Melbourne, Australia; Department of Paediatrics, School of Clinical Sciences, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia. 3. Department of Mechanical and Aerospace Engineering, Monash University, Clayton, Victoria, 3800, Australia. Electronic address: Sunita.Chauhan@monash.edu.
Abstract
BACKGROUND AND OBJECTIVES: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skill evaluation system. METHODS: We propose a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit (IMU) sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skill tasks. JIGSAWS, a publicly available robot-assisted surgical training dataset, was used to evaluate the generalization of our deep network model. Fifteen participants (4 experts, 4 intermediates and 7 novices) were recruited into the study. RESULTS: The proposed deep model achieved an accuracy of 98.2%. On the JIGSAWS dataset, our method outperformed several existing approaches, with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle passing, and knot tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery. CONCLUSIONS: This study demonstrated the potential of the proposed deep network model to learn discriminative features that distinguish different surgical skill levels.
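The abstract does not specify the network architecture, but a common choice for classifying skill level from raw multi-channel IMU time series is a 1-D convolutional model with temporal pooling. The following is a minimal numpy sketch of that general idea, not the authors' actual model: the channel count (6-axis IMU), window length, filter sizes, and the three-class output (novice/intermediate/expert) are all illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution: x is (channels, time), w is (filters, channels, width)."""
    f, c, k = w.shape
    t = x.shape[1] - k + 1
    out = np.empty((f, t))
    for i in range(t):
        # Each filter correlates with a k-sample slice across all IMU channels
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1]))
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(window, w_conv, w_dense):
    h = np.maximum(conv1d(window, w_conv), 0.0)  # ReLU activation
    pooled = h.mean(axis=1)                      # global average pooling over time
    return softmax(w_dense @ pooled)             # probabilities over skill classes

# Hypothetical 6-channel IMU window (3-axis accelerometer + 3-axis gyroscope), 200 samples
window = rng.standard_normal((6, 200))
w_conv = rng.standard_normal((8, 6, 5)) * 0.1    # 8 filters of width 5 (illustrative)
w_dense = rng.standard_normal((3, 8)) * 0.1      # novice / intermediate / expert
probs = classify(window, w_conv, w_dense)
```

In practice the weights would be learned end-to-end from labeled trials, which is what allows the model to work on raw motion data with minimal preprocessing.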