Ziheng Wang (1), Ann Majewicz Fey (2,3)
1. Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA. zihengwang@utdallas.edu
2. Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
3. Department of Surgery, UT Southwestern Medical Center, Dallas, TX 75390, USA
Abstract
PURPOSE: With the advent of robot-assisted surgery, the role of data-driven approaches that integrate statistics and machine learning is growing rapidly, with prominent interest in objective surgical skill assessment. However, most existing work requires translating robot motion kinematics into intermediate features or gesture segments, which are expensive to extract, inefficient, and demand significant domain-specific knowledge. METHODS: We propose an analytical deep learning framework for skill assessment in surgical training. A deep convolutional neural network is implemented to map multivariate time-series data of the motion kinematics to individual skill levels. RESULTS: We perform experiments on the public minimally invasive surgical robotic dataset, the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS). Our proposed learning model achieved competitive accuracies of 92.5%, 95.4%, and 91.3% on the standard training tasks of Suturing, Needle-passing, and Knot-tying, respectively. Without the need for engineered features or carefully tuned gesture segmentation, our model can successfully decode skill information from raw motion profiles via end-to-end learning. Moreover, the proposed model can reliably interpret skill within a 1-3 second window, without requiring observation of an entire training trial. CONCLUSION: This study highlights the potential of deep architectures for efficient online skill assessment in modern surgical training.
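To make the end-to-end mapping concrete, the sketch below shows one way a 1-D convolutional network could classify a short window of multivariate kinematic signals into skill levels. This is a minimal illustration, not the authors' exact architecture: the layer count, filter sizes, 76-channel input (the JIGSAWS kinematic variables), and 30 Hz window length are assumptions for demonstration only.

```python
# Minimal sketch: 1-D CNN mapping a fixed-length window of multivariate robot
# kinematics to one of three skill levels (novice / intermediate / expert).
# All architectural details are illustrative assumptions, not the paper's spec.
import torch
import torch.nn as nn

class SkillCNN(nn.Module):
    def __init__(self, in_channels: int = 76, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one vector
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time), e.g. a 1-3 s window of kinematic samples
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Example: classify a batch of 2 s windows sampled at an assumed 30 Hz (60 steps).
model = SkillCNN()
window = torch.randn(8, 76, 60)   # 8 windows, 76 kinematic channels each
logits = model(window)            # (8, 3) class scores
skill = logits.argmax(dim=1)      # predicted skill level per window
```

Because the classifier operates on short windows rather than whole trials, this style of model can in principle produce skill estimates online as new kinematic samples stream in.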