Literature DB >> 31319938

Surgical skill levels: Classification and analysis using deep neural network model and motion signals.

Xuan Anh Nguyen, Damir Ljuhar, Maurizio Pacilli, Ramesh Mark Nataraja, Sunita Chauhan.

Abstract

BACKGROUND AND OBJECTIVES: Currently, the assessment of surgical skills relies primarily on the observations of expert surgeons. This may be time-consuming, non-scalable, inconsistent and subjective. Therefore, an automated system that can objectively identify the actual skill level of a junior trainee is highly desirable. This study aims to design an automated surgical skills evaluation system.
METHODS: We propose to use a deep neural network model that can analyze raw surgical motion data with minimal preprocessing. A platform with inertial measurement unit sensors was developed, and participants with different levels of surgical experience were recruited to perform core open surgical skill tasks. JIGSAWS, a publicly available robot-based surgical training dataset, was used to evaluate the generalization of our deep network model. Fifteen participants (4 experts, 4 intermediates and 7 novices) were recruited into the study.
RESULTS: The proposed deep model achieved an accuracy of 98.2%. On the JIGSAWS dataset, our method outperformed several existing approaches, with accuracies of 98.4%, 98.4% and 94.7% for suturing, needle passing and knot tying, respectively. The experimental results demonstrated the applicability of this method in both open surgery and robot-assisted minimally invasive surgery.
CONCLUSIONS: This study demonstrated the potential of the proposed deep network model to learn discriminative features between different surgical skill levels.
Copyright © 2019 Elsevier B.V. All rights reserved.
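The METHODS section describes a deep network that classifies skill level directly from raw multi-channel motion signals with minimal preprocessing. The paper does not specify the architecture here, so the following is only a minimal illustrative sketch of that general idea, assuming a small stack of 1-D convolutions over IMU channels, global average pooling, and a softmax head over three skill classes (expert/intermediate/novice); all layer sizes, channel counts and names are hypothetical.

```python
import math
import random

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU over a multi-channel signal.
    x: list of channels, each a list of samples.
    w: list of output-channel filters, each a list of per-input-channel kernels.
    b: list of biases, one per output channel.
    """
    k = len(w[0][0])                      # kernel width
    t = len(x[0]) - k + 1                 # output length ("valid" mode)
    out = []
    for wo, bo in zip(w, b):
        row = []
        for i in range(t):
            s = bo
            for ci, kern in enumerate(wo):
                for j, wj in enumerate(kern):
                    s += wj * x[ci][i + j]
            row.append(max(s, 0.0))       # ReLU activation
        out.append(row)
    return out

def classify(x, conv_layers, fc_w, fc_b):
    """Conv stack -> global average pooling -> linear softmax head."""
    for w, b in conv_layers:
        x = conv1d(x, w, b)
    feat = [sum(ch) / len(ch) for ch in x]          # global average pooling
    logits = [bo + sum(wi * fi for wi, fi in zip(row, feat))
              for row, bo in zip(fc_w, fc_b)]
    m = max(logits)                                  # numerically stable softmax
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def rand_filters(out_ch, in_ch, k):
    """Random (untrained) filter bank, just to exercise the forward pass."""
    w = [[[random.gauss(0, 0.1) for _ in range(k)]
          for _ in range(in_ch)] for _ in range(out_ch)]
    return w, [0.0] * out_ch

random.seed(0)
x = [[random.gauss(0, 1) for _ in range(200)] for _ in range(6)]  # 6 IMU channels, 200 samples
layers = [rand_filters(8, 6, 5), rand_filters(16, 8, 5)]
fc_w = [[random.gauss(0, 0.1) for _ in range(16)] for _ in range(3)]
fc_b = [0.0, 0.0, 0.0]
probs = classify(x, layers, fc_w, fc_b)
print(len(probs), round(sum(probs), 6))  # 3 class probabilities summing to 1
```

The sketch only shows the forward pass on random weights; the actual model would be trained end-to-end on labeled trainee recordings, which is what lets it learn discriminative features without hand-crafted preprocessing.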

Keywords:  Deep neural network; Hand motion signals; Surgical education; Surgical skill assessment

Year:  2019        PMID: 31319938     DOI: 10.1016/j.cmpb.2019.05.008

Source DB:  PubMed          Journal:  Comput Methods Programs Biomed        ISSN: 0169-2607            Impact factor:   5.428


  5 in total

Review 1.  Machine learning in gastrointestinal surgery.

Authors:  Takashi Sakamoto; Tadahiro Goto; Michimasa Fujiogi; Alan Kawarai Lefor
Journal:  Surg Today       Date:  2021-09-24       Impact factor: 2.549

2.  An Intelligent Augmented Reality Training Framework for Neonatal Endotracheal Intubation.

Authors:  Shang Zhao; Xiao Xiao; Qiyue Wang; Xiaoke Zhang; Wei Li; Lamia Soghier; James Hahn
Journal:  Int Symp Mix Augment Real       Date:  2020-12-14

Review 3.  Computer Vision in the Surgical Operating Room.

Authors:  François Chadebecq; Francisco Vasconcelos; Evangelos Mazomenos; Danail Stoyanov
Journal:  Visc Med       Date:  2020-10-15

4.  Evolving robotic surgery training and improving patient safety, with the integration of novel technologies.

Authors:  I-Hsuan Alan Chen; Ahmed Ghazi; Ashwin Sridhar; Danail Stoyanov; Mark Slack; John D Kelly; Justin W Collins
Journal:  World J Urol       Date:  2020-11-06       Impact factor: 4.226

5.  Motion analysis of the JHU-ISI Gesture and Skill Assessment Working Set using Robotics Video and Motion Assessment Software.

Authors:  Alan Kawarai Lefor; Kanako Harada; Aristotelis Dosis; Mamoru Mitsuishi
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-10-06       Impact factor: 2.924
