
Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.

Hassan Al Hajj, Mathieu Lamard, Pierre-Henri Conze, Béatrice Cochener, Gwenolé Quellec.

Abstract

This paper investigates the automatic monitoring of tool usage during surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. The novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should therefore be trained to extract the visual features that are most useful in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed on a dataset of 50 cataract surgery videos, in which the usage of 21 surgical tools was manually annotated, and on a dataset of 80 cholecystectomy videos, in which the usage of 7 tools was manually annotated.
Very good classification performance is achieved on both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present information only).
Copyright © 2018 Elsevier B.V. All rights reserved.

Keywords:  Boosting; Cataract and cholecystectomy surgeries; Convolutional and recurrent neural networks; Tool usage monitoring; Video analysis

Year:  2018        PMID: 29778931     DOI: 10.1016/j.media.2018.05.001

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles: 9 in total

1.  CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions.

Authors:  Tom Vercauteren; Mathias Unberath; Nicolas Padoy; Nassir Navab
Journal:  Proc IEEE Inst Electr Electron Eng       Date:  2019-10-23       Impact factor: 10.961

2.  Real-time medical phase recognition using long-term video understanding and progress gate method.

Authors:  Yanyi Zhang; Ivan Marsic; Randall S Burd
Journal:  Med Image Anal       Date:  2021-09-03       Impact factor: 8.545

3.  A contextual detector of surgical tools in laparoscopic videos using deep learning.

Authors:  Babak Namazi; Ganesh Sankaranarayanan; Venkat Devarajan
Journal:  Surg Endosc       Date:  2021-02-08       Impact factor: 4.584

4.  Computer Vision in the Operating Room: Opportunities and Caveats.

Authors:  Lauren R Kennedy-Metz; Pietro Mascagni; Antonio Torralba; Roger D Dias; Pietro Perona; Julie A Shah; Nicolas Padoy; Marco A Zenati
Journal:  IEEE Trans Med Robot Bionics       Date:  2020-11-24

5.  Real-time surgical instrument detection in robot-assisted surgery using a convolutional neural network cascade.

Authors:  Zijian Zhao; Tongbiao Cai; Faliang Chang; Xiaolin Cheng
Journal:  Healthc Technol Lett       Date:  2019-11-26

6.  Application of artificial intelligence in cataract management: current and future directions. (Review)

Authors:  Laura Gutierrez; Jane Sujuan Lim; Li Lian Foo; Wei Yan Ng; Michelle Yip; Gilbert Yong San Lim; Melissa Hsing Yi Wong; Allan Fong; Mohamad Rosman; Jodhbir Singh Mehta; Haotian Lin; Darren Shu Jeng Ting; Daniel Shu Wei Ting
Journal:  Eye Vis (Lond)       Date:  2022-01-07

7.  Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks.

Authors:  Guillermo Sánchez-Brizuela; Francisco-Javier Santos-Criado; Daniel Sanz-Gobernado; Eusebio de la Fuente-López; Juan-Carlos Fraile; Javier Pérez-Turiel; Ana Cisnal
Journal:  Sensors (Basel)       Date:  2022-07-11       Impact factor: 3.847

8.  Real-Time Tool Detection for Workflow Identification in Open Cranial Vault Remodeling.

Authors:  Alicia Pose Díez de la Lastra; Lucía García-Duarte Sáenz; David García-Mato; Luis Hernández-Álvarez; Santiago Ochandiano; Javier Pascau
Journal:  Entropy (Basel)       Date:  2021-06-26       Impact factor: 2.524

9.  Video-based fully automatic assessment of open surgery suturing skills.

Authors:  Adam Goldbraikh; Anne-Lise D'Angelo; Carla M Pugh; Shlomi Laufer
Journal:  Int J Comput Assist Radiol Surg       Date:  2022-02-01       Impact factor: 3.421

