
SurgAI: deep learning for computerized laparoscopic image understanding in gynaecology.

Sabrina Madad Zadeh1,2, Tom Francois2, Lilian Calvet2, Pauline Chauvet1,2, Michel Canis1,2, Adrien Bartoli2, Nicolas Bourdel3,4.   

Abstract

BACKGROUND: In laparoscopy, the digital camera offers surgeons the opportunity to receive support from image-guided surgery systems. Such systems require image understanding, the ability for a computer to understand what the laparoscope sees. Image understanding has recently progressed owing to the emergence of artificial intelligence and especially deep learning techniques. However, the state of the art of deep learning in gynaecology only offers image-based detection, reporting the presence or absence of an anatomical structure, without finding its location. A solution to the localisation problem is given by the concept of semantic segmentation, giving the detection and pixel-level location of a structure in an image. The state-of-the-art results in semantic segmentation are achieved by deep learning, whose usage requires a massive amount of annotated data. We propose the first dataset dedicated to this task and the first evaluation of deep learning-based semantic segmentation in gynaecology.
METHODS: We used the deep learning method called Mask R-CNN. Our dataset comprises 461 laparoscopic images manually annotated with three classes: uterus, ovaries and surgical tools. We split our dataset into 361 images to train Mask R-CNN and 100 images to evaluate its performance.
RESULTS: Segmentation accuracy is reported as the percentage of overlap between the regions segmented by Mask R-CNN and the manually annotated ones. The accuracy is 84.5%, 29.6% and 54.5% for the uterus, ovaries and surgical tools, respectively. Automatic detection of these structures was then inferred from the semantic segmentation results, which led to state-of-the-art detection performance except for the ovaries. Specifically, the detection accuracy is 97%, 24% and 86% for the uterus, ovaries and surgical tools, respectively.
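The RESULTS paragraph describes two quantities: a per-class pixel overlap between predicted and annotated masks, and an image-level detection inferred from the segmentation. The paper does not publish its exact formulas, so the sketch below is illustrative only: it assumes an IoU-style definition of "percentage of overlap" and a simple "class present if its predicted mask covers at least `min_pixels` pixels" detection rule; both the metric choice and the `min_pixels` threshold are assumptions, not the authors' method.

```python
import numpy as np

def overlap_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Overlap between a predicted and an annotated binary mask.

    The abstract reports a 'percentage of overlap'; intersection-over-union
    (IoU) is assumed here as one common definition of that quantity.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(np.logical_and(pred, gt).sum() / union)

def detected(pred: np.ndarray, min_pixels: int = 1) -> bool:
    """Infer image-level detection from a segmentation mask: the class is
    called 'present' when the predicted mask covers at least `min_pixels`
    pixels. The threshold is a hypothetical parameter for illustration."""
    return int(pred.astype(bool).sum()) >= min_pixels

# Toy 4x4 example: the prediction overlaps the ground truth on 2 of 3 pixels.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1] = True
gt[1, 2] = True
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1] = True
pred[3, 3] = True
score = overlap_score(pred, gt)   # intersection 2, union 4 -> 0.5
present = detected(pred)          # mask non-empty -> True
```

Multiplying `score` by 100 gives a percentage directly comparable to the per-class figures quoted above; a low overlap on small, variable structures such as the ovaries naturally drags down the inferred detection accuracy as well.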
CONCLUSION: Our preliminary results are very promising, given the relatively small size of our initial dataset. The creation of an international surgical database seems essential.


Keywords:  Artificial intelligence; Deep learning; Gynaecological surgery; Laparoscopic surgery

Year:  2020        PMID: 31996995     DOI: 10.1007/s00464-019-07330-8

Source DB:  PubMed          Journal:  Surg Endosc        ISSN: 0930-2794            Impact factor:   4.584


  7 in total

Review 1.  Computer-aided anatomy recognition in intrathoracic and -abdominal surgery: a systematic review.

Authors:  R B den Boer; C de Jongh; W T E Huijbers; T J M Jaspers; J P W Pluim; R van Hillegersberg; M Van Eijnatten; J P Ruurda
Journal:  Surg Endosc       Date:  2022-08-04       Impact factor: 3.453

Review 2.  The Advances in Computer Vision That Are Enabling More Autonomous Actions in Surgery: A Systematic Review of the Literature.

Authors:  Andrew A Gumbs; Vincent Grasso; Nicolas Bourdel; Roland Croner; Gaya Spolverato; Isabella Frigerio; Alfredo Illanes; Mohammad Abu Hilal; Adrian Park; Eyad Elyan
Journal:  Sensors (Basel)       Date:  2022-06-29       Impact factor: 3.847

3.  Towards a better understanding of annotation tools for medical imaging: a survey.

Authors:  Manar Aljabri; Manal AlAmir; Manal AlGhamdi; Mohamed Abdel-Mottaleb; Fernando Collado-Mesa
Journal:  Multimed Tools Appl       Date:  2022-03-25       Impact factor: 2.577

Review 4.  The Future in Standards of Care for Gynecologic Laparoscopic Surgery to Improve Training and Education.

Authors:  Vlad I Tica; Andrei A Tica; Rudy L De Wilde
Journal:  J Clin Med       Date:  2022-04-14       Impact factor: 4.964

Review 5.  Artificial intelligence assisted display in thoracic surgery: development and possibilities.

Authors:  Zhuxing Chen; Yudong Zhang; Zeping Yan; Junguo Dong; Weipeng Cai; Yongfu Ma; Jipeng Jiang; Keyao Dai; Hengrui Liang; Jianxing He
Journal:  J Thorac Dis       Date:  2021-12       Impact factor: 3.005

6.  Deep-Learning-Based Cerebral Artery Semantic Segmentation in Neurosurgical Operating Microscope Vision Using Indocyanine Green Fluorescence Videoangiography.

Authors:  Min-Seok Kim; Joon Hyuk Cha; Seonhwa Lee; Lihong Han; Wonhyoung Park; Jae Sung Ahn; Seong-Cheol Park
Journal:  Front Neurorobot       Date:  2022-01-12       Impact factor: 2.650

7.  Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks.

Authors:  Guillermo Sánchez-Brizuela; Francisco-Javier Santos-Criado; Daniel Sanz-Gobernado; Eusebio de la Fuente-López; Juan-Carlos Fraile; Javier Pérez-Turiel; Ana Cisnal
Journal:  Sensors (Basel)       Date:  2022-07-11       Impact factor: 3.847

