
Image Compositing for Segmentation of Surgical Tools Without Manual Annotations.

Luis C Garcia-Peraza-Herrera, Lucas Fidon, Claudia D'Ettorre, Danail Stoyanov, Tom Vercauteren, Sebastien Ourselin.   

Abstract

Producing manual, pixel-accurate, image segmentation labels is tedious and time-consuming. This is often a rate-limiting factor when large amounts of labeled images are required, such as for training deep convolutional networks for instrument-background segmentation in surgical scenes. No large datasets comparable to industry standards in the computer vision community are available for this task. To circumvent this problem, we propose to automate the creation of a realistic training dataset by exploiting techniques stemming from special effects and harnessing them to target training performance rather than visual appeal. Foreground data is captured by placing sample surgical instruments over a chroma key (a.k.a. green screen) in a controlled environment, thereby making extraction of the relevant image segment straightforward. Multiple lighting conditions and viewpoints can be captured and introduced in the simulation by moving the instruments and camera and modulating the light source. Background data is captured by collecting videos that do not contain instruments. In the absence of pre-existing instrument-free background videos, minimal labeling effort is required, just to select frames that do not contain surgical instruments from videos of surgical interventions freely available online. We compare different methods to blend instruments over tissue and propose a novel data augmentation approach that takes advantage of the plurality of options. We show that by training a vanilla U-Net on semi-synthetic data only and applying a simple post-processing, we are able to match the results of the same network trained on a publicly available manually labeled real dataset.
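The chroma-key extraction and compositing step described in the abstract can be sketched roughly as follows. This is a minimal illustration using NumPy: the green-dominance threshold, function names, and hard paste-over blending are assumptions for demonstration, not the authors' actual pipeline, which compares several blending methods and uses the plurality of blends as data augmentation.

```python
import numpy as np

def green_screen_mask(rgb, margin=30):
    """Binary foreground mask from a green-screen frame.

    A pixel is treated as background (chroma key) when its green channel
    dominates both red and blue by at least `margin`; everything else is
    assumed to be instrument (foreground). Returns True where instrument.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    background = (g - r > margin) & (g - b > margin)
    return ~background

def composite(fg_rgb, bg_rgb, mask):
    """Paste masked foreground pixels over an equally sized background frame."""
    out = bg_rgb.copy()
    out[mask] = fg_rgb[mask]
    return out

# Example: a 2x2 green-screen frame with one "instrument" pixel,
# composited over a dark tissue-like background.
fg = np.zeros((2, 2, 3), dtype=np.uint8)
fg[:, :] = (0, 255, 0)        # chroma-key green
fg[0, 0] = (200, 50, 50)      # instrument pixel
bg = np.full((2, 2, 3), 10, dtype=np.uint8)

mask = green_screen_mask(fg)
semi_synthetic = composite(fg, bg, mask)
```

The resulting mask doubles as a pixel-accurate segmentation label for free, which is the point of the green-screen capture: no manual annotation is needed for the foreground.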


Year:  2021        PMID: 33556005      PMCID: PMC8092331          DOI: 10.1109/TMI.2021.3057884

Source DB:  PubMed          Journal:  IEEE Trans Med Imaging        ISSN: 0278-0062            Impact factor:   10.048


References:  8 in total

1.  Towards image guided robotic surgery: multi-arm tracking through hybrid localization.

Authors:  David Morgan Kwartowitz; Michael I Miga; S Duke Herrell; Robert L Galloway
Journal:  Int J Comput Assist Radiol Surg       Date:  2009-03-19       Impact factor: 2.924

2.  Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos.

Authors:  Chinedu Innocent Nwoye; Didier Mutter; Jacques Marescaux; Nicolas Padoy
Journal:  Int J Comput Assist Radiol Surg       Date:  2019-04-09       Impact factor: 2.924

3.  Surgical data science for next-generation interventions.

Authors:  Lena Maier-Hein; Swaroop S Vedula; Stefanie Speidel; Nassir Navab; Ron Kikinis; Adrian Park; Matthias Eisenmann; Hubertus Feussner; Germain Forestier; Stamatia Giannarou; Makoto Hashizume; Darko Katic; Hannes Kenngott; Michael Kranzfelder; Anand Malpani; Keno März; Thomas Neumuth; Nicolas Padoy; Carla Pugh; Nicolai Schoch; Danail Stoyanov; Russell Taylor; Martin Wagner; Gregory D Hager; Pierre Jannin
Journal:  Nat Biomed Eng       Date:  2017-09       Impact factor: 25.671

4.  Feature classification for tracking articulated surgical tools.

Authors:  Austin Reiter; Peter K Allen; Tao Zhao
Journal:  Med Image Comput Comput Assist Interv       Date:  2012

5.  Atlas encoding by randomized forests for efficient label propagation.

Authors:  Darko Zikic; Ben Glocker; Antonio Criminisi
Journal:  Med Image Comput Comput Assist Interv       Date:  2013

6.  Real-time ultrasound transducer localization in fluoroscopy images by transfer learning from synthetic training data.

Authors:  Tobias Heimann; Peter Mountney; Matthias John; Razvan Ionasec
Journal:  Med Image Anal       Date:  2014-05-05       Impact factor: 8.545

7.  Enabling machine learning in X-ray-based procedures via realistic simulation of image formation.

Authors:  Mathias Unberath; Jan-Nico Zaech; Cong Gao; Bastian Bier; Florian Goldmann; Sing Chun Lee; Javad Fotouhi; Russell Taylor; Mehran Armand; Nassir Navab
Journal:  Int J Comput Assist Radiol Surg       Date:  2019-06-11       Impact factor: 2.924

8.  Exploiting the potential of unlabeled endoscopic video data with self-supervised learning.

Authors:  Tobias Ross; David Zimmerer; Anant Vemuri; Fabian Isensee; Manuel Wiesenfarth; Sebastian Bodenstedt; Fabian Both; Philip Kessler; Martin Wagner; Beat Müller; Hannes Kenngott; Stefanie Speidel; Annette Kopp-Schneider; Klaus Maier-Hein; Lena Maier-Hein
Journal:  Int J Comput Assist Radiol Surg       Date:  2018-04-27       Impact factor: 2.924

Cited by:  2 in total

1.  Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets.

Authors:  Mengjie Shi; Tianrui Zhao; Simeon J West; Adrien E Desjardins; Tom Vercauteren; Wenfeng Xia
Journal:  Photoacoustics       Date:  2022-04-07

2.  Robotic Endoscope Control Via Autonomous Instrument Tracking.

Authors:  Caspar Gruijthuijsen; Luis C Garcia-Peraza-Herrera; Gianni Borghesan; Dominiek Reynaerts; Jan Deprest; Sebastien Ourselin; Tom Vercauteren; Emmanuel Vander Poorten
Journal:  Front Robot AI       Date:  2022-04-11
