Literature DB >> 33930829

EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos.

Kutsev Bengisu Ozyoruk, Guliz Irem Gokceler, Taylor L Bobrow, Gulfize Coskun, Kagan Incetan, Yasin Almalioglu, Faisal Mahmood, Eva Curto, Luis Perdigoto, Marina Oliveira, Hasan Sahin, Helder Araujo, Henrique Alexandrino, Nicholas J Durr, Hunter B Gilbert, Mehmet Turan.

Abstract

Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, and a recording of a phantom colon acquired with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex-vivo porcine gastrointestinal (GI)-tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine; four of these contain polyp-mimicking elevations created by an expert gastroenterologist. To verify the applicability of this data to real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from the stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable and highly textured tissue regions. The proposed approach uses a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes commonly seen in endoscopic videos.
To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state-of-the-art: SC-SfMLearner, Monodepth2, and SfMLearner. The codes and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.
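The brightness-aware photometric loss mentioned in the abstract can be illustrated with a minimal sketch. The mean/std normalization used below is an assumption for illustration only, not the authors' exact formulation (see the linked repository for the real implementation): the idea is that a global affine brightness change between consecutive frames should not dominate the per-pixel photometric error.

```python
import numpy as np

def brightness_aware_photometric_loss(target, warped, eps=1e-6):
    # Illustrative sketch: normalize each image by its own mean and
    # standard deviation so that a global gain/offset illumination
    # change between frames cancels out, then take the mean L1
    # difference of the normalized images.
    t = (target - target.mean()) / (target.std() + eps)
    w = (warped - warped.mean()) / (warped.std() + eps)
    return float(np.abs(t - w).mean())

# A frame re-exposed with a global gain/offset yields near-zero loss,
# while a structurally different frame does not.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
print(brightness_aware_photometric_loss(frame, 1.4 * frame + 0.2))
```

In the full pipeline, `warped` would be the source frame warped into the target view using the predicted depth and 6D pose, so the loss also drives the depth and pose networks.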
Copyright © 2021 Elsevier B.V. All rights reserved.


Keywords:  Capsule endoscopy; Monocular depth estimation; SLAM dataset; Spatial attention module; Standard endoscopy; Visual odometry

Year:  2021        PMID: 33930829     DOI: 10.1016/j.media.2021.102058

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles: 8 in total

1.  Joint estimation of depth and motion from a monocular endoscopy image sequence using a multi-loss rebalancing network.

Authors:  Shiyuan Liu; Jingfan Fan; Dengpan Song; Tianyu Fu; Yucong Lin; Deqiang Xiao; Hong Song; Yongtian Wang; Jian Yang
Journal:  Biomed Opt Express       Date:  2022-04-11       Impact factor: 3.562

2.  Dense Depth Estimation from Stereo Endoscopy Videos Using Unsupervised Optical Flow Methods.

Authors:  Zixin Yang; Richard Simon; Yangming Li; Cristian A Linte
Journal:  Med Image Underst Anal (2021)       Date:  2021-07-06

Review 3.  Endoscopic Imaging Technology Today.

Authors:  Axel Boese; Cora Wex; Roland Croner; Uwe Bernd Liehr; Johann Jakob Wendler; Jochen Weigt; Thorsten Walles; Ulrich Vorwerk; Christoph Hubertus Lohmann; Michael Friebe; Alfredo Illanes
Journal:  Diagnostics (Basel)       Date:  2022-05-18

4.  Synthetic data in machine learning for medicine and healthcare.

Authors:  Richard J Chen; Ming Y Lu; Tiffany Y Chen; Drew F K Williamson; Faisal Mahmood
Journal:  Nat Biomed Eng       Date:  2021-06       Impact factor: 29.234

5.  A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery.

Authors:  Erica Padovan; Giorgia Marullo; Leonardo Tanzi; Pietro Piazzolla; Sandro Moos; Francesco Porpiglia; Enrico Vezzetti
Journal:  Int J Med Robot       Date:  2022-03-13       Impact factor: 2.483

Review 6.  Surgical data science - from concepts toward clinical translation.

Authors:  Lena Maier-Hein; Matthias Eisenmann; Duygu Sarikaya; Keno März; Toby Collins; Anand Malpani; Johannes Fallert; Hubertus Feussner; Stamatia Giannarou; Pietro Mascagni; Hirenkumar Nakawala; Adrian Park; Carla Pugh; Danail Stoyanov; Swaroop S Vedula; Kevin Cleary; Gabor Fichtinger; Germain Forestier; Bernard Gibaud; Teodor Grantcharov; Makoto Hashizume; Doreen Heckmann-Nötzel; Hannes G Kenngott; Ron Kikinis; Lars Mündermann; Nassir Navab; Sinan Onogur; Tobias Roß; Raphael Sznitman; Russell H Taylor; Minu D Tizabi; Martin Wagner; Gregory D Hager; Thomas Neumuth; Nicolas Padoy; Justin Collins; Ines Gockel; Jan Goedeke; Daniel A Hashimoto; Luc Joyeux; Kyle Lam; Daniel R Leff; Amin Madani; Hani J Marcus; Ozanan Meireles; Alexander Seitel; Dogu Teber; Frank Ückert; Beat P Müller-Stich; Pierre Jannin; Stefanie Speidel
Journal:  Med Image Anal       Date:  2021-11-18       Impact factor: 13.828

7.  Gastrointestinal Tract Disease Classification from Wireless Endoscopy Images Using Pretrained Deep Learning Model.

Authors:  J Yogapriya; Venkatesan Chandran; M G Sumithra; P Anitha; P Jenopaul; C Suresh Gnana Dhas
Journal:  Comput Math Methods Med       Date:  2021-09-11       Impact factor: 2.238

Review 8.  Artificial Intelligence in Colon Capsule Endoscopy-A Systematic Review.

Authors:  Sarah Moen; Fanny E R Vuik; Ernst J Kuipers; Manon C W Spaander
Journal:  Diagnostics (Basel)       Date:  2022-08-17
