Masoud S Nosrati [1], Alborz Amir-Khalili [2], Jean-Marc Peyrat [3], Julien Abinahed [3], Osama Al-Alao [4], Abdulla Al-Ansari [4], Rafeef Abugharbieh [2], Ghassan Hamarneh [5]. 1. Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada. smasoudn@gmail.com. 2. BiSICL, University of British Columbia, Vancouver, BC, Canada. 3. Qatar Robotic Surgery Centre, Qatar Science and Technology Park, Doha, Qatar. 4. Urology Department, Hamad General Hospital, Hamad Medical Corporation, Doha, Qatar. 5. Medical Image Analysis Lab, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada.
Abstract
PURPOSE: Despite great advances in medical image segmentation, the accurate and automatic segmentation of endoscopic scenes remains a challenging problem. Two important aspects have to be considered in segmenting an endoscopic scene: (1) noise and clutter due to light reflection and smoke from cutting tissue, and (2) structure occlusion (e.g. vessels occluded by fat, or endophytic tumours occluded by healthy kidney tissue). METHODS: In this paper, we propose a variational technique to augment a surgeon's endoscopic view by segmenting visible as well as occluded structures in the intraoperative endoscopic view. Our method estimates the 3D pose and deformation of anatomical structures segmented from 3D preoperative data in order to align to and segment the corresponding structures in 2D intraoperative endoscopic views. Our preoperative-to-intraoperative alignment is driven by, first, spatio-temporal, signal-processing-based vessel pulsation cues and, second, machine-learning-based analysis of colour and textural visual cues. To our knowledge, this is the first work that utilizes vascular pulsation cues to guide preoperative-to-intraoperative registration. In addition, we incorporate a tissue-specific (i.e. heterogeneous), physically based deformation model into our framework to cope with the non-rigid deformation of structures that occurs during the intervention. RESULTS: We validated the utility of our technique on fifteen challenging clinical cases, achieving a 45 % improvement in accuracy over the state-of-the-art method. CONCLUSIONS: A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, as well as vascular pulsation and endoscopic visual cues in order to accurately segment the highly noisy and cluttered environment of an endoscopic video.
Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
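The abstract describes alignment driven in part by spatio-temporal, signal-processing-based vessel pulsation cues. The paper does not spell out the implementation here, but a minimal sketch of the general idea — scoring each pixel by its spectral energy in the cardiac frequency band so that pulsating vessel regions stand out — could look as follows. The function name, band limits, and synthetic data are illustrative assumptions, not the authors' code.

```python
import numpy as np

def pulsation_energy(frames, fps, f_lo=0.8, f_hi=2.0):
    """Per-pixel spectral energy in an assumed cardiac band (f_lo..f_hi Hz).

    frames: array of shape (T, H, W) holding pixel intensities over time.
    Returns an (H, W) map; periodically pulsating regions score high.
    """
    T = frames.shape[0]
    # Remove the temporal mean so static brightness does not dominate.
    detrended = frames - frames.mean(axis=0, keepdims=True)
    spectrum = np.fft.rfft(detrended, axis=0)
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Sum squared magnitudes over the frequency bins inside the band.
    return (np.abs(spectrum[band]) ** 2).sum(axis=0)

# Synthetic demo: one "vessel" pixel pulsating at ~1.2 Hz among static pixels.
fps, T = 30, 300
t = np.arange(T) / fps
frames = np.full((T, 4, 4), 100.0)
frames[:, 1, 2] += 5.0 * np.sin(2 * np.pi * 1.2 * t)
energy = pulsation_energy(frames, fps)
# The pulsating pixel (row 1, col 2) dominates the energy map.
assert energy.argmax() == np.ravel_multi_index((1, 2), (4, 4))
```

In practice such a cue map would be only one term in the variational alignment energy, combined with the colour and texture cues mentioned above.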