Daniel Reichard (1), Sebastian Bodenstedt (1), Stefan Suwelack (1), Benjamin Mayer (2), Anas Preukschas (2), Martin Wagner (2), Hannes Kenngott (2), Beat Müller-Stich (2), Rüdiger Dillmann (1), Stefanie Speidel (1).
Abstract
The goal of computer-assisted surgery is to provide the surgeon with guidance during an intervention, e.g., using augmented reality. To display preoperative data correctly, soft tissue deformations that occur during surgery have to be taken into consideration. Laparoscopic sensors, such as stereo endoscopes, can be used to create a three-dimensional reconstruction of stereo frames for registration. Due to the small field of view and the homogeneous structure of tissue, reconstructing just one frame will, in general, not provide enough detail to register the preoperative data, since every frame covers only a part of an organ surface. A correct assignment to the preoperative model is possible only if the patch geometry can be unambiguously matched to a part of the preoperative surface. We propose and evaluate a system that combines multiple smaller reconstructions from different viewpoints to segment and reconstruct a large model of an organ. Using graphics processing unit (GPU)-based methods, we achieved a rate of four frames per second. We evaluated the system with in silico, phantom, ex vivo, and in vivo (porcine) data, using different methods for estimating the camera pose (optical tracking, iterative closest point, and a combination of both). The results indicate that the proposed method is promising for on-the-fly organ reconstruction and registration.
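One of the pose-estimation methods named in the abstract is the iterative closest point (ICP) algorithm, which rigidly aligns a reconstructed surface patch to a reference model. The paper's actual pipeline is GPU-based and more elaborate; the following is only a minimal, self-contained sketch of generic point-to-point ICP (brute-force nearest neighbours plus a Kabsch least-squares step), with all function names chosen for illustration:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch step: least-squares rotation R and translation t mapping src[i] onto dst[i]."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Align point cloud src to dst; returns accumulated (R, t) and the moved cloud.

    Nearest neighbours are found by brute force, which is fine for small clouds;
    a real-time system would use a k-d tree or GPU search instead."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # Match every current point to its closest model point.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]
        # Solve for the best rigid motion given these correspondences and apply it.
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur
```

As the abstract notes, ICP alone can fail on small, homogeneous patches, which is why the paper also evaluates optical tracking and a combination of both for initializing the camera pose.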
Keywords: endoscopic image processing; quantitative endoscopy; simultaneous localization and mapping; stitching; surgical vision; visualization
Year: 2015 PMID: 26693166 PMCID: PMC4675173 DOI: 10.1117/1.JMI.2.4.045001
Source DB: PubMed Journal: J Med Imaging (Bellingham) ISSN: 2329-4302