Shadi Albarqouni¹, Ulrich Konrad², Lichao Wang², Nassir Navab²·³, Stefanie Demirci².
1, 2. Chair for Computer Aided Medical Procedures (CAMP), Technische Universität München (TUM), 85748 Munich, Germany. shadi.albarqouni@tum.de.
3. Whiting School of Engineering, Johns Hopkins University (JHU), Baltimore, MD 21218, USA.
Abstract
PURPOSE: X-ray imaging is widely used for guiding minimally invasive surgeries. Despite ongoing efforts, in particular toward advanced visualization incorporating mixed-reality concepts, correct depth perception from X-ray imaging is still hampered by its projective nature. METHODS: In this paper, we introduce a new concept for predicting depth information from single-view X-ray images. Patient-specific training data pairing depth with the corresponding X-ray attenuation information are constructed from readily available preoperative 3D image data. The corresponding depth model is learned using a novel label-consistent dictionary learning method that incorporates atlas and spatial prior constraints to allow for efficient reconstruction. RESULTS: We validated our algorithm on patient data acquired with different anatomical foci (abdomen and thorax). For each of the 6 experimental instances, 80 of 100 image pairs were used for training and 20 for testing. Depth estimation results were compared to ground-truth depth values. CONCLUSION: We achieved around [Formula: see text] and [Formula: see text] mean squared error on the abdomen and thorax datasets, respectively, and the visual results of our proposed method are very promising. We have therefore presented a new concept for enhancing depth perception in image-guided interventions.
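The abstract's core idea, learning a dictionary over paired attenuation/depth patches so that a sparse code computed from an X-ray patch alone can reconstruct its depth, can be illustrated with a generic coupled dictionary learning sketch. This is not the authors' label-consistent formulation with atlas and spatial priors; it uses scikit-learn's `DictionaryLearning` and `sparse_encode` on synthetic data, and all variable names and the linear attenuation-to-depth relation are hypothetical.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Hypothetical paired training patches: X-ray attenuation (X) and depth (Y).
# In the paper these come from preoperative 3D image data (DRR-style pairs);
# here we fabricate a simple linear relation purely for illustration.
n_patches, dim = 200, 64
X = rng.standard_normal((n_patches, dim))
Y = 0.1 * (X @ rng.standard_normal((dim, dim)))

# Coupled dictionary learning: stack [attenuation | depth] patches so both
# modalities share one sparse code, then split the learned atoms.
Z = np.hstack([X, Y])
dl = DictionaryLearning(n_components=32, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, max_iter=10, random_state=0)
dl.fit(Z)
D = dl.components_              # shape (32, 128)
D_x, D_y = D[:, :dim], D[:, dim:]  # attenuation / depth sub-dictionaries

# Test time: sparse-code a new attenuation patch against D_x only,
# then reconstruct its depth patch with D_y.
x_test = rng.standard_normal((1, dim))
codes = sparse_encode(x_test, D_x, algorithm="lasso_lars", alpha=0.1)
depth_pred = codes @ D_y        # estimated depth patch, shape (1, 64)
```

The paper additionally constrains the codes to be label-consistent and regularizes them with atlas and spatial priors, which this plain sparse-coding sketch omits.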