Mathias Unberath1,2,3 (unberath@jhu.edu), Jan-Nico Zaech2,3, Cong Gao1,2, Bastian Bier2,3, Florian Goldmann2,3, Sing Chun Lee1,2,3, Javad Fotouhi1,2,3, Russell Taylor1,2, Mehran Armand2,4, Nassir Navab1,2,3. 1. Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA. 2. Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA. 3. Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA. 4. Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA.
Abstract
PURPOSE: Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotation would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an attractive alternative to using true clinical radiographs since labels are comparatively easy to obtain and potentially readily available. METHODS: We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data from cadaveric specimens acquired using a clinical C-arm X-ray system. RESULTS: Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]).
CONCLUSION: Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and simplify surgical workflows.
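To make the "naïve DRR" baseline concrete: a conventional digitally reconstructed radiograph simply integrates attenuation values along rays from the X-ray source through the CT volume and applies the Beer-Lambert law, with no material decomposition, scatter, or noise modeling. The sketch below is a minimal, illustrative ray-casting implementation in NumPy; it is not DeepDRR's actual code, and all function and parameter names (`naive_drr`, `detector_origin`, `u_axis`, `v_axis`, `n_samples`) are hypothetical. A real implementation would use GPU-accelerated trilinear sampling (e.g., via PyCuda, as DeepDRR does) rather than this slow per-pixel loop.

```python
import numpy as np

def naive_drr(volume, spacing, source, detector_origin, u_axis, v_axis,
              det_shape=(128, 128), n_samples=256):
    """Monochromatic line-integral DRR (illustrative sketch).

    For each detector pixel, sample attenuation coefficients (1/mm) along
    the ray from the X-ray source to that pixel (nearest-neighbor sampling
    for brevity) and apply the Beer-Lambert law. All coordinates are in mm.
    """
    h, w = det_shape
    image = np.zeros(det_shape, dtype=np.float32)
    for i in range(h):
        for j in range(w):
            # Detector pixel position in world coordinates.
            pixel = detector_origin + i * v_axis + j * u_axis
            direction = pixel - source
            step = np.linalg.norm(direction) / n_samples  # mm per sample
            line_integral = 0.0
            for t in np.linspace(0.0, 1.0, n_samples):
                # World position -> voxel index (nearest neighbor).
                idx = np.round((source + t * direction) / spacing).astype(int)
                if np.all(idx >= 0) and np.all(idx < volume.shape):
                    line_integral += volume[tuple(idx)] * step
            # Beer-Lambert: transmitted intensity fraction in (0, 1].
            image[i, j] = np.exp(-line_integral)
    return image
```

Training on images from such a monochromatic, noise- and scatter-free model is what the abstract calls "naïve DRRs"; DeepDRR augments this forward model with learned material decomposition, learned scatter estimation, and noise injection to narrow the gap to real fluoroscopy.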
Keywords:
Artificial intelligence; Computer assisted surgery; Image guidance; Monte Carlo simulation; Robotic surgery; Segmentation