Yang Zhang1,2, Ning Yue1, Min-Ying Su2, Bo Liu1, Yi Ding3, Yongkang Zhou4, Hao Wang5, Yu Kuang6, Ke Nie1. 1. Department of Radiation Oncology, Rutgers Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, USA. 2. Department of Radiological Sciences, University of California, Irvine, CA, USA. 3. Department of Radiation Oncology, Hubei Cancer Hospital, Wuhan, China. 4. Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China. 5. Department of Radiation Oncology, Zhongshan Hospital, Shanghai, China. 6. Department of Integrated Health Sciences, University of Nevada, Las Vegas, NV, USA.
Abstract
PURPOSE: To improve the image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology based on a generative adversarial network (GAN). METHODS: One hundred fifty paired pelvic CT and CBCT scans were used for model training and validation. A 2.5D pixel-to-pixel (Pix2pix) GAN model with feature matching was proposed. A total of 12,000 CT-CBCT slice pairs were used for model training, and ten-fold cross validation was applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with CBCT images collected on a different machine, were used for independent testing. Besides the proposed method, other network architectures were also tested: 2D vs 2.5D; GAN with vs without feature matching; GAN with vs without an additional perceptual loss; and previously reported models such as U-Net and cycleGAN with or without identity loss. The image quality of the deep-learning-generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) using the mean absolute error (MAE) in Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). Dosimetric calculation accuracy was further evaluated with both photon and proton beams. RESULTS: The deep-learning-generated sCTs showed improved image quality, with reduced artifact distortion and improved soft tissue contrast. The proposed 2.5D Pix2pix GAN with feature matching (FM) performed best among all tested methods, producing the highest PSNR and the lowest MAE relative to the rCT. The dose distribution demonstrated high accuracy for photon-based planning, though more work is needed for proton-based treatment.
Once trained, the model took 11-12 ms to process one slice and could generate a full 3D sCT volume (80 slices) in less than a second on an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture). CONCLUSION: The proposed deep learning algorithm efficiently improves CBCT image quality and thus has the potential to support online CBCT-based adaptive radiotherapy.
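The abstract does not spell out the feature-matching (FM) objective; the following is a minimal sketch of the standard feature-matching term commonly used with Pix2pix-style GANs (an L1 distance between discriminator activations for real and generated images, averaged over layers), shown here on plain Python lists purely for illustration. The function name and the flattened-feature representation are assumptions, not the authors' implementation.

```python
def feature_matching_loss(real_feats, fake_feats):
    """Feature-matching loss: mean L1 distance between discriminator
    feature activations for real and generated images, averaged across
    layers. Each element of real_feats/fake_feats is one layer's
    activations, flattened to a list of floats."""
    per_layer = []
    for rf, ff in zip(real_feats, fake_feats):
        # Mean absolute difference within one discriminator layer.
        per_layer.append(sum(abs(r - f) for r, f in zip(rf, ff)) / len(rf))
    # Average the per-layer distances.
    return sum(per_layer) / len(per_layer)
```

In a GAN training loop this term is added to the adversarial generator loss, encouraging the generator to match the statistics of real CT slices at multiple discriminator depths rather than only fooling the final output.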
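The two image-quality metrics reported in the abstract, MAE in Hounsfield units and PSNR, have standard definitions; a minimal sketch over flattened voxel lists (function names are illustrative, not from the paper):

```python
import math

def mae_hu(sct, rct):
    """Mean absolute error in Hounsfield units over paired sCT/rCT voxels."""
    return sum(abs(s - r) for s, r in zip(sct, rct)) / len(sct)

def psnr_db(sct, rct, data_range):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE),
    where data_range is the HU dynamic range of the reference image."""
    mse = sum((s - r) ** 2 for s, r in zip(sct, rct)) / len(sct)
    return 10.0 * math.log10(data_range ** 2 / mse)
```

A lower MAE and a higher PSNR against the rCT both indicate a closer match, which is how the 2.5D Pix2pix GAN with FM was ranked best among the tested models.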