Jiwei Liu1, Hui Yan2, Hanlin Cheng1, Jianfei Liu3, Pengjian Sun1, Boyi Wang1, Ronghu Mao4, Chi Du5, Shengquan Luo1. 1. School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, China. 2. Department of Radiation Oncology, National Clinical Research Center for Cancer, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China. 3. School of Electrical Engineering and Automation, Anhui University, Hefei, China. 4. Department of Radiation Oncology, The Affiliated Cancer Hospital of Zhengzhou University, Henan Cancer Hospital, Zhengzhou, China. 5. Cancer Center, The Second People's Hospital of Neijiang, Neijiang, China.
Abstract
BACKGROUND: Cone-beam computed tomography (CBCT) plays a key role in image-guided radiotherapy (IGRT); however, its poor image quality limits its clinical application. In this study, we developed a deep-learning-based approach to translate CBCT images into synthetic CT (sCT) images that preserve both CT image quality and CBCT anatomical structures. METHODS: A novel synthetic CT generative adversarial network (sCTGAN) was proposed for CBCT-to-CT translation via disentangled representation, which was employed to extract the anatomical information shared by the CBCT and CT image domains. On-board CBCT and planning CT images of 40 patients were used for network training, and those of another 12 patients were used for testing. The accuracy of our network was quantitatively evaluated using a series of statistical metrics, including the peak signal-to-noise ratio (PSNR), mean structural similarity index (SSIM), mean absolute error (MAE), and root-mean-square error (RMSE). The effectiveness of our network was compared against three state-of-the-art CycleGAN-based methods. RESULTS: The PSNR, SSIM, MAE, and RMSE between the sCT generated by sCTGAN and the deformed planning CT (dpCT) were 34.12 dB, 0.86, 32.70 HU, and 60.53 HU, respectively, while the corresponding values between the original CBCT and dpCT were 28.67 dB, 0.64, 70.56 HU, and 112.13 HU. The RMSE of the sCT generated by sCTGAN (60.53±14.38 HU) was lower than that of the sCT generated by all three comparison methods (72.40±16.03 HU for CycleGAN, 71.60±15.09 HU for CycleGAN-Unet512, and 64.93±14.33 HU for CycleGAN-AG). CONCLUSIONS: The sCT generated by our sCTGAN network was closer to the ground truth (dpCT) than that of the three comparison CycleGAN-based methods. It provides an effective way to generate high-quality sCT images, which have wide application in IGRT and adaptive radiotherapy. © 2021 Quantitative Imaging in Medicine and Surgery. All rights reserved.
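The evaluation metrics reported above (PSNR, MAE, RMSE) are standard image-similarity measures between the generated sCT and the dpCT reference. As a minimal sketch of how they are typically computed (the `data_range` value below is an illustrative assumption, not a parameter taken from the paper; SSIM is usually obtained from a library implementation such as `skimage.metrics.structural_similarity` rather than reimplemented):

```python
import numpy as np

def mae(sct, dpct):
    # Mean absolute error in HU between sCT and the dpCT reference.
    return float(np.mean(np.abs(sct - dpct)))

def rmse(sct, dpct):
    # Root-mean-square error in HU.
    return float(np.sqrt(np.mean((sct - dpct) ** 2)))

def psnr(sct, dpct, data_range=4095.0):
    # Peak signal-to-noise ratio in dB; data_range is the assumed
    # dynamic range of the CT intensities (illustrative choice here).
    mse = np.mean((sct - dpct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy example on random arrays standing in for image slices
# (not real CT data).
rng = np.random.default_rng(0)
dpct = rng.uniform(-1000.0, 3000.0, size=(256, 256))
sct = dpct + rng.normal(0.0, 30.0, size=(256, 256))
print(mae(sct, dpct), rmse(sct, dpct), psnr(sct, dpct))
```

Lower MAE/RMSE and higher PSNR indicate that the sCT intensities are closer to the dpCT ground truth, which is how the comparison against the CycleGAN baselines is made.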