Navdeep Dahiya1, Sadegh R Alam2, Pengpeng Zhang2, Si-Yuan Zhang3, Tianfang Li2, Anthony Yezzi1, Saad Nadeem2. 1. Department of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA. 2. Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA. 3. Department of Radiation Oncology, Peking University Cancer Hospital, Beijing, China.
Abstract
PURPOSE: In current clinical practice, noisy and artifact-ridden weekly cone-beam computed tomography (CBCT) images are used only for patient setup during radiotherapy. Treatment planning is performed once, at the beginning of treatment, using high-quality planning CT (pCT) images and manual contours for organ-at-risk (OAR) structures. If the quality of the weekly CBCT images can be improved while OAR structures are simultaneously segmented, this can provide critical information for adapting radiotherapy mid-treatment as well as for deriving biomarkers of treatment response. METHODS: Using a novel physics-based data augmentation strategy, we synthesize a large dataset of perfectly/inherently registered pCT and synthetic-CBCT pairs for a locally advanced lung cancer patient cohort, which are then used in a multitask three-dimensional (3D) deep learning framework to simultaneously segment and translate real weekly CBCT images to high-quality pCT-like images. RESULTS: We compared the synthetic CT images and OAR segmentations generated by the model to real pCT images and manual OAR segmentations and observed promising results. The real week-1 (baseline) CBCT images, which had an average mean absolute error (MAE) of 162.77 HU relative to the pCT images, are translated to synthetic CT images with a drastically improved average MAE of 29.31 HU and an average structural similarity of 92% with the pCT images. The average Dice scores of the 3D OAR segmentations are: lungs 0.96, heart 0.88, spinal cord 0.83, and esophagus 0.66. CONCLUSIONS: We demonstrate an approach to translate artifact-ridden CBCT images into high-quality synthetic CT images while simultaneously generating good-quality segmentation masks for different OARs. This approach could allow clinicians to adjust treatment plans using only the routine low-quality CBCT images, potentially improving patient outcomes.
Our code, data, and pre-trained models will be made available via our physics-based data augmentation library, Physics-ArX, at https://github.com/nadeemlab/Physics-ArX.
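The abstract reports image-translation and segmentation quality with three standard metrics: mean absolute error in Hounsfield units (HU), structural similarity, and the Dice coefficient. As a point of reference, the two simpler metrics can be sketched in a few lines of NumPy; the function names below are illustrative and are not taken from the authors' released code.

```python
import numpy as np

def mean_absolute_error_hu(synthetic_ct, planning_ct):
    """Mean absolute voxel-wise error (in HU) between two aligned CT volumes."""
    diff = synthetic_ct.astype(np.float64) - planning_ct.astype(np.float64)
    return float(np.mean(np.abs(diff)))

def dice_score(pred_mask, gt_mask):
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND gt| / (|pred| + |gt|), ranging from 0 (no overlap)
    to 1 (perfect overlap).
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Both masks empty: define as perfect agreement.
    return 1.0 if denom == 0 else 2.0 * float(intersection) / float(denom)

# Tiny example on synthetic data:
ct_a = np.array([[0.0, 100.0], [-500.0, 40.0]])
ct_b = np.array([[10.0, 90.0], [-510.0, 50.0]])
print(mean_absolute_error_hu(ct_a, ct_b))  # 10.0

mask_pred = np.array([1, 1, 0, 0])
mask_gt = np.array([0, 1, 1, 0])
print(dice_score(mask_pred, mask_gt))  # 0.5
```

Structural similarity (SSIM) is more involved (local means, variances, and covariances over sliding windows) and is typically computed with an existing implementation such as `skimage.metrics.structural_similarity`.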