PURPOSE: Accurate tumor segmentation is a requirement for magnetic resonance (MR)-based radiotherapy. The lack of large expert-annotated MR datasets makes training deep learning models difficult. Therefore, a cross-modality (MR-CT) deep learning segmentation approach was developed that augments training data with pseudo MR images produced by transforming expert-segmented CT images. METHODS: Eighty-one T2-weighted (T2w) MRI scans from 28 patients with non-small cell lung cancers (nine with pretreatment and weekly MRI and the remainder with pretreatment MRI scans only) were analyzed. A cross-modality model encoding the transformation of CT into pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning network. This model was used to translate 377 expert-segmented non-small cell lung cancer CT scans from the Cancer Imaging Archive into pseudo MR images that served as an additional training set. The method was benchmarked against shallow learning using random forest, standard data augmentation, and three state-of-the-art adversarial learning-based cross-modality data (pseudo MR) augmentation methods. Segmentation accuracy was computed using the Dice similarity coefficient (DSC), Hausdorff distance, and volume ratio. RESULTS: The proposed approach produced the lowest statistical variability between the intensity distributions of pseudo and T2w MR images, measured as a Kullback-Leibler divergence of 0.069. Using a U-Net structure, it achieved the highest segmentation accuracy on the test dataset, with a DSC of 0.75 ± 0.12, and the lowest Hausdorff distance, 9.36 ± 6.00 mm. The approach also produced tumor growth estimates highly similar to those of an expert (P = 0.37). CONCLUSIONS: A novel deep learning MR segmentation method was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross-modality information, using a model that explicitly incorporates knowledge of tumors into modality translation to augment segmentation training. The results show the feasibility of the approach and its improvement over state-of-the-art methods.
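For concreteness, the evaluation metrics named above can be computed as in the following minimal sketch, assuming binary tumor masks and boundary point sets stored as NumPy arrays; the function names and array layouts are illustrative, not taken from the paper.

```python
# Illustrative implementations (not the authors' code) of the reported
# segmentation metrics: Dice similarity coefficient (DSC), symmetric
# Hausdorff distance, and volume ratio.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC between two binary masks (1.0 = perfect overlap)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def hausdorff(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 3) boundary point sets.
    Scale coordinates by the voxel spacing beforehand to obtain mm."""
    return max(directed_hausdorff(pred_pts, gt_pts)[0],
               directed_hausdorff(gt_pts, pred_pts)[0])

def volume_ratio(pred: np.ndarray, gt: np.ndarray) -> float:
    """Segmented volume divided by expert volume (1.0 = identical volumes)."""
    return pred.astype(bool).sum() / gt.astype(bool).sum()
```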
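The Kullback-Leibler divergence used to compare pseudo and real T2w MR intensity distributions can likewise be estimated from normalized intensity histograms; the bin count and smoothing constant below are assumptions for illustration, not values reported by the authors.

```python
# Illustrative KL divergence between the intensity histograms of a pseudo MR
# image and a real T2w MR image (lower values = more similar distributions).
import numpy as np

def kl_divergence(pseudo_mr: np.ndarray, real_mr: np.ndarray,
                  bins: int = 256) -> float:
    lo = min(pseudo_mr.min(), real_mr.min())
    hi = max(pseudo_mr.max(), real_mr.max())
    p, _ = np.histogram(pseudo_mr, bins=bins, range=(lo, hi))
    q, _ = np.histogram(real_mr, bins=bins, range=(lo, hi))
    eps = 1e-12                      # assumed smoothing to avoid log(0)
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```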
<span class="abstract_title">PURPOSE:pan> Accurate <span class="Disease">tumor segmentation is a requirement for magnetic resonance (MR)-based radiotherapy. Lack of large expert annotated MR datasets makes training deep learning models difficult. Therefore, a cross-modality (MR-CT) deep learning segmentation approach that augments training data using pseudo MR images produced by transforming expert-segmented CT images was developed. METHODS: Eighty-one T2-weighted MRI scans from 28 patients with non-small cell lung cancers (nine with pretreatment and weekly MRI and the remainder with pre-treatment MRI scans) were analyzed. Cross-modality model encoding the transformation of CT to pseudo MR images resembling T2w MRI was learned as a generative adversarial deep learning network. This model was used to translate 377 expert segmented non-small cell lung cancerCT scans from the Cancer Imaging Archive into pseudo MRI that served as additional training set. This method was benchmarked against shallow learning using random forest, standard data augmentation, and three state-of-the art adversarial learning-based cross-modality data (pseudo MR) augmentation methods. Segmentation accuracy was computed using Dice similarity coefficient (DSC), Hausdorff distance metrics, and volume ratio. RESULTS: The proposed approach produced the lowest statistical variability in the intensity distribution between pseudo and T2w MR images measured as Kullback-Leibler divergence of 0.069. This method produced the highest segmentation accuracy with a DSC of (0.75 ± 0.12) and the lowest Hausdorff distance of (9.36 mm ± 6.00 mm) on the test dataset using a U-Net structure. This approach produced highly similar estimations of tumor growth as an expert (P = 0.37). CONCLUSIONS: A novel deep learning MR segmentation was developed that overcomes the limitation of learning robust models from small datasets by leveraging learned cross-modality information using a model that explicitly incorporates knowledge of tumors in modality translation to augment segmentation training. The results show the feasibility of the approach and the corresponding improvement over the state-of-the-art methods.