Bulat Ibragimov and Lei Xing, Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California 94305, USA.
Abstract
PURPOSE: Accurate segmentation of organs at risk (OARs) is a key step in efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we propose the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compare its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. METHODS: Convolutional neural networks (CNNs), a concept from the field of deep learning, were used to learn consistent intensity patterns of OARs from training CT images and to segment the OARs in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels belonging to the OAR of interest in the training CT images, and negative intensity patches around voxels belonging to the surrounding structures. These patches were passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest of the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results using a Markov random field algorithm. Finally, we extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities from the component, resulting in the segmentation of the OAR in the test image. RESULTS: The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from a Dice coefficient (DSC) of 37.4% for the optic chiasm to 89.5% for the mandible.
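The Dice coefficient (DSC) reported above measures volumetric overlap between an automated segmentation and a reference mask. A minimal sketch of how it is typically computed (the function name and array inputs here are illustrative, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks,
    2*|A intersect B| / (|A| + |B|), expressed in percent."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        # Both masks empty: treat as perfect agreement by convention.
        return 100.0
    return 200.0 * np.logical_and(seg, ref).sum() / denom
```

For example, two masks that share half of their foreground voxels yield a DSC of 50%, and identical masks yield 100%.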
We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. CONCLUSION: We conclude that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, incorporating additional information, for example MR images, may be beneficial for OARs with poorly visible boundaries.
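The post-processing described in the methods (keeping the largest connected component of the CNN-classified voxels, then applying dilate-erode operations to fill cavities) can be sketched as follows. This is a minimal illustration using SciPy's morphology tools; the function name, the use of binary closing as the dilate-erode step, and the iteration count are assumptions, not the authors' exact implementation:

```python
import numpy as np
from scipy import ndimage

def postprocess_mask(cnn_mask, closing_iterations=2):
    """Keep the largest connected component of a binary voxel mask
    produced by a classifier, then close it to remove internal cavities."""
    cnn_mask = np.asarray(cnn_mask, dtype=bool)
    labeled, num_components = ndimage.label(cnn_mask)
    if num_components == 0:
        return np.zeros_like(cnn_mask)
    # Component sizes, indexed by label 1..num_components.
    sizes = ndimage.sum(cnn_mask, labeled, range(1, num_components + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    # Dilate-erode (binary closing) to fill small cavities.
    structure = ndimage.generate_binary_structure(cnn_mask.ndim, 1)
    return ndimage.binary_closing(largest, structure=structure,
                                  iterations=closing_iterations)
```

Discarding all but the largest component suppresses isolated false-positive voxel clusters far from the organ, while the closing step smooths the retained component without changing its overall extent.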