Tucker J Netherton1,2, Dong Joo Rhee1,2, Carlos E Cardenas1, Caroline Chung3, Ann H Klopp3, Christine B Peterson4, Rebecca M Howell1, Peter A Balter1, Laurence E Court1. 1. Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA. 2. The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX, 77030, USA. 3. Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA. 4. Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX, 77030, USA.
Abstract
PURPOSE: The purpose of this work was to evaluate the performance of X-Net, a multiview deep learning architecture, in automatically labeling vertebral levels (S2-C1) in palliative radiotherapy simulation CT scans. METHODS: For each patient CT scan, our automated approach 1) segmented the spinal canal using a convolutional neural network (CNN), 2) formed sagittal and coronal intensity projection pairs, 3) labeled vertebral levels with X-Net, and 4) detected irregular intervertebral spacing using an analytic method. The spinal canal CNN was trained via fivefold cross-validation using 1,966 simulation CT scans and evaluated on 330 CT scans. After vertebral levels (S2-C1) were labeled in 897 palliative radiotherapy simulation CT scans, a volume of interest surrounding the spinal canal in each patient's CT scan was converted into sagittal and coronal intensity projection image pairs. These image pairs were then augmented and used to train X-Net to automatically label vertebral levels using fivefold cross-validation (n = 803). Prior to testing on the final test set (n = 94), CT scans of patients with anatomical abnormalities, surgical implants, or other atypical features were placed in an outlier group (n = 20), whereas those without these features were placed in a normative group (n = 74). The performance of X-Net, the X-Net Ensemble, and another leading vertebral labeling architecture (Btrfly Net) was evaluated on both groups using identification rate, localization error, and other metrics. The performance of our approach was also evaluated on the MICCAI 2014 test dataset (n = 60). Finally, a method to detect irregular intervertebral spacing was created based on the rate of change in spacing between predicted vertebral body locations and was evaluated using the final test set. Receiver operating characteristic (ROC) analysis was used to assess the performance of this method.
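The projection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the voxel margin (`pad_vox`) and the use of maximum-intensity projection are assumptions, and the array axis convention (z, y, x) is hypothetical.

```python
import numpy as np

def intensity_projections(ct_volume, canal_mask, pad_vox=20):
    """Form sagittal and coronal maximum-intensity projections of a
    volume of interest around the segmented spinal canal.

    ct_volume: 3D array indexed (z, y, x) in Hounsfield units.
    canal_mask: binary array of the same shape (spinal canal segmentation).
    pad_vox: assumed in-plane margin (in voxels) around the canal.
    """
    # Bounding box of the canal in the axial plane, expanded by the margin
    zs, ys, xs = np.nonzero(canal_mask)
    y0, y1 = max(ys.min() - pad_vox, 0), min(ys.max() + pad_vox, ct_volume.shape[1])
    x0, x1 = max(xs.min() - pad_vox, 0), min(xs.max() + pad_vox, ct_volume.shape[2])
    voi = ct_volume[:, y0:y1, x0:x1]

    # Project across left-right for the sagittal view, and across
    # anterior-posterior for the coronal view
    sagittal = voi.max(axis=2)  # shape (z, y)
    coronal = voi.max(axis=1)   # shape (z, x)
    return sagittal, coronal
```

Restricting the projections to a canal-centered volume of interest keeps the vertebral column in frame regardless of patient positioning, which is presumably why the canal is segmented first.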
RESULTS: The spinal canal architecture yielded centroid coordinates spanning S2-C1 with submillimeter accuracy (mean ± standard deviation, 0.399 ± 0.299 mm; n = 330 patients), and spinal canal centroid localization was robust to surgical implants and widespread metastases. Cross-validation testing of X-Net for vertebral labeling revealed that deep learning model performance (F1 score, precision, and sensitivity) improved with CT scan length. The X-Net, X-Net Ensemble, and Btrfly Net mean identification rates and localization errors were 92.4% and 2.3 mm, 94.2% and 2.2 mm, and 90.5% and 3.4 mm, respectively, in the final test set and 96.7% and 2.2 mm, 96.9% and 2.0 mm, and 94.8% and 3.3 mm, respectively, within the normative group of the final test set. The X-Net Ensemble yielded the highest percentage of patients (94%) having all vertebral bodies identified correctly in the final test set when the three most inferior and superior vertebral bodies were excluded from the CT scan. The method used to detect labeling failures had 67% sensitivity and 95% specificity when combined with the X-Net Ensemble and flagged five of six patients with atypical vertebral counts (an additional thoracic vertebra (T13), an additional lumbar vertebra (L6), or only four lumbar vertebrae). The mean identification rate on the MICCAI 2014 dataset using an X-Net Ensemble increased from 86.8% to 91.3% with transfer learning, yielding state-of-the-art results for various regions of the spine. CONCLUSIONS: We trained X-Net, our unique convolutional neural network, to automatically label vertebral levels from S2 to C1 on palliative radiotherapy CT images and found that an ensemble of X-Net models had a high vertebral body identification rate (94.2%) and small localization errors (2.2 ± 1.8 mm). In addition, our transfer learning approach achieved state-of-the-art results on a well-known benchmark dataset with a high identification rate (91.3%) and low localization error (3.3 ± 2.7 mm).
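The identification rate and localization error metrics used above can be sketched as follows. This is an illustrative computation only: the 20 mm tolerance is the convention commonly used in vertebra-localization benchmarks and is an assumption here, as is the dict-based input format.

```python
import numpy as np

def id_rate_and_loc_error(pred, truth, tol_mm=20.0):
    """Compute identification rate and mean localization error.

    pred, truth: dicts mapping a vertebral level label (e.g. 'L4') to a
    3D centroid coordinate in mm. A level counts as identified when the
    predicted centroid lies within tol_mm of the ground-truth centroid
    (assumed threshold; benchmark definitions may differ in detail).
    """
    errors = []
    hits = 0
    for level, true_pt in truth.items():
        pred_pt = pred.get(level)
        if pred_pt is None:
            continue  # level not predicted at all: counts against id rate
        dist = float(np.linalg.norm(np.asarray(pred_pt) - np.asarray(true_pt)))
        errors.append(dist)
        if dist <= tol_mm:
            hits += 1
    id_rate = hits / len(truth)
    loc_error = float(np.mean(errors)) if errors else float("nan")
    return id_rate, loc_error
```

Note that localization error is averaged over all predicted levels, while identification rate penalizes both missing and badly displaced predictions.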
When we pre-screened radiotherapy CT images for hardware, surgical implants, or other anatomic abnormalities before applying X-Net, it labeled the spine correctly in more than 97% of patients; without pre-screening, it did so in 94% of patients. Automatically generated labels are robust to widespread vertebral metastases and surgical implants, and our method to detect labeling failures based on neighborhood intervertebral spacing can reliably identify patients with an additional lumbar or thoracic vertebral body.
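The spacing-based failure detection described above rests on a simple idea: along a correctly labeled spine, the distance between adjacent vertebral body centroids changes gradually, so an abrupt change in spacing suggests a mislabeled or extra level. A minimal sketch of that idea follows; the relative-change threshold is hypothetical (the paper tunes its cut-off via ROC analysis), and only the superior-inferior coordinate is considered here for simplicity.

```python
import numpy as np

def flag_irregular_spacing(centroids_mm, rel_change_tol=0.5):
    """Flag suspect gaps along a predicted vertebral column.

    centroids_mm: superior-inferior coordinates (mm) of predicted vertebral
    body centroids, ordered along the spine.
    rel_change_tol: assumed threshold on the relative rate of change in
    intervertebral spacing; in practice this would be set from ROC analysis.
    Returns the indices of gaps whose spacing changes too sharply relative
    to the preceding gap.
    """
    z = np.asarray(centroids_mm, dtype=float)
    spacing = np.diff(z)                             # gap between adjacent levels
    rate = np.abs(np.diff(spacing)) / spacing[:-1]   # relative change per gap
    return np.nonzero(rate > rel_change_tol)[0] + 1  # offset to the later gap
```

A flagged gap does not say which label is wrong, only that the local spacing pattern is inconsistent, which is enough to route the scan for human review.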