Yang Gao1, Xiong Xiao2, Bangcheng Han3, Guilin Li4, Xiaolin Ning3, Defeng Wang3, Weidong Cai5, Ron Kikinis6,7,8, Shlomo Berkovsky9, Antonio Di Ieva10, Liwei Zhang2, Nan Ji2, Sidong Liu9.
Abstract
BACKGROUND: The radiological differential diagnosis between tumor recurrence and radiation-induced necrosis (ie, pseudoprogression) is of paramount importance in the management of glioma patients.
Keywords: deep learning; multimodal MRI; progression; pseudoprogression; radiation necrosis; recurrent tumor
Year: 2020 PMID: 33200991 PMCID: PMC7708085 DOI: 10.2196/19805
Source DB: PubMed Journal: JMIR Med Inform
Figure 1. The T1, T2, and T1c magnetic resonance imaging (MRI) sequences of 4 patients with the histograms of the voxels within their lesion masks. Patients (a) and (b) represent recurrent tumors; patients (c) and (d) represent radionecrosis lesions. The lesion masks were manually drawn using ITK-SNAP, a software tool widely used for delineating regions of interest. The histograms were created for the individual sequences and smoothed using a Hann filter. ITK: Insight Toolkit; T1: T1-weighted MRI; T1c: gadolinium-contrast-enhanced T1-weighted MRI; T2: T2-weighted MRI.
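The histogram preprocessing described in the Figure 1 caption can be sketched as below. The bin count and Hann window length are illustrative assumptions, not values reported in the paper:

```python
import numpy as np

def smoothed_histogram(voxels, bins=64, win=9):
    """Histogram of lesion-voxel intensities, smoothed with a Hann window.

    A minimal sketch of the Figure 1 preprocessing; `bins` and `win`
    are illustrative choices only.
    """
    counts, edges = np.histogram(voxels, bins=bins)
    hann = np.hanning(win)
    hann /= hann.sum()  # normalize so total counts are (approximately) preserved
    smooth = np.convolve(counts, hann, mode="same")
    return smooth, edges

# Example on synthetic "lesion" intensities
rng = np.random.default_rng(0)
hist, edges = smoothed_histogram(rng.normal(100, 15, size=5000))
```

Smoothing the per-sequence histograms in this way suppresses bin-to-bin noise so that intensity-distribution differences between recurrence and necrosis (as visualized in Figure 1) stand out.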
Figure 2. The selection process for the patient cohorts in this study. MRI: magnetic resonance imaging; T1: T1-weighted MRI; T1c: gadolinium-contrast-enhanced T1-weighted MRI; T2: T2-weighted MRI.
Demographic and clinical data of the patient cohorts enrolled in this study.
| Characteristic | Training set (n=117) | Test set (n=29) | Total (N=146) |
| Sample size (N=146), n (%) | 117 (80.1) | 29 (19.9) | 146 (100) |
| Age in years, mean (SD) | 40.9 (12.4) | 42.0 (9.9) | 41.1 (11.9) |
| Sex, n (%) | | | |
| Male | 63 (53.8) | 15 (52) | 78 (53.4) |
| Female | 54 (46.2) | 14 (48) | 68 (46.6) |
| Tumor grade, n (%) | | | |
| Grade II | 33 (28.2) | 8 (28) | 41 (28.1) |
| Grade III | 26 (22.2) | 6 (21) | 32 (21.9) |
| Grade IV | 45 (38.5) | 11 (38) | 56 (38.4) |
| Unknown | 13 (11.1) | 4 (14) | 17 (11.6) |
| Diagnosis, n (%) | | | |
| Necrosis | 40 (34.2) | 10 (34) | 50 (34.2) |
| Glioma | 77 (65.8) | 19 (66) | 96 (65.8) |
Specifications of the imaging data acquired from the different magnetic resonance imaging systems.
| Imaging system | Field of view, mm | Slice thickness, mm | Slice spacing, mm | Matrix size |
| Siemens MAGNETOM Trio Tim | 220 | 5.0 | 6.5 | 496 × 512 |
| Siemens MAGNETOM Verio | 220 | 5.0 | 6.0 | 496 × 512 |
| GE Healthcare Discovery MR750 | 240 | 5.0 | 6.5 | 512 × 512 |
| GE Healthcare GENESIS SIGNA 3 T | 240 | 5.0 | 6.0 | 512 × 512 |
| GE Healthcare SIGNA 1.5 T | 240 | 5.5 | 6.5 | 512 × 512 |
Figure 3. Overview of the proposed approach. (a) The co-registered multimodal images were fused into a multichannel RGB image, with the T1, T2, and T1c images representing the Red, Green, and Blue channels, respectively. (b) The multichannel magnetic resonance (MR) images were used to train the deep neural network (DNN) models that classified the test MR images as either recurrent tumor or radiation necrosis. (c) Architecture of the proposed efficient radionecrosis neural network (ERN-Net). ReLU: rectified linear unit; T1: T1-weighted magnetic resonance imaging (MRI); T1c: gadolinium-contrast-enhanced T1-weighted MRI; T2: T2-weighted MRI.
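The channel fusion in Figure 3(a) can be sketched as below; the per-slice min-max normalization is an assumption for illustration, not a step stated in the caption:

```python
import numpy as np

def fuse_to_rgb(t1, t2, t1c):
    """Stack co-registered T1, T2, and T1c slices into one 3-channel image.

    Sketch of the Figure 3(a) fusion: each sequence is scaled to [0, 1]
    and assigned to the Red, Green, and Blue channel, respectively.
    """
    def scale(x):
        x = x.astype(np.float32)
        return (x - x.min()) / (x.max() - x.min() + 1e-8)
    return np.stack([scale(t1), scale(t2), scale(t1c)], axis=-1)

# Example with synthetic 512 x 512 slices (12-bit intensity range)
rng = np.random.default_rng(1)
rgb = fuse_to_rgb(*[rng.integers(0, 4096, (512, 512)) for _ in range(3)])
```

Packing the three sequences into the RGB channels lets standard three-channel DNN backbones (VGG, ResNet, Inception) consume the multimodal input without architectural changes.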
Performance of the deep neural network (DNN) models on individual magnetic resonance imaging (MRI) sequences: T1-weighted MRI (T1), T2-weighted MRI (T2), and gadolinium-contrast-enhanced T1-weighted MRI (T1c).
| DNN model and magnetic resonance sequence | Sensitivity (95% CI) | Specificity (95% CI) | Accuracy (95% CI) | Area under the curve (95% CI) |
| VGGa16 | | | | |
| T1 | 0.725 (0.696-0.753) | 0.606 (0.562-0.648) | 0.684 (0.660-0.708) | 0.718 (0.689-0.747) |
| T2 | 0.690 (0.660-0.719) | 0.686 (0.644-0.727) | 0.689 (0.665-0.713) | 0.767 (0.740-0.794) |
| T1c | 0.874 (0.851-0.894) | 0.540 (0.496-0.585) | 0.759 (0.736-0.781) | 0.770 (0.743-0.797) |
| VGG19 | | | | |
| T1 | 0.804 (0.778-0.829) | 0.448 (0.404-0.492) | 0.681 (0.657-0.705) | 0.692 (0.663-0.721) |
| T2 | 0.743 (0.714-0.770) | 0.554 (0.510-0.598) | 0.678 (0.653-0.702) | 0.741 (0.713-0.769) |
| T1c | 0.800 (0.773-0.825) | 0.653 (0.610-0.694) | 0.749 (0.726-0.771) | 0.795 (0.769-0.821) |
| ResNetb-50 | | | | |
| T1 | 0.782 (0.755-0.808) | 0.584 (0.540-0.627) | 0.714 (0.690-0.737) | 0.732 (0.704-0.760) |
| T2 | 0.833 (0.808-0.852) | 0.525 (0.480-0.569) | 0.727 (0.703-0.750) | 0.762 (0.735-0.789) |
| T1c | 0.825 (0.799-0.848) | 0.653 (0.610-0.694) | 0.766 (0.743-0.787) | 0.824 (0.800-0.848) |
| Inception-v3 | | | | |
| T1 | 0.724 (0.695-0.752) | 0.596 (0.552-0.639) | 0.680 (0.656-0.704) | 0.706 (0.677-0.735) |
| T2 | 0.634 (0.603-0.665) | 0.734 (0.693-0.772) | 0.668 (0.644-0.693) | 0.734 (0.706-0.762) |
| T1c | 0.769 (0.741-0.795) | 0.732 (0.691-0.770) | 0.756 (0.733-0.778) | 0.831 (0.807-0.855) |
| Inception-ResNet-v2 | | | | |
| T1 | 0.774 (0.746-0.800) | 0.590 (0.546-0.633) | 0.711 (0.687-0.734) | 0.748 (0.720-0.776) |
| T2 | 0.829 (0.804-0.852) | 0.529 (0.484-0.573) | 0.726 (0.702-0.748) | 0.804 (0.779-0.829) |
| T1c | 0.812 (0.786-0.837) | 0.722 (0.681-0.761) | 0.781 (0.759-0.802) | 0.841 (0.818-0.864) |
| ERN-Netc | | | | |
| T1 | 0.704 (0.674-0.732) | 0.519 (0.474-0.563) | 0.640 (0.615-0.665) | 0.646 (0.615-0.676) |
| T2 | 0.634 (0.603-0.665) | 0.606 (0.562-0.648) | 0.624 (0.599-0.649) | 0.675 (0.645-0.705) |
| T1c | 0.803 (0.777-0.828) | 0.643 (0.600-0.685) | 0.748 (0.725-0.770) | 0.807 (0.782-0.832) |
aVGG: Visual Geometry Group.
bResNet: residual neural network.
cERN-Net: efficient radionecrosis neural network.
Performance of different deep neural network (DNN) models on the T1a-T2b-T1cc-fused images for image-based classification.
| DNN models | Sensitivity (95% CI) | Specificity (95% CI) | Accuracy (95% CI) | Area under the curve (95% CI) |
| VGGd16 | 0.858 (0.834-0.880) | 0.826 (0.791-0.858) | 0.847 (0.828-0.865) | 0.864 (0.842-0.886) |
| VGG19 | 0.852 (0.828-0.874) | 0.704 (0.662-0.744) | 0.801 (0.780-0.821) | 0.828 (0.804-0.852) |
| ResNete-50 | 0.899 (0.879-0.918) | 0.663 (0.620-0.704) | 0.818 (0.797-0.837) | 0.866 (0.844-0.888) |
| Inception-v3 | 0.844 (0.819-0.866) | 0.716 (0.675-0.755) | 0.800 (0.778-0.820) | 0.845 (0.822-0.868) |
| Inception-ResNet-v2 | 0.925 (0.907-0.941) | 0.755 (0.716-0.792) | 0.867 (0.848-0.884) | 0.913 (0.895-0.931) |
| ERN-Netf | 0.820 (0.794-0.844) | 0.789 (0.751-0.824) | 0.809 (0.788-0.829) | 0.915 (0.895-0.932) |
aT1: T1-weighted magnetic resonance imaging (MRI).
bT2: T2-weighted MRI.
cT1c: gadolinium-contrast-enhanced T1-weighted MRI.
dVGG: Visual Geometry Group.
eResNet: residual neural network.
fERN-Net: efficient radionecrosis neural network.
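The tables above report 95% CIs for the image-based metrics without stating how they were computed; one common choice is a percentile bootstrap over test images, sketched below on synthetic labels (the label counts and error pattern are illustrative assumptions, not the study's data):

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) CI for an image-level metric.

    Resamples images with replacement and takes quantiles of the
    resampled metric values.
    """
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample image indices
        stats.append(metric(y_true[idx], y_pred[idx]))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

accuracy = lambda t, p: np.mean(t == p)

# Synthetic example: 660 tumor / 340 necrosis images, 200 misclassified
y_true = np.repeat([1, 0], [660, 340])
y_pred = y_true.copy()
y_pred[:100] ^= 1       # 100 tumor images misclassified
y_pred[660:760] ^= 1    # 100 necrosis images misclassified
lo, hi = bootstrap_ci(y_true, y_pred, accuracy)
```

The same routine applies to sensitivity, specificity, or AUC by swapping the `metric` callable.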
Performance of different deep neural network (DNN) models for subject-based classification; the T1a-T2b-T1cc-fused images were used as the input to the models.
| DNN models | Sensitivity | Specificity | Accuracy | Area under the curve |
| VGGd16 | 0.947 | 0.9 | 0.931 | 0.911 |
| VGG19 | 0.947 | 0.8 | 0.897 | 0.911 |
| ResNete-50 | 0.947 | 0.7 | 0.862 | 0.937 |
| Inception-v3 | 0.947 | 0.8 | 0.897 | 0.953 |
| Inception-ResNet-v2 | 1.000 | 0.8 | 0.931 | 0.958 |
| ERN-Netf | 0.895 | 0.9 | 0.897 | 0.958 |
| All DNNs, mean (SD) | 0.947 (0.033) | 0.817 (0.075) | 0.903 (0.026) | 0.938 (0.022) |
| All neurosurgeons, mean (SD) | 0.768 (0.109) | 0.360 (0.089) | 0.628 (0.750) | N/Ag |
| .02 | <.001 | <.001 | N/A |
aT1: T1-weighted magnetic resonance imaging (MRI).
bT2: T2-weighted MRI.
cT1c: gadolinium-contrast-enhanced T1-weighted MRI.
dVGG: Visual Geometry Group.
eResNet: residual neural network.
fERN-Net: efficient radionecrosis neural network.
gN/A: not applicable. The diagnoses made by neurosurgeons are definite (ie, yes or no), unlike those made by the DNN models (eg, 30% yes or 70% no); therefore, the area under the curve cannot be computed without a probability distribution of predictions.
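With 19 glioma and 10 necrosis subjects in the test set (per the demographic table), the subject-based scores can be back-computed from confusion counts. The sketch below reproduces the VGG16 row (0.947 / 0.9 / 0.931) from 18/19 true positives and 9/10 true negatives as a consistency check, assuming glioma (recurrent tumor) is the positive class:

```python
def subject_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from subject-level counts."""
    sens = tp / (tp + fn)               # true-positive rate over glioma subjects
    spec = tn / (tn + fp)               # true-negative rate over necrosis subjects
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# VGG16 on the 29-subject test set: 18 of 19 gliomas and 9 of 10
# necrosis cases classified correctly
sens, spec, acc = subject_metrics(tp=18, fn=1, tn=9, fp=1)
# round(sens, 3) == 0.947, spec == 0.9, round(acc, 3) == 0.931
```

With only 29 test subjects, each additional misclassified subject shifts accuracy by about 0.034, which explains the coarse spacing of the values in the table.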
Figure 4. Plots showing (a) performance of the deep neural network (DNN) models on multimodal magnetic resonance imaging in the image-based classification task and (b) performance of the DNN models and neurosurgeons in the subject-based classification task. Performance of the DNN models was evaluated using the area under the curve (AUC) of the receiver operating characteristic curves, while the five neurosurgeons' sensitivity and specificity scores are represented by the red dots. ERN-Net: efficient radionecrosis neural network; ResNet: residual neural network; VGG: Visual Geometry Group.
Figure 5. A T1c tumor image and its corresponding tumor masks, created independently by two neuroradiologists, illustrating the disagreement between annotators. MRI: magnetic resonance imaging; T1c: gadolinium-contrast-enhanced T1-weighted MRI.