Kaoutar Ben Ahmed, Lawrence O. Hall, Dmitry B. Goldgof, Robert Gatenby.
Abstract
Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNN) are applied to Magnetic Resonance Image (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limitation of labeled medical image availability is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Image Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases while the test set included 46 cases. The best known prediction accuracy of 74% for this type of problem was achieved on the unseen test set.
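The abstract's "ensembles of snapshots" refers to snapshot ensembling, where a single training run saves several models at the low points of a cyclic learning-rate schedule. A minimal sketch of the cyclic cosine-annealing schedule commonly used for this purpose (the function name, initial rate, and cycle counts below are illustrative assumptions, not values from the paper):

```python
import math

def snapshot_lr(lr0: float, epoch: int, total_epochs: int, n_cycles: int) -> float:
    """Cyclic cosine-annealed learning rate.

    The rate restarts at lr0 at the top of each cycle and decays toward 0;
    a model snapshot is typically saved at the end of every cycle.
    """
    epochs_per_cycle = total_epochs // n_cycles
    pos = epoch % epochs_per_cycle            # position within the current cycle
    return lr0 / 2 * (math.cos(math.pi * pos / epochs_per_cycle) + 1)

# With 100 epochs and 5 cycles, each cycle spans 20 epochs:
# epoch 0  -> 0.1   (cycle start, full rate)
# epoch 10 -> 0.05  (mid-cycle)
# epoch 20 -> 0.1   (restart; a snapshot is saved just before)
```

Because every snapshot sits in a different local minimum of the loss surface, the five saved models disagree enough to form a useful ensemble from one training budget, which is how the method stretches a small labeled dataset.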
Keywords: BraTS; artificial intelligence; brain tumor; deep learning; glioblastoma multiforme; machine learning; magnetic resonance images; survival prediction
Year: 2022 PMID: 35204436 PMCID: PMC8871067 DOI: 10.3390/diagnostics12020345
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Different planes of brain MRI with the T1CE sequence.
Figure 2. Different sequences of brain MRI in the axial plane.
Figure 3. Kaplan–Meier survival graph of the training set.
Figure 4. Pipeline of the proposed overall survival prediction system.
Performance results of the ensemble of five snapshots using the 3D T1CE sequence.
| Method | Accuracy | Specificity | Sensitivity | AUC |
|---|---|---|---|---|
| Snapshot 1 | 63% | 39% | 87% | 0.63 |
| Snapshot 2 | 67% | 74% | 61% | 0.67 |
| Snapshot 3 | 70% | 74% | 65% | 0.70 |
| Snapshot 4 | 61% | 74% | 48% | 0.61 |
| Snapshot 5 | 59% | 52% | 65% | 0.59 |
| Ensemble | 70% | 70% | 70% | 0.70 |
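The Ensemble row combines the five snapshot classifiers by majority vote over their long- vs. short-term survivor predictions. A minimal sketch of that voting step (array names and the toy labels are illustrative, not data from the paper):

```python
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """Combine binary predictions from several classifiers.

    predictions: shape (n_models, n_patients), entries 0 (short-term)
    or 1 (long-term survivor). Returns the per-patient majority label.
    """
    votes = predictions.sum(axis=0)
    return (votes * 2 > predictions.shape[0]).astype(int)

# Toy example: 5 snapshot models, 4 patients.
snapshots = np.array([
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 0],
])
majority_vote(snapshots)  # -> array([1, 0, 1, 0])
```

With an odd number of voters the vote is never tied, which is one practical reason to keep five snapshots rather than four or six.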
Performance results of the ensemble of five CNNs (convolutional neural networks) using the 3D T1CE sequence.
| Method | Accuracy | Specificity | Sensitivity | AUC |
|---|---|---|---|---|
| CNN 1 | 63% | 48% | 78% | 0.63 |
| CNN 2 | 54% | 48% | 61% | 0.54 |
| CNN 3 | 61% | 78% | 43% | 0.61 |
| CNN 4 | 61% | 65% | 57% | 0.61 |
| CNN 5 | 57% | 43% | 70% | 0.57 |
| Ensemble | 67% | 70% | 65% | 0.67 |
Figure 5. Confusion matrix comparison of 3D vs. 2D CNN using the T1CE sequence.
Figure 6. Multi-branch CNN architecture.
Figure 7. Performance comparison of the multi-branch CNN vs. individual sequences vs. the ensemble of the five voted ensembles.
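Figure 7's "ensemble of the five voted ensembles" suggests a two-level vote: each constituent ensemble votes internally first, and the resulting verdicts are then voted again. A hypothetical sketch of that nesting (the paper does not publish this code; the shapes, names, and the choice of constituent ensembles below are assumptions):

```python
import numpy as np

def vote(preds: np.ndarray) -> np.ndarray:
    """Majority vote along axis 0 for binary labels."""
    return (preds.sum(axis=0) * 2 > preds.shape[0]).astype(int)

def two_level_vote(ensembles: list) -> np.ndarray:
    """First vote within each constituent ensemble's snapshots,
    then vote across the resulting per-ensemble predictions."""
    ensemble_preds = np.stack([vote(e) for e in ensembles])
    return vote(ensemble_preds)

# Toy example: 3 constituent ensembles (e.g. one per MRI sequence),
# 5 snapshots each, 2 patients.
a = np.array([[1, 0], [1, 0], [0, 1], [1, 0], [1, 1]])
b = np.array([[0, 1], [0, 1], [0, 0], [1, 1], [0, 1]])
c = np.array([[1, 1], [1, 0], [1, 1], [0, 1], [1, 1]])
two_level_vote([a, b, c])  # -> array([1, 1])
```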