Ruqian Hao, Khashayar Namdar, Lin Liu, Farzad Khalvati.
Abstract
Brain tumors are among the leading causes of cancer-related death globally in both children and adults. Precise classification of brain tumor grade (low-grade vs. high-grade glioma) at an early stage plays a key role in prognosis and treatment planning. With recent advances in deep learning, artificial intelligence-enabled brain tumor grading systems can assist radiologists in interpreting medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset, and labeling a large quantity of medical images is extremely challenging given the complexity and volume of medical data. In this work, we propose a novel transfer learning-based active learning framework to reduce annotation cost while maintaining the stability and robustness of model performance for brain tumor classification. In this retrospective study, we employed a 2D slice-based approach to train and fine-tune our model on a magnetic resonance imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With the proposed method, the model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 82.89% on a separate test dataset of 66 patients, 2.92% higher than the baseline AUC, while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset that underwent the same procedure. The model achieved an AUC of 82% compared with 78.48% for the baseline, which reaffirms the robustness and stability of the proposed transfer learning framework augmented with active learning while significantly reducing the size of the training data.
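The core of the described framework is uncertainty-based active learning: a fine-tuned model scores the unlabeled slices, and only the most uncertain ones are sent for annotation. A minimal numpy sketch of entropy-based uncertainty sampling (function names are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_uncertainty(logits):
    # predictive entropy: higher means the model is less certain
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def select_most_uncertain(logits, k):
    # indices of the k unlabeled slices with the highest uncertainty
    u = entropy_uncertainty(logits)
    return np.argsort(u)[::-1][:k]

# toy example: 4 unlabeled slices, 2 classes (low-grade vs. high-grade glioma)
logits = np.array([[4.0, -4.0],   # confident prediction
                   [0.1, -0.1],   # near the decision boundary
                   [3.0, -3.0],
                   [0.0, 0.0]])   # maximally uncertain
print(select_most_uncertain(logits, 2))  # → [3 1]
```

The selected slices would then be labeled and added to the training pool, and the model fine-tuned again; the loop repeats until the labeling budget is exhausted.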
Keywords: MRI; active learning; brain tumor; classification; transfer learning
Year: 2021 PMID: 34079932 PMCID: PMC8165261 DOI: 10.3389/frai.2021.635766
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Detailed architecture of AlexNet.
| Layer | Kernel size | Stride | Padding | Output size |
|---|---|---|---|---|
| Conv1 | 11 × 11 | 4 | 2 | 64 × 55 × 55 |
| Maxpool1 | 3 × 3 | 2 | 0 | 64 × 27 × 27 |
| Conv2 | 5 × 5 | 1 | 2 | 192 × 27 × 27 |
| Maxpool2 | 3 × 3 | 2 | 0 | 192 × 13 × 13 |
| Conv3 | 3 × 3 | 1 | 1 | 384 × 13 × 13 |
| Conv4 | 3 × 3 | 1 | 1 | 256 × 13 × 13 |
| Conv5 | 3 × 3 | 1 | 1 | 256 × 13 × 13 |
| Maxpool3 | 3 × 3 | 2 | 0 | 256 × 6 × 6 |
| FC1 | – | – | – | 4096 × 1 |
| FC2 | – | – | – | 4096 × 1 |
| FC3 | – | – | – | 2 × 1 |
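The spatial output sizes in the table follow the standard convolution arithmetic, out = ⌊(n + 2p − k)/s⌋ + 1, starting from a 224 × 224 input. A quick check of the chain under the standard AlexNet configuration (Conv2 with stride 1, as in the torchvision implementation):

```python
def conv_out(n, k, s, p):
    # spatial output size of a conv/pool layer: floor((n + 2p - k) / s) + 1
    return (n + 2 * p - k) // s + 1

n = 224                      # input slice resized to 224 x 224
n = conv_out(n, 11, 4, 2)    # Conv1 -> 55
n = conv_out(n, 3, 2, 0)     # Maxpool1 -> 27
n = conv_out(n, 5, 1, 2)     # Conv2 (stride 1) -> 27
n = conv_out(n, 3, 2, 0)     # Maxpool2 -> 13
n = conv_out(n, 3, 1, 1)     # Conv3 -> 13
n = conv_out(n, 3, 1, 1)     # Conv4 -> 13
n = conv_out(n, 3, 1, 1)     # Conv5 -> 13
n = conv_out(n, 3, 2, 0)     # Maxpool3 -> 6
print(n)  # → 6
```

The 256 × 6 × 6 Maxpool3 output is flattened into a 9216-dimensional vector feeding FC1; FC3 has 2 outputs for the binary low-grade/high-grade classification.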
FIGURE 1 Workflow of proposed transfer learning–based active learning framework.
AUC results of AlexNet trained from scratch and fine-tuned from the pretrained model.
| AUC (95% CI) | Pretrained AlexNet | AlexNet trained from scratch |
|---|---|---|
| Validation dataset | 87.46% (87.11, 87.81) | 86.14% (85.60, 86.68) |
| Test dataset | 79.91% (78.95, 80.87) | 71.93% (70.76, 73.10) |
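The 95% confidence intervals reported alongside each AUC can be obtained by bootstrapping the AUC over resampled predictions. A minimal numpy sketch using the rank-based (Mann–Whitney) AUC and a percentile bootstrap (an illustrative procedure, not necessarily the authors' exact one):

```python
import numpy as np

def auc(y_true, scores):
    # Mann-Whitney / rank formulation of ROC AUC for binary labels (no tied scores)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    # percentile bootstrap: resample cases with replacement, recompute AUC
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample contains only one class; AUC undefined
        stats.append(auc(y_true[idx], scores[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# toy example: 4 cases, 2 per class
y = np.array([0, 0, 1, 1])
s = np.array([0.10, 0.40, 0.35, 0.80])
print(auc(y, s))  # → 0.75
```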
FIGURE 2 Visualization of uncertainty distribution of training dataset: (A) unsorted and (B) sorted.
FIGURE 3 CNN performance on samples from different uncertainty ranges.
FIGURE 4 The distribution of 10% examples with the highest and lowest uncertainty scores.
AUC results of the proposed method and baseline AUC.
| AUC (95% CI) | Proposed method | Baseline |
|---|---|---|
| Validation dataset | 86.86% (86.48, 87.24) | 87.46% (87.11, 87.81) |
| Test dataset | 82.89% (81.87, 83.91) | 79.91% (78.95, 80.87) |
FIGURE 5 Comparison of AUC results of the proposed method and baseline.
AUC results of the proposed method and baseline AUC on the balanced dataset.
| AUC (95% CI) | Proposed method | Baseline |
|---|---|---|
| Validation dataset | 85.20% (84.88, 85.52) | 87.17% (86.87, 87.47) |
| Test dataset | 82.00% (81.18, 82.82) | 78.48% (77.60, 79.36) |
FIGURE 6 Comparison of AUC results of the proposed method and baseline on the balanced dataset.
Correspondence between the proportion of sample size and the number of examples on the imbalanced dataset and the balanced dataset.
| Proportion of sample size | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% |
|---|---|---|---|---|---|---|---|---|
| Number of examples (imbalanced dataset) | 406 | 812 | 1218 | 1624 | 2030 | 2436 | 2842 | 3248 |
| Number of examples (balanced dataset) | 487 | 974 | 1461 | 1948 | 2435 | 2922 | 3409 | 3896 |
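The table's counts are fixed percentages of the total number of training slices, which the 10% column implies to be 4060 for the imbalanced dataset and 4870 for the balanced one. A quick check:

```python
# Totals implied by the table's 10% column (406 and 487 slices).
total_imbalanced = 4060
total_balanced = 4870

imbalanced = [total_imbalanced * p // 100 for p in range(10, 90, 10)]
balanced = [total_balanced * p // 100 for p in range(10, 90, 10)]
print(imbalanced)  # → [406, 812, 1218, 1624, 2030, 2436, 2842, 3248]
print(balanced)    # → [487, 974, 1461, 1948, 2435, 2922, 3409, 3896]
```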
FIGURE 7 Comparison of test AUC results of the uncertainty sampling method and random sampling method on the imbalanced dataset.
FIGURE 8 Comparison of test AUC results of the uncertainty sampling method and random sampling method on the balanced dataset.