| Literature DB >> 35965745 |
Usman Zahid1, Imran Ashraf1, Muhammad Attique Khan2, Majed Alhaisoni3, Khawaja M Yahya4, Hany S Hussein5,6, Hammam Alshazly7.
Abstract
Early detection of brain tumors can save precious human lives. This work presents a fully automated design to classify brain tumors. The proposed scheme employs optimal deep learning features for the classification of FLAIR, T1, T2, and T1CE tumor images. Initially, we normalized the dataset and passed it to the pretrained ResNet101 model to perform transfer learning, fine-tuning the model for brain tumor classification. A drawback of this approach is the generation of redundant features, which degrade accuracy and cause computational overhead. To tackle this problem, we select optimal features using the differential evolution (DE) and particle swarm optimization (PSO) algorithms. The obtained optimal feature vectors are then serially fused into a single feature vector, and PCA is applied to this fused vector to obtain the final optimized feature vector. This optimized feature vector is fed as input to various classifiers, and performance is analyzed at each stage. The results show that the proposed technique achieves a 25.5x speedup in prediction time on the medium neural network with an accuracy of 94.4%, a significant improvement over state-of-the-art techniques in computational overhead while maintaining approximately the same accuracy.
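The fusion-and-reduction stage described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, PCA component count, and synthetic data are assumptions chosen for demonstration. Two optimally selected feature sets (standing in for the PSO- and DE-selected ResNet101 features) are serially concatenated, PCA reduces the fused vector, and the result feeds a classifier (a cubic SVM, as in the paper's best-accuracy case).

```python
# Hypothetical sketch of serial feature fusion followed by PCA reduction.
# All shapes, component counts, and data here are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
X_pso = rng.normal(size=(n_samples, 120))  # features kept by PSO selection (assumed size)
X_de = rng.normal(size=(n_samples, 150))   # features kept by DE selection (assumed size)
y = rng.integers(0, 4, size=n_samples)     # 4 classes: T1, T1CE, T2, FLAIR

# Serial fusion: concatenate the two selected feature vectors per sample.
X_fused = np.concatenate([X_pso, X_de], axis=1)        # shape (200, 270)

# PCA yields the final optimized feature vector fed to the classifiers.
X_final = PCA(n_components=50).fit_transform(X_fused)  # shape (200, 50)

# A cubic SVM corresponds to a polynomial kernel of degree 3.
clf = SVC(kernel="poly", degree=3).fit(X_final, y)
print(X_fused.shape, X_final.shape)
```

The same pattern extends to any number of selected feature sets: concatenate along the feature axis, then reduce once on the fused matrix.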
Year: 2022 PMID: 35965745 PMCID: PMC9371837 DOI: 10.1155/2022/1465173
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. The proposed approach for brain tumor classification.
Figure 2. Sample database images.
Summary of the dataset.
| Class | Database name | Total images |
|---|---|---|
| T1 | BraTS2018 | 28,446 |
| T1CE | BraTS2018 | 28,969 |
| T2 | BraTS2018 | 28,759 |
| FLAIR | BraTS2018 | 28,413 |
Figure 3. ResNet101 architecture.
Figure 4. Deep transfer learning process.
Figure 5. The proposed training process of the deep learning model for brain tumor classification.
Figure 6. Prediction results in the form of (a) labeled images and (b) numerical results.
Comparison of prediction accuracy of brain tumors.
| Classifiers | Org. ResNet101 (%) | Optimized PSO (%) | Optimized DE (%) | Feature fusion (%) | % difference (Org vs FF) |
|---|---|---|---|---|---|
| Fine tree | 89.6 | 88.8 | 88 | 92.6 | 3.3% |
| Linear discriminant | 96 | 95.7 | 95.6 | 95.7 | −0.3% |
| Cubic SVM | 96.7 | 96.7 | 96.6 | 96.7 | 0% |
| Boosted trees | 92.8 | 92.1 | 92.3 | 92.5 | −0.3% |
| Bagged trees | 95.5 | 95.4 | 95.3 | 94.5 | −1% |
| Subspace discriminant | 95.8 | 95.5 | 95.4 | 95.4 | −0.4% |
| Narrow neural network | 95.7 | 95.5 | 95.4 | 93.9 | −1.9% |
| Medium neural network | 96 | 95.8 | 95.8 | 94.4 | −1.7% |
| Wide neural network | 96.1 | 95.9 | 96.1 | 95.4 | −0.7% |
Comparison of the prediction time (sec) of brain tumors.
| Classifiers | Original ResNet101 (s) | Optimized PSO (s) | Optimized DE (s) | Feature fusion (s) | Speedup (Org vs FF) |
|---|---|---|---|---|---|
| Fine tree | 286.34 | 139.28 | 159.25 | 156.73 | 1.8x |
| Linear discriminant | 336.99 | 62.48 | 117.11 | 52.093 | 6.5x |
| Cubic SVM | 4461.8 | 954.87 | 2088.8 | 3901.4 | 1.1x |
| Boosted trees | 5615.4 | 2691.4 | 3013.2 | 2688.8 | 2.1x |
| Bagged trees | 774.26 | 326.44 | 460.75 | 403.45 | 1.9x |
| Subspace discriminant | 3063.2 | 688.63 | 1287.2 | 552.78 | 5.5x |
| Narrow neural network | 5999.1 | 1332.7 | 2754.2 | 1662.1 | 3.6x |
| Medium neural network | 4906.5 | 1457.7 | 3636.3 | 192.56 | 25.5x |
| Wide neural network | 1310.4 | 404.63 | 701.48 | 100.38 | 13.1x |
Figure 7. The confusion matrix of the cubic SVM after the original feature classification.
Figure 8. The confusion matrix of the cubic SVM after applying the particle swarm optimization (PSO)-based feature selection.
Figure 9. The confusion matrix of the cubic SVM after applying the differential evolution (DE)-based feature selection.
Figure 10. The confusion matrix of the cubic SVM after applying the optimal feature fusion.
Figure 11. Comparison of accuracy results with the state of the art.
Figure 12. Comparison of the prediction time (logarithmic scale) with the state of the art.
Detailed results after feature fusion.
| Classifiers | Sensitivity (%) | FNR (%) | Precision (%) | FPR | AUC |
|---|---|---|---|---|---|
| Fine tree | 92.575 | 7.425 | 92.575 | 0.0275 | 0.925 |
| Linear discriminant | 95.7 | 4.3 | 95.775 | 0.0125 | 0.9575 |
| Cubic SVM | 96.75 | 3.25 | 96.75 | 0.01 | 0.97 |
| Boosted trees | 92.5 | 7.5 | 92.8 | 0.0225 | 0.925 |
| Bagged trees | 94.475 | 5.525 | 94.55 | 0.02 | 0.945 |
| Subspace discriminant | 95.45 | 4.55 | 95.45 | 0.015 | 0.9525 |
| Narrow neural network | 93.9 | 6.1 | 93.9 | 0.02 | 0.94 |
| Medium neural network | 94.425 | 5.575 | 94.425 | 0.02 | 0.9425 |
| Wide neural network | 95.375 | 4.625 | 95.375 | 0.015 | 0.955 |
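The metrics in the table above (sensitivity, FNR, precision, FPR) can be derived from a multiclass confusion matrix such as those shown in Figures 7-10, macro-averaged over the four classes. The sketch below is illustrative only: the confusion-matrix values are made up for demonstration and are not the paper's results.

```python
# Macro-averaged sensitivity, FNR, precision, and FPR from a 4-class
# confusion matrix (rows = true class, columns = predicted class).
# The matrix entries below are assumed example values, not paper results.
import numpy as np

cm = np.array([[90,  4,  3,  3],
               [ 5, 88,  4,  3],
               [ 2,  5, 89,  4],
               [ 3,  2,  4, 91]], dtype=float)

tp = np.diag(cm)                  # correctly classified samples per class
fn = cm.sum(axis=1) - tp          # missed samples of each class
fp = cm.sum(axis=0) - tp          # samples wrongly assigned to each class
tn = cm.sum() - (tp + fn + fp)    # everything else

sensitivity = (tp / (tp + fn)).mean() * 100   # a.k.a. recall, in %
fnr = 100 - sensitivity                       # false negative rate, in %
precision = (tp / (tp + fp)).mean() * 100     # in %
fpr = (fp / (fp + tn)).mean()                 # reported as a proportion

print(sensitivity, fnr, precision, fpr)
```

Note that FNR is the complement of sensitivity, which matches the table: each row's sensitivity and FNR sum to 100%.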
A comparative study of the proposed methodologies on the BraTS2018 dataset.
| Research papers | Maximum achieved accuracy (%) | Maximum achieved execution-time speedup |
|---|---|---|
| PSO features + softmax [ | 92.50 | NA |
| PLS features + ELM [ | 93.40 | NA |
| Two-channel DNN [ | 93.69 | NA |
| MANet [ | 94.91 | NA |
Bold represents the best values.