Mirza Mumtaz Zahoor1,2,3, Shahzad Ahmad Qureshi1,2, Sameena Bibi4, Saddam Hussain Khan1,2,5, Asifullah Khan1,2,6, Usman Ghafoor7,8, Muhammad Raheel Bhutta9.
Abstract
Brain tumor analysis is essential to the timely diagnosis and effective treatment of patients. Tumor analysis is challenging because of tumor morphology factors such as size, location, texture, and heteromorphic appearance in medical images. In this regard, a novel two-phase deep learning-based framework is proposed to detect and categorize brain tumors in magnetic resonance images (MRIs). In the first phase, a novel deep-boosted feature space and ensemble classifiers (DBFS-EC) scheme is proposed to effectively distinguish tumor MRI images from those of healthy individuals. The deep-boosted feature space is obtained from customized, well-performing deep convolutional neural networks (CNNs) and is then fed into an ensemble of machine learning (ML) classifiers. In the second phase, a new hybrid feature fusion-based brain tumor classification approach is proposed, comprising both static and dynamic features with an ML classifier to categorize different tumor types. The dynamic features are extracted from the proposed brain region-edge net (BRAIN-RENet) CNN, which is able to learn the heteromorphic and inconsistent behavior of various tumors, while the static features are extracted using a histogram of oriented gradients (HOG) feature descriptor. The effectiveness of the proposed two-phase brain tumor analysis framework is validated on two standard benchmark datasets, collected from Kaggle and Figshare, which contain different tumor types, including glioma, meningioma, pituitary, and normal images. Experimental results suggest that the proposed DBFS-EC detection scheme outperforms standard methods, achieving accuracy (99.56%), precision (0.9991), recall (0.9899), F1-score (0.9945), MCC (0.9892), and AUC-PR (0.9990).
The classification scheme, based on the fusion of the feature spaces of the proposed BRAIN-RENet and HOG, significantly outperforms state-of-the-art methods in terms of recall (0.9913), precision (0.9906), accuracy (99.20%), and F1-score (0.9909) on the CE-MRI dataset.
Keywords: analysis; brain tumor; classification; convolutional neural network; deep-boosted learning; detection; ensemble learning; hybrid learning; transfer learning
Year: 2022 PMID: 35408340 PMCID: PMC9002515 DOI: 10.3390/s22072726
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Overall description of the proposed two-phase brain tumor analysis framework.
Augmentation methods.
| Method | Parameters |
|---|---|
| Image-Rotation | 0 to 360 degrees |
| Image-Shearing | −0.05 to +0.05 |
| Image-Scaling | 0.5–1 limit |
| Image Reflection | ±1 in the right–left direction |
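The augmentation settings above can be sketched as follows; this is a minimal illustration with SciPy on a synthetic grayscale slice, not the authors' actual pipeline.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
img = rng.random((128, 128))  # stand-in for one grayscale MRI slice

# Rotation: random angle in [0, 360) degrees.
rotated = ndimage.rotate(img, angle=rng.uniform(0, 360), reshape=False)

# Shearing: shear factor drawn from [-0.05, +0.05].
shear = rng.uniform(-0.05, 0.05)
sheared = ndimage.affine_transform(img, np.array([[1.0, shear], [0.0, 1.0]]))

# Scaling: zoom factor drawn from [0.5, 1.0].
scaled = ndimage.zoom(img, rng.uniform(0.5, 1.0))

# Reflection: flip in the right-left direction.
reflected = np.fliplr(img)

print(rotated.shape, sheared.shape, scaled.shape, reflected.shape)
```

Each transform yields a new training sample while preserving the tumor's appearance statistics; rotation and reflection in particular are label-preserving for axial MRI slices.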
Figure 2. Detailed description of the proposed brain tumor analysis framework. (A) Block flow diagram. (B) Overall workflow overview. (C) Detailed block diagram of the proposed deep learning-based brain tumor detection scheme.
Figure 3. (A) Concise details and (B) comprehensive details of the proposed HFF-BTC model.
Figure 4. The proposed BRAIN-RENet deep CNN for brain tumor classification.
Figure 5. Sample images from the dataset of normal and tumor classes. (A) Normal. (B) Glioma. (C) Meningioma. (D) Pituitary.
Assessment Metric Details.
| Metric | Description |
|---|---|
| Precision (Pre.) | The fraction of correctly detected positives out of all predicted positives |
| Recall (Rec.) | The proportion of actual positives that are correctly identified |
| Accuracy (Acc.) | The percentage of correct detections out of all predictions |
| MCC | Matthews correlation coefficient |
| F1-Score | The harmonic mean of Pre. and Rec. |
| TP | Truly positive prediction |
| TN | Truly negative prediction |
| FP | Falsely positive prediction |
| FN | Falsely negative prediction |
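All of the metrics in the table follow directly from the TP/TN/FP/FN counts. A minimal sketch (the counts below are illustrative, not taken from the paper):

```python
import math

def detection_metrics(tp, tn, fp, fn):
    """Compute the assessment metrics from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return precision, recall, accuracy, f1, mcc

# Illustrative confusion counts for a binary tumor/normal split.
pre, rec, acc, f1, mcc = detection_metrics(tp=98, tn=100, fp=1, fn=2)
print(round(pre, 4), round(rec, 4), round(acc * 100, 2), round(f1, 4))
```

MCC is bounded in [−1, 1] and, unlike accuracy, remains informative when the tumor/normal classes are imbalanced.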
Softmax probability-based performance of custom-made CNN models with 60:40 data partitioning (training:testing).
| Model | Training Scheme | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|
| Transfer Learning-Based (TL-B) | Training from Scratch (TR-SC) | |||||||||
| Acc. % | Rec. | Pre. | F1-Score | MCC | Acc. % | Rec. | Pre. | F1-Score | MCC | |
| ShuffleNet | 98.52 | 0.9824 | 0.9868 | 0.9846 | 0.9694 | 90.51 | 0.9849 | 0.8702 | 0.9241 | 0.8455 |
| VGG-16 | 98.76 | 0.9837 | 0.9901 | 0.9869 | 0.9739 | 94.07 | 0.9899 | 0.9155 | 0.9512 | 0.9016 |
| SqueezeNet | 98.91 | 0.9849 | 0.9917 | 0.9883 | 0.9768 | 95.36 | 0.9498 | 0.9556 | 0.9527 | 0.9058 |
| VGG-19 | 98.22 | 0.9949 | 0.9744 | 0.9846 | 0.9691 | 96.54 | 0.9799 | 0.9569 | 0.9683 | 0.9362 |
| ResNet-50 | 98.42 | 0.9649 | 0.9966 | 0.9805 | 0.9621 | 97.53 | 0.9448 | 0.9948 | 0.9692 | 0.9412 |
| Xception | 98.81 | 0.9824 | 0.9917 | 0.9807 | 0.9743 | 97.23 | 0.9599 | 0.9801 | 0.9698 | 0.9405 |
| Inception-V3 | 98.52 | 0.9924 | 0.9806 | 0.9856 | 0.9730 | 97.63 | 0.9573 | 0.9882 | 0.9725 | 0.9464 |
| Resnet-18 | 98.91 | 0.9774 | 0.9966 | 0.9869 | 0.9744 | 97.43 | 0.9812 | 0.9701 | 0.9756 | 0.9511 |
| GoogleNet | 98.52 | 0.9924 | 0.9806 | 0.9856 | 0.9731 | 97.53 | 0.9937 | 0.9643 | 0.9788 | 0.9575 |
| DenseNet-201 | 98.86 | 0.9724 | 0.9991 | 0.9856 | 0.9720 | 98.17 | 0.9636 | 0.9932 | 0.9782 | 0.9576 |
Performance comparison of Softmax probability-based classification against SVM-based classification of deep features extracted from the four best-performing custom-made TL-B CNN models selected for the proposed DFS-BTD framework, with 60:40 data partitioning (training:testing).
| Model | DFS-HL Scheme | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|
| Transfer Learning-Based (TL-B) | 4 Best Performing Transfer Learning-Based | |||||||||
| Acc. % | Rec. | Pre. | F1-Score | MCC | Acc. % | Rec. | Pre. | F1-Score | MCC | |
| Inception-V3 | 98.52 | 0.9924 | 0.9806 | 0.9856 | 0.973 | 99.01 | 0.9824 | 0.9950 | 0.9887 | 0.9776 |
| Resnet-18 | 98.91 | 0.9774 | 0.9966 | 0.9869 | 0.9744 | 99.16 | 0.9799 | 0.9991 | 0.9894 | 0.9793 |
| GoogleNet | 98.52 | 0.9924 | 0.9806 | 0.9856 | 0.9731 | 99.11 | 0.9849 | 0.9950 | 0.9899 | 0.9801 |
| DenseNet-201 | 98.86 | 0.9724 | 0.9991 | 0.9856 | 0.9720 | 99.06 | 0.9887 | 0.9918 | 0.9902 | 0.9806 |
Performance comparison of MLP- and AdaBoostM1-based classification of deep features extracted from the four best-performing custom-made TL-B CNN models selected for the proposed DFS-BTD framework, with 60:40 data partitioning (training:testing).
| Model | DFS-HL Scheme | |||||||||
|---|---|---|---|---|---|---|---|---|---|---|
| 4 Best Performing Transfer Learning-Based | 4 Best Performing Transfer Learning-Based (TL-B) with AdaBoostM1 | |||||||||
| Acc. % | Rec. | Pre. | F1-Score | MCC | Acc. % | Rec. | Pre. | F1-Score | MCC | |
| Inception-V3 | 99.31 | 0.9824 | 1.0000 | 0.9911 | 0.9826 | 99.06 | 0.9899 | 0.9910 | 0.9905 | 0.9810 |
| Resnet-18 | 99.26 | 0.9912 | 0.9934 | 0.9923 | 0.9847 | 99.41 | 0.9912 | 0.9959 | 0.9935 | 0.9872 |
| GoogleNet | 99.36 | 0.9874 | 0.9975 | 0.9924 | 0.9851 | 99.11 | 0.9824 | 0.9966 | 0.9895 | 0.9793 |
| DenseNet-201 | 99.41 | 0.9862 | 0.9991 | 0.9926 | 0.9855 | 99.46 | 0.9874 | 0.9991 | 0.9932 | 0.9867 |
Deep-boosted feature space and ensemble classification (DBFS-EC) with 60:40 data partitioning (training:testing).
| Classifiers | Deep Hybrid Boosted Feature Space | ||||
|---|---|---|---|---|---|
| Acc. % | Rec. | Pre. | F1-Score | MCC | |
| SVM | 99.41 | 0.9924 | 0.9950 | 0.9937 | 0.9876 |
| MLP | 99.41 | 0.9974 | 0.9918 | 0.9940 | 0.9888 |
| AdaboostM1 | 99.46 | 0.9862 | 1.0000 | 0.9941 | 0.9863 |
| Proposed DBFS-EC | 99.56 | 0.9899 | 0.9991 | 0.9945 | 0.9892 |
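A minimal sketch of the DBFS-EC idea behind this table: concatenate the per-CNN deep features into one boosted feature space and combine SVM, MLP, and AdaBoostM1-style classifiers by voting. Random stand-in features and scikit-learn estimators are used here; this is not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier

# Hypothetical stand-ins for deep features produced by the four
# fine-tuned CNN backbones (one feature matrix per network).
rng = np.random.default_rng(0)
feats_per_cnn = [rng.normal(size=(200, 64)) for _ in range(4)]
labels = rng.integers(0, 2, size=200)  # 0 = healthy, 1 = tumor

# "Deep-boosted" feature space: concatenate the per-CNN features.
boosted = np.concatenate(feats_per_cnn, axis=1)  # (200, 256)

# Ensemble of ML classifiers over the boosted space (soft voting).
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("mlp", MLPClassifier(max_iter=500)),
        ("ada", AdaBoostClassifier()),
    ],
    voting="soft",
)
ensemble.fit(boosted, labels)
preds = ensemble.predict(boosted)
print(boosted.shape, preds.shape)
```

Soft voting averages the classifiers' predicted probabilities, which matches the table's pattern of the ensemble edging out each individual classifier.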
Figure 6. ROC curves for (A) the existing TR-SC-based CNNs, (B) the TL-B models, and (C) the proposed DBFS-EC framework.
Performance comparison of proposed HFF-BTC with existing ML models.
| Classifiers | Parameters | HFF-HL | |||
|---|---|---|---|---|---|
| Rec. | Pre. | Acc. % | F1-Score | ||
| Naïve Bayes | Gaussian Kernel | 0.9160 | 0.8923 | 90.5 | 0.9039 |
| Decision Tree | - | 0.9223 | 0.9280 | 93.3 | 0.9251 |
| Ensemble | AdaboostM2 | 0.9670 | 0.9670 | 96.9 | 0.9670 |
| SVM | Linear kernel | 0.9743 | 0.9616 | 96.9 | 0.9679 |
| Poly. Order 2 | 0.9823 | 0.9890 | 98.7 | 0.9856 | |
| RBF | 0.9883 | 0.9866 | 98.9 | 0.9874 | |
| Proposed Framework (HFF-BTC) | Dynamic + Static-SVM | 0.9906 | 0.9913 | 99.2 | 0.9909 |
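The hybrid feature fusion in the last row concatenates the dynamic (BRAIN-RENet) and static (HOG) feature spaces before the SVM. A rough sketch, with a simplified gradient-orientation histogram standing in for full HOG and random embeddings standing in for BRAIN-RENet features:

```python
import numpy as np
from sklearn.svm import SVC

def grad_hist(im, bins=9):
    """Simplified HOG-like static descriptor: one global
    magnitude-weighted orientation histogram (full HOG adds
    cell/block structure on top of this)."""
    gy, gx = np.gradient(im)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

rng = np.random.default_rng(1)
images = rng.random((60, 64, 64))      # stand-in MRI slices
labels = rng.integers(0, 3, size=60)   # glioma / meningioma / pituitary

static = np.array([grad_hist(im) for im in images])  # (60, 9)
dynamic = rng.normal(size=(60, 128))   # stand-in BRAIN-RENet embeddings

# Hybrid feature fusion: concatenate both spaces, then classify.
fused = np.concatenate([static, dynamic], axis=1)
clf = SVC(kernel="rbf").fit(fused, labels)
print(fused.shape)
```

Concatenation lets the SVM weigh learned (dynamic) and hand-crafted (static) cues jointly, which is the source of the gain over either feature space alone in the table above.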
Figure 7. Confusion matrix-based performance comparison of (A) the proposed HFF-BTC, (B) SVM poly. order 2, (C) AdaBoostM2, (D) Decision Tree, and (E) Naïve Bayes.
Performance evaluation of proposed HFF-BTC with state-of-the-art models.
| Method | The Proposed Classification Setup | |||
|---|---|---|---|---|
| Rec. | Pre. | Acc. % | F1-Score | |
| Cheng et al. [ | 0.8105 | 0.9201 | 91.28 | - |
| Badža et al. [ | 0.9782 | 0.9715 | 97.28 | 0.9747 |
| Gumaei et al. [ | - | - | 94.23 | - |
| Díaz Pernas et al. [ | - | - | 97.30 | - |
| Proposed BRAIN-RENet-SVM | 0.9683 | 0.9750 | 97.40 | 0.9716 |
| HOG-SVM | 0.8906 | 0.8790 | 87.20 | 0.8897 |
| Proposed HFF-BTC | 0.9906 | 0.9913 | 99.20 | 0.9909 |
Figure 8. Performance comparison of HFF-BTC models in terms of (A) F1-score, (B) accuracy, (C) recall, and (D) precision.
Figure 9. Feature space visualization of (A) the proposed DBFS-EC framework, (B) DFS-HL, (C) well-performing TL-B, and (D) TR-SC models.
Figure 10. Feature space visualization of (A) the proposed HFF-BTC framework, (B) the HOG descriptor, and (C) the proposed BRAIN-RENet.