Md Saikat Islam Khan1, Anichur Rahman1,2, Tanoy Debnath1,3, Md Razaul Karim1, Mostofa Kamal Nasir1, Shahab S Band4, Amir Mosavi5,6, Iman Dehzangi7,8.
Abstract
Detection and classification of brain tumors is an important step toward better understanding their mechanism. Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique that helps the radiologist locate the tumor region. However, examining MRI images manually is a time-consuming process that requires expertise. Nowadays, advances in Computer-assisted Diagnosis (CAD), machine learning, and deep learning in particular allow the radiologist to identify brain tumors more reliably. Traditional machine learning methods for this problem require handcrafted features for classification, whereas deep learning methods can be designed to achieve accurate classification results without any handcrafted feature extraction. This paper proposes two deep learning models to identify both binary (normal and abnormal) and multiclass (meningioma, glioma, and pituitary) brain tumors. We use two publicly available datasets that include 3064 and 152 MRI images, respectively. To build our models, we first apply a 23-layer convolutional neural network (CNN) to the first dataset, since it contains a large number of MRI images for training. However, when dealing with limited data, as in the second dataset, our proposed "23-layer CNN" architecture faces an overfitting problem. To address this issue, we use transfer learning, combining the VGG16 architecture with a reflection of our proposed "23-layer CNN" architecture. Finally, we compare our proposed models with those reported in the literature. Our experimental results indicate that our models achieve up to 97.8% and 100% classification accuracy on the two datasets, respectively, exceeding all other state-of-the-art models. Our proposed models, employed datasets, and all source code are publicly available at https://github.com/saikat15010/Brain-Tumor-Detection.
Keywords: Brain tumor; Computer-assisted diagnosis; Convolutional neural network; Data augmentation; Magnetic resonance imaging
Year: 2022 PMID: 36147663 PMCID: PMC9468505 DOI: 10.1016/j.csbj.2022.08.039
Source DB: PubMed Journal: Comput Struct Biotechnol J ISSN: 2001-0370 Impact factor: 6.155
Fig. 1 Proposed architecture for brain tumor detection.
Fig. 2 Different samples of brain tumors: Glioma, Metastatic adenocarcinoma, Metastatic bronchogenic carcinoma, Meningioma, and Sarcoma tumors (left to right) from the Harvard Medical dataset. The tumor is located within the rectangle.
Number of MRI slices in dataset 1.
| Tumor Class | Number of Patients | Number of MRI Slices |
|---|---|---|
| Meningioma | 82 | 708 |
| Glioma | 91 | 1426 |
| Pituitary | 60 | 930 |
| Total | 233 | 3064 |
Number of MRI slices in dataset 2.
| Brain | Tumor Class | Number of Slices |
|---|---|---|
| Normal | Normal image | 71 |
| Abnormal | Glioma | 29 |
| | Metastatic adenocarcinoma | 8 |
| | Metastatic bronchogenic carcinoma | 12 |
| | Meningioma | 16 |
| | Sarcoma | 16 |
| Total | | 152 |
MRI slice distribution for training, validation, and testing.
| Dataset | Brain Tumor Type | Training | Validation | Testing |
|---|---|---|---|---|
| Harvard Medical | Normal | 357 | 42 | 14 |
| | Abnormal | 406 | 49 | 16 |
| Figshare | Meningioma | 502 | 56 | 150 |
| | Glioma | 1032 | 115 | 279 |
| | Pituitary | 674 | 75 | 181 |
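A minimal sketch of a per-class train/validation/test split like the one in the table above. The ~71/8/21 fractions are inferred from the Figshare counts (e.g. Meningioma: 502/56/150 of 708 slices); they are an approximation for illustration, not the authors' actual splitting code.

```python
import random

def split_indices(n_slices, train_frac=0.71, val_frac=0.08, seed=0):
    """Shuffle slice indices and cut them into three disjoint splits."""
    idx = list(range(n_slices))
    random.Random(seed).shuffle(idx)
    n_train = round(n_slices * train_frac)
    n_val = round(n_slices * val_frac)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(708)  # Meningioma slice count from the table
print(len(train) + len(val) + len(test))  # every slice lands in exactly one split
```

Shuffling before cutting matters here because slices from the same patient are stored consecutively; a seeded shuffle keeps the split reproducible.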
Data augmentation strategy used in this study.
| Serial | Parameter | Value |
|---|---|---|
| 1 | shear_range | 0.2 |
| 2 | zoom_range | 0.2 |
| 3 | rotation_range | 90 |
| 4 | width_shift_range | 0.1 |
| 5 | height_shift_range | 0.1 |
| 6 | vertical_flip | True |
| 7 | horizontal_flip | True |
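A pure-Python sketch of the geometric augmentations in the table (the two flips and the 90° rotation). In practice these parameters map onto a Keras-style image-augmentation pipeline; the toy functions below, written for illustration only, make the operations concrete (shear, zoom, and shifts are omitted for brevity).

```python
# Toy versions of three augmentations from the table, applied to a 2-D image
# stored as a list of rows.
def horizontal_flip(img):
    """Mirror each row left-to-right (horizontal_flip=True)."""
    return [row[::-1] for row in img]

def vertical_flip(img):
    """Reverse the row order top-to-bottom (vertical_flip=True)."""
    return img[::-1]

def rotate90(img):
    """Rotate the image 90 degrees clockwise (the table's 90-degree rotation)."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2],
       [3, 4]]
print(horizontal_flip(img))  # [[2, 1], [4, 3]]
print(vertical_flip(img))    # [[3, 4], [1, 2]]
print(rotate90(img))         # [[3, 1], [4, 2]]
```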
Fig. 3 Proposed 23-layer CNN architecture.
Fig. 4 Convolution operation on a 5×5 image using a 3×3 kernel.
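The operation in Fig. 4 can be sketched directly: sliding a 3×3 kernel over a 5×5 image in "valid" mode yields a (5−3+1) × (5−3+1) = 3×3 feature map. The kernel and image below are arbitrary test values, not the model's learned weights.

```python
# Valid-mode 2-D convolution (cross-correlation, as in CNN frameworks).
def conv2d_valid(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(img) - kh + 1
    out_w = len(img[0]) - kw + 1
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

img = [[r * 5 + c for c in range(5)] for r in range(5)]  # 5x5 test image
kernel = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]               # identity kernel
out = conv2d_valid(img, kernel)
print(len(out), len(out[0]))  # 3 3
print(out[0][0])              # 6 -- center pixel of the top-left 3x3 window
```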
Fig. 5 ReLU operation.
Fig. 6 Dropout layer.
Fig. 7 Max pooling procedure.
Fig. 8 Fine-tuned proposed architecture with the attachment of the transfer-learning-based VGG16 architecture.
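The transfer-learning step in Fig. 8 amounts to freezing the pretrained base and training only the new head (in Keras this is typically done by loading `VGG16(include_top=False)` and setting the base layers' `trainable` flag to `False`). The toy two-parameter model below is an illustration of that effect, not the paper's implementation: the frozen "base" weight never changes while the "head" weight is fitted.

```python
# Toy illustration of freezing pretrained weights: y_hat = head_w * (base_w * x),
# where base_w plays the role of the frozen VGG16 convolutional layers and only
# head_w (the new classification head) receives gradient updates.
def train_step(base_w, head_w, x, y, lr=0.1):
    feat = base_w * x                        # "feature extraction" by the frozen base
    y_hat = head_w * feat                    # prediction by the trainable head
    grad_head = 2 * (y_hat - y) * feat       # d/d(head_w) of squared error
    return base_w, head_w - lr * grad_head   # base_w deliberately not updated

base_w, head_w = 2.0, 0.0
for _ in range(50):
    base_w, head_w = train_step(base_w, head_w, x=1.0, y=4.0)
print(base_w)            # 2.0 -- frozen, unchanged after training
print(round(head_w, 3))  # 2.0 -- head converges so that 2.0 * 2.0 = 4.0
```

Freezing the base is what lets a 152-image dataset train without overfitting: only the small head has free parameters.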
Fig. 9 Training progress for Study I: (a) accuracy during training and validation (higher is better), and (b) loss during training and validation (lower is better).
Optimization of Hyper-Parameters for Study I and Study II.
| Model | Hyper-parameter | Setting |
|---|---|---|
| CNN | Loss function | sparse_categorical_crossentropy |
| | Optimizer | adam |
| | Metrics | accuracy |
| | Epochs | 80 |
| | Batch size | 32 |
| | Learning rate | 0.0001 |
| CNN + pretrained model | Loss function | categorical_crossentropy |
| | Optimizer | adam |
| | Metrics | accuracy |
| | Epochs | 40 |
| | Batch size | 10 |
| | Learning rate | 0.0001 |
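The two loss settings in the table differ only in label format: `sparse_categorical_crossentropy` takes integer class labels, while `categorical_crossentropy` takes one-hot vectors. A minimal sketch with a hypothetical 3-class softmax output (not the paper's code) shows they compute the same quantity:

```python
import math

def sparse_cce(probs, label):
    """Cross-entropy given an integer class label (sparse variant)."""
    return -math.log(probs[label])

def cce(probs, one_hot):
    """Cross-entropy given a one-hot target vector."""
    return -sum(t * math.log(p) for t, p in zip(one_hot, probs) if t > 0)

probs = [0.7, 0.2, 0.1]          # softmax output over 3 tumor classes
print(sparse_cce(probs, 0))      # identical values for equivalent labels
print(cce(probs, [1, 0, 0]))
```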
Fig. 10 CNN model’s performance: (a) confusion matrix, (b) ROC curve.
Results obtained using the CNN model on dataset 1.
| Dataset | Tumor class | TP | TN | FP | FN | Accuracy | Precision | Recall | FPR | TNR | F1-score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Figshare | Meningioma | 140 | 450 | 10 | 10 | 96.7% | 93.3% | 93.3% | 0.021 | 0.978 | 0.933 |
| | Glioma | 270 | 323 | 9 | 8 | 97.2% | 96.8% | 97.1% | 0.027 | 0.972 | 0.969 |
| | Pituitary | 180 | 427 | 1 | 2 | 99.5% | 99.4% | 98.9% | 0.002 | 0.998 | 0.991 |
| | Average | | | | | 97.8% | 96.5% | 96.4% | 0.016 | 0.983 | 0.964 |
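The per-class scores in the table follow directly from the TP/TN/FP/FN counts. A quick sketch, using the Pituitary row as a sanity check (the table's values are rounded):

```python
# Derive the table's metrics from raw counts; the Pituitary row
# (TP=180, TN=427, FP=1, FN=2) reproduces the reported scores.
def metrics(tp, tn, fp, fn):
    return {
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),   # true positive rate (sensitivity)
        "fpr":       fp / (fp + tn),   # false positive rate
        "tnr":       tn / (tn + fp),   # true negative rate (specificity)
        "f1":        2 * tp / (2 * tp + fp + fn),
    }

m = metrics(tp=180, tn=427, fp=1, fn=2)
print(round(m["accuracy"] * 100, 1))   # 99.5
print(round(m["precision"] * 100, 1))  # 99.4
print(round(m["recall"] * 100, 1))     # 98.9
```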
Fig. 13 Fine-tuned model’s performance: (a) confusion matrix, (b) ROC curve.
Fig. 14 Training progress for Study II: (a) accuracy during training and validation (higher is better), and (b) loss during training and validation (lower is better).
Results obtained using the fine-tuned model (the reflection of the proposed CNN) on dataset 2.
| Dataset | Tumor class | TP | TN | FP | FN | Accuracy | Precision | Recall | FPR | TNR | F1-score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Harvard Medical | No Tumor | 14 | 16 | 0 | 0 | 100% | 100% | 100% | 0 | 1 | 100% |
| | Tumor | 16 | 14 | 0 | 0 | 100% | 100% | 100% | 0 | 1 | 100% |
| | Average | | | | | 100% | 100% | 100% | 0 | 1 | 100% |
Fig. 11 Performance of the proposed method on Dataset 1.
Fig. 12 Performance of the proposed method on Dataset 2.
Performance of different configurations on the Figshare dataset.
| Method | Loss Function | Activation Function | Accuracy on Figshare Dataset |
|---|---|---|---|
| 23-Layer CNN | Binary Cross Entropy | Sigmoid | 82% |
| 23-Layer CNN | Binary Cross Entropy | Tanh | 80% |
| 23-Layer CNN | Binary Cross Entropy | Softmax | 84% |
| 23-Layer CNN | Categorical Cross Entropy | Sigmoid | 89% |
| 23-Layer CNN | Categorical Cross Entropy | Tanh | 91% |
| 23-Layer CNN | Categorical Cross Entropy | Softmax | 92% |
| 23-Layer CNN | Sparse Categorical Cross Entropy | Sigmoid | 94% |
| 23-Layer CNN | Sparse Categorical Cross Entropy | Tanh | 95% |
| 23-Layer CNN | Sparse Categorical Cross Entropy | Softmax | 97.8% |
Comparison of the proposed framework with other state-of-the-art models.
| Method | Number of images | Classifier | Classification type | Accuracy (%) |
|---|---|---|---|---|
| Shanaka et al. | 3064 | Deep Learning + Active Contouring | Multi class | 92 |
| Momina et al. | 3064 | Mask RCNN + ResNet-50 | Multi class | 95.9 |
| Francisco et al. | 3064 | CNN | Multi class | 97 |
| Emrah et al. | 3064 | CNN | Multi class | 92.6 |
| Abiwinanda et al. | 700 | CNN | Multi Class | 84.1 |
| Gudigar et al. | 612 | PSO + SVM | Binary Class | 97.4 |
| El-Dahshan et al. | 70 | KNN | Binary Class | 98.6 |
| Sultan et al. | 3064 | CNN | Multi Class | 96.1 |
| Anaraki et al. | 3064 | CNN + GA | Multi Class | 94.2 |
| Afshar et al. | 3064 | CapsNets | Multi Class | 90.8 |
| Chaplot et al. | 52 | SVM | Binary Class | 98.0 |
| Swati et al. | 3064 | VGG19 | Multi Class | 94.8 |
| Sajjad et al. | 3064 | VGG19 | Multi Class | 94.5 |
| Cheng et al. | 3064 | SVM and KNN | Multi Class | 91.2 |
| Proposed Method | 152 | Fine-tuned VGG16 | Binary Class | 100 |
| Proposed Method | 3064 | CNN | Multi Class | 97.8 |
Fig. 15 Performance of the proposed method compared to the latest research.