| Literature DB >> 35634047 |
Shilpa Rani, Deepika Ghai, Sandeep Kumar, MVV Prasad Kantipudi, Amal H. Alharbi, Mohammad Aman Ullah.
Abstract
In computer vision and medical image processing, object recognition is a central problem. Humans need only a few milliseconds to recognize an object after visual stimulation, which motivated the computer-oriented pattern recognition method developed in this study for identifying objects such as brain tumors in medical images. First, an adaptive median filter is used to remove noise from the MRI images; a contrast enhancement technique then improves image quality. To compute the wireframe model, a cellular logic array processing (CLAP)-based algorithm is applied to the images. The basic patterns of the three-dimensional (3D) image are then identified by scanning the whole input image, and the frequencies of these patterns are used for object classification. Finally, a deep neural network classifies the brain tumors: in the proposed model, a syntactic pattern recognition technique produces the feature vector and a 3D AlexNet performs the classification. The performance of the proposed work is evaluated on four benchmark brain tumor datasets: Figshare, Brain MRI Kaggle, Medical MRI, and BraTS 2019. The comparative analyses reveal that the proposed brain tumor classification model achieves significantly better performance than existing models.
Year: 2022 PMID: 35634047 PMCID: PMC9142332 DOI: 10.1155/2022/7882924
Source DB: PubMed Journal: Comput Intell Neurosci
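The preprocessing pipeline described in the abstract (adaptive median filtering for denoising, followed by contrast enhancement) can be sketched as below. This is a minimal illustrative NumPy implementation, not the authors' code; the function names and the simple min-max contrast stretch are assumptions for the sketch.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Denoise a 2D grayscale image with an adaptive median filter.

    For each pixel, the window grows until the local median is not an
    impulse (strictly between the local min and max); the pixel is then
    replaced by the median only if the pixel itself looks like an impulse.
    If the window reaches its maximum size, the median is used directly.
    """
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = max_window // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for k in range(1, pad + 1):
                win = padded[y + pad - k:y + pad + k + 1,
                             x + pad - k:x + pad + k + 1]
                lo, med, hi = win.min(), np.median(win), win.max()
                if lo < med < hi:                    # median is not an impulse
                    if not (lo < img[y, x] < hi):    # pixel is an impulse
                        out[y, x] = med
                    break
            else:
                out[y, x] = med  # window hit its maximum size: take the median
    return out

def contrast_stretch(img):
    """Simple min-max contrast enhancement to the full 0-255 range."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12) * 255.0
```

A single salt-noise pixel in an otherwise flat region is removed by the filter, while uniform areas pass through unchanged.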
Study of existing methodology on brain tumor detection.
| S. no. | Authors and year | Methodology | Database | Remarks |
|---|---|---|---|---|
| 1 | Li et al. [ | Multi-CNN | MICCAI BraTS [ | Accuracy = 84.3%, Sensitivity = 82.5%, Specificity = 97.7% |
| 2 | Addel Gawad et al. [ | Genetic algorithm | Self-dataset | FOM = 89.87%, Accuracy = 97.71%, Sensitivity = 94.71%, Specificity = 98.5% |
| 3 | Mallick et al. [ | Deep wavelet auto-encoder (DWA) | Brain image dataset [ | Accuracy = 93.14%, Sensitivity = 94.26%, Specificity = 92.16%, F score = 93.15% |
| 4 | Shakeel et al. [ | AdaBoost classifier + back propagation | Self-dataset | Accuracy = 93.33%, Sensitivity = 71.42%, Specificity = 88.88% |
| 5 | Anitha and Murugavalli [ | K-means + DWT | Self-dataset | Accuracy = 92.43%, Sensitivity = 95.53%, Specificity = 50.6% |
| 6 | Mano and Anand [ | Swarm intelligence and K-means | Figshare dataset [ | Accuracy = 98.7%, Sensitivity = 93.4%, Specificity = 65.1%, F1 score = 98.7%, JAC = 66.46% |
| 7 | Swati et al. [ | VGG-19 | Figshare dataset [ | Accuracy = 94.82% |
| 8 | Khan et al. [ | VGG-16 | Brain MRI dataset [ | Accuracy = 96%, Precision = 93%, Recall = 100%, F1 score = 97%, Time = 6846 s |
| 9 | Khan et al. [ | ResNet-50 | Brain MRI dataset [ | Accuracy = 89%, Precision = 87%, Recall = 93%, F1 score = 90%, Time = 9091 s |
| 10 | Masood et al. [ | Mask-RCNN | Brain MRI dataset [ | Accuracy = 98.34% |
| 11 | Badza and Barjaktarovic [ | CNN | Figshare dataset [ | Accuracy = 95.40%, F1 score = 94.94% |
| 12 | Kurup et al. [ | CapsuleNet | Figshare dataset [ | Accuracy = 92.60%, F1 score = 93.33% |
Figure 1. Block diagram of the proposed methodology.
Figure 2. (a) Original image. (b) Denoised image. (c) Contrast-enhanced image.
Figure 3. Wireframe outputs of medical images.
Figure 4. Sample images of basic patterns present in the 3D image.
Figure 5. (a) 27-neighborhood structure of the 3 × 3 × 3 window. (b) Representation of coordinates.
Representation of the possible combinations of convex polyhedrons.
| Group | Number of corner pixels eliminated | Possible combinations |
|---|---|---|
| Group P | None (1 combination) | P1 = {1,3,7,9,19,21,25,27} |
| Group Q | One corner pixel (8 combinations) | Q1 = {3,7,9,19,21,25,27}, Q3 = {1,7,9,19,21,25,27}, Q7 = {1,3,9,19,21,25,27}, Q9 = {1,3,7,19,21,25,27}, Q19 = {1,3,7,9,21,25,27}, Q21 = {1,3,7,9,19,25,27}, Q25 = {1,3,7,9,19,21,27}, Q27 = {1,3,7,9,19,21,25} |
| Group R | Any two corner pixels (28 combinations) | R1,3 = {7,9,19,21,25,27}, R1,7 = {3,9,19,21,25,27}, R1,9 = {3,7,19,21,25,27}, R1,19 = {3,7,9,21,25,27}, R1,21 = {3,7,9,19,25,27}, R1,25 = {3,7,9,19,21,27}, R1,27 = {3,7,9,19,21,25}, R3,7 = {1,9,19,21,25,27}, R3,9 = {1,7,19,21,25,27}, R3,19 = {1,7,9,21,25,27}, R3,21 = {1,7,9,19,25,27}, R3,25 = {1,7,9,19,21,27}, R3,27 = {1,7,9,19,21,25}, R7,9 = {1,3,19,21,25,27}, R7,19 = {1,3,9,21,25,27}, R7,21 = {1,3,9,19,25,27}, R7,25 = {1,3,9,19,21,27}, R7,27 = {1,3,9,19,21,25}, R9,19 = {1,3,7,21,25,27}, R9,21 = {1,3,7,19,25,27}, R9,25 = {1,3,7,19,21,27}, R9,27 = {1,3,7,19,21,25}, R19,21 = {1,3,7,9,25,27}, R19,25 = {1,3,7,9,21,27}, R19,27 = {1,3,7,9,21,25}, R21,25 = {1,3,7,9,19,27}, R21,27 = {1,3,7,9,19,25}, R25,27 = {1,3,7,9,19,21} |
| Group S | Any three corner pixels (56 combinations) | Enumerated in the same way as groups Q and R |
| Group T | Any four corner pixels (70 combinations) | |
| Group U | Any five corner pixels (56 combinations) | |
| Group V | Any six corner pixels (28 combinations) | |
| Group W | Any seven corner pixels (8 combinations) | |
| Group X | All eight corner pixels (1 combination) | |
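The combination counts in the table are simply binomial coefficients: choosing which k of the 8 corner pixels of the 3 × 3 × 3 window to eliminate gives C(8, k) patterns for groups P (k = 0) through X (k = 8). A quick check:

```python
from math import comb

# Number of ways to eliminate k of the 8 corner pixels of the
# 3 x 3 x 3 window -- the "possible combination" counts for
# groups P (k = 0) through X (k = 8) in the table above.
group_sizes = [comb(8, k) for k in range(9)]
print(group_sizes)       # [1, 8, 28, 56, 70, 56, 28, 8, 1]
print(sum(group_sizes))  # 256 = 2^8 patterns in total
```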
Figure 6. 3D AlexNet architecture.
Figure 7. GUI of the proposed work.
Details of the 3D AlexNet classifier.
| Parameter name | Values |
|---|---|
| Epochs | 65 |
| Learning rate | 0.001 |
| Dropout | 0.20 |
| Batch size | 128 |
| No. of convolution layers | 8 |
| No. of fully connected layers | 3 |
| Pooling layer | Max pooling |
| Pooling layer window size | 2 |
List of evaluation parameters and their formulas.
| S. No. | Parameter name | Formula |
|---|---|---|
| 1 | Accuracy | Accuracy = (TP + TN)/(TP + TN + FP + FN) |
| 2 | Specificity | Specificity = TN/(TN + FP) |
| 3 | Sensitivity | Sensitivity = TP/(TP + FN) |
| 4 | F1 score | F1 score = 2TP/(2TP + FP + FN) |
TP = true positive, TN = true negative, FP = false positive, and FN = false negative.
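As a sketch, these metrics (with specificity computed as TN/(TN + FP)) can be implemented directly from the confusion-matrix counts; the function name is illustrative:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics.

    Sensitivity is also known as recall; the F1 score is the harmonic
    mean of precision and sensitivity, equivalently 2TP/(2TP + FP + FN).
    """
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "specificity": specificity,
            "sensitivity": sensitivity, "f1": f1}
```

For example, TP = 90, TN = 80, FP = 20, FN = 10 gives accuracy 0.85, specificity 0.80, sensitivity 0.90, and F1 = 180/210 ≈ 0.857.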
Comparative analysis of various edge detection methods on the different 3D views of the estimated tumor inside a cancerous brain.
| Original image | Wireframe | Roberts | Sobel | Prewitt |
|---|---|---|---|---|
| (image grid not preserved in this text extraction) | | | | |
Performance analysis of various edge detection methods with the proposed wireframe method.
| Method | Accuracy | Sensitivity | Specificity |
|---|---|---|---|
| Proposed | 0.99 | 0.87 | 1 |
| Roberts | 0.82 | 0.15 | 0.85 |
| Sobel | 0.81 | 0.14 | 0.85 |
| Prewitt | 0.85 | 0.19 | 0.90 |
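For context, the three classical detectors compared against the wireframe method are fixed convolution kernels. A minimal NumPy sketch (illustrative only, not the paper's implementation) of computing their gradient magnitude:

```python
import numpy as np

# Horizontal/vertical kernel pairs for the classical edge operators.
KERNELS = {
    "roberts": (np.array([[1, 0], [0, -1]]),
                np.array([[0, 1], [-1, 0]])),
    "sobel":   (np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
                np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])),
    "prewitt": (np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]),
                np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])),
}

def gradient_magnitude(img, name):
    """Valid-mode 2D correlation with both kernels, combined as sqrt(gx^2 + gy^2)."""
    kx, ky = KERNELS[name]
    kh, kw = kx.shape
    h, w = img.shape
    gx = np.zeros((h - kh + 1, w - kw + 1))
    gy = np.zeros_like(gx)
    for y in range(gx.shape[0]):
        for x in range(gx.shape[1]):
            patch = img[y:y + kh, x:x + kw]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

On a vertical step edge the magnitude peaks at the columns straddling the step and is zero in the flat regions on either side.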
Figure 8. Comparative analysis of various edge detection methods with the proposed wireframe method.
Figure 9. Comparative analysis of network parameters of existing systems and the proposed system on the Figshare dataset.
Figure 10. Comparative analysis of the proposed method with RCNN-based approaches on the Figshare dataset.
Result analysis of the proposed work with other state-of-the-art methodologies on the Figshare dataset.
| Method | Model | Accuracy (%) |
|---|---|---|
| Cheng et al. [ | SVM | 87.54 |
| Dheng et al. [ | SVM | 94.68 |
| Abir et al. [ | PNN | 83.33 |
| Widhiarso et al. [ | CNN | 82 |
| Afshar and Plataniotis [ | CapsNet | 86.56 |
| Abiwinanda et al. [ | CNN | 84.19 |
| Ismael and Qader [ | MPNN | 91.90 |
| Pashaei and Jazayeri [ | ELM | 93.68 |
| Deepak and Ameer [ | GoogleNet and SVM | 97.10 |
| Swati et al. [ | VGG-19 | 94.82 |
| Huang et al. [ | CNN based on complex networks | 95.49 |
| Gumaei et al. [ | GIST descriptor and ELM | 94.93 |
| Chakrabarty [ | Attention module, hypercolumn technique, residual block | 97.69 |
| Arisha Rehman et al. [ | Fine-tuned AlexNet | 97.39 |
| Arisha Rehman et al. [ | Fine-tuned GoogleNet | 98.04 |
| Arisha Rehman et al. [ | Fine-tuned VGG-16 | 98.69 |
| Proposed methodology | 3D AlexNet | — |
Performance comparison of the proposed method with existing RCNN-based methods on the Figshare dataset.
| Method | Model | Accuracy | mAP | Sensitivity | Time (s) |
|---|---|---|---|---|---|
| Rayahan et al. [ | RCNN | 0.920 | 0.910 | 0.950 | 0.47 |
| Ren et al. [ | Faster RCNN | 0.940 | 0.940 | 0.940 | 0.25 |
| Masood et al. [ | ResNet-50 | 0.959 | 0.946 | 0.953 | 0.20 |
| Masood et al. [ | DenseNet-41 | 0.963 | 0.949 | 0.953 | 0.20 |
| Proposed method | — | — | 0.958 | 0.985 | 0.13 |
Result analysis of the proposed work with other state-of-the-art methodologies on the Figshare dataset.
| Method | Model | Classes | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1 score (%) |
|---|---|---|---|---|---|---|
| Rehman et al. [ | Freeze AlexNet | — | 91.21 | 90.48 | 95.48 | — |
| Rehman et al. [ | Freeze GoogleNet | — | 93.46 | 95.38 | 97.74 | — |
| Rehman et al. [ | Freeze VGG-16 | — | 89.76 | 87.81 | 94.64 | — |
| Pashaei et al. [ | CNN | — | 93.68 | — | — | 93.00 |
| Badza and Barjaktarovic [ | CNN | — | 95.40 | — | — | 94.94 |
| Kurup et al. [ | CapsuleNet | — | 92.60 | — | — | 93.33 |
| Srinivasan et al. [ | GLCM and wavelet packets | — | 93.30 | — | — | 72.00 |
| Proposed method | 3D AlexNet | Glioma | 98.94 | 98.94 | 98.79 | 97.3 |
| | | Meningioma | 99.18 | 98.11 | 99.01 | 98.7 |
| | | Pituitary | 99.02 | 98.72 | 98.19 | 98.69 |
Result analysis of the proposed work with other state-of-the-art methodologies on the Brain MRI Kaggle dataset and Medical MRI dataset.
| Method | Model | Accuracy | Precision | Recall | F1 score | Time (s) |
|---|---|---|---|---|---|---|
| Swati et al. [ | VGG-19 | 0.94 | — | — | — | — |
| Masood et al. [ | Mask-RCNN | 0.98 | — | — | — | — |
| Hassan et al. [ | CNN | 1.0 | 1.0 | 1.0 | 1.0 | 3085 |
| Hassan et al. [ | VGG-16 | 0.96 | 0.93 | 1.0 | 0.97 | 6846 |
| Hassan et al. [ | ResNet-50 | 0.89 | 0.87 | 0.93 | 0.90 | 9091 |
| Hassan et al. [ | Inception-v3 | 0.75 | 0.77 | 0.71 | 0.74 | 5630 |
| Proposed work on Brain MRI dataset | 3D AlexNet | — | 0.97 | 0.94 | 0.91 | 3706 |
| Proposed work on Medical MRI dataset | 3D AlexNet | — | 0.98 | 0.95 | 0.93 | 3891 |
Result analysis of the proposed work on the BraTS 2019 dataset.
| Study | Method | Contrast | Accuracy (%) |
|---|---|---|---|
| Shahzadi et al. [ | CNN with LSTM | T2-FLAIR | 84.00 |
| Pei et al. [ | CNN | T1, T1ce, T2, T2-FLAIR | 74.9 |
| Ge et al. [ | Deep CNN | T1, T2, T2-FLAIR | 90.87 |
| Mzoughi et al. [ | Deep CNN | T1, T1ce, T2, T2-FLAIR | 96.59 |
| Zhuge et al. [ | Deep CNN | T1, T2, T2-FLAIR | 97.1 |
| Ouerghi et al. [ | Random forest | T1, T2, T2-FLAIR | 96.5 |
| Chatterjee et al. [ | Pretrained ResNet mixed convolution | T1ce | 96.98 |
| Proposed | 3D AlexNet | T1, T1ce, T2, T2-FLAIR | 96.91 |
Result analysis of the proposed work with other state-of-the-art methodologies on the BraTS 2019 dataset.
| Study | Method | Classes | Precision | Recall | Specificity | F1 score |
|---|---|---|---|---|---|---|
| Chatterjee et al. [ | ResNet 3D | Low-grade glioma | 0.798 | 0.920 | 0.907 | 0.854 |
| | | High-grade glioma | 0.933 | 0.828 | 0.962 | 0.877 |
| | | Healthy subjects | 0.993 | 0.99 | 0.99 | 0.99 |
| Chatterjee et al. [ | Pretrained ResNet 3D | Low-grade glioma | 0.781 | 0.855 | 0.911 | 0.814 |
| | | High-grade glioma | 0.896 | 0.835 | 0.937 | 0.863 |
| | | Healthy subjects | 1.00 | 0.999 | 1.00 | 0.999 |
| Chatterjee et al. [ | ResNet (2+1)D | Low-grade glioma | 0.786 | 0.914 | 0.902 | 0.844 |
| | | High-grade glioma | 0.930 | 0.822 | 0.962 | 0.873 |
| | | Healthy subjects | 0.994 | 0.990 | 0.997 | 0.992 |
| Chatterjee et al. [ | Pretrained ResNet (2+1)D | Low-grade glioma | 0.841 | 0.910 | 0.931 | 0.873 |
| | | High-grade glioma | 0.928 | 0.870 | 0.959 | 0.897 |
| | | Healthy subjects | 0.999 | 0.998 | 0.999 | 0.999 |
| Chatterjee et al. [ | ResNet mixed convolution | Low-grade glioma | 0.747 | 0.886 | 0.855 | 0.773 |
| | | High-grade glioma | 0.911 | 0.750 | 0.963 | 0.823 |
| | | Healthy subjects | 0.994 | 0.976 | 0.997 | 0.985 |
| Chatterjee et al. [ | Pretrained ResNet mixed convolution | Low-grade glioma | 0.863 | 0.931 | 0.912 | 0.894 |
| | | High-grade glioma | 0.944 | 0.883 | 0.936 | 0.912 |
| | | Healthy subjects | 0.997 | 0.995 | 0.996 | 0.996 |
| Proposed | 3D AlexNet | Low-grade glioma | 0.925 | 0.946 | 0.963 | 0.904 |
| | | High-grade glioma | 0.959 | 0.895 | 0.998 | 0.919 |
| | | Healthy subjects | 0.998 | 0.956 | 0.978 | 0.968 |