Mohammed J Abdulaal1,2, Ibrahim M Mehedi1,2, Abdullah M Abusorrah1, Abdulah Jeza Aljohani1,2, Ahmad H Milyani1, Md Masud Rana3, Mohamed Mahmoud4.
Abstract
Coronavirus disease 2019 (COVID-19) has become a pandemic whose seriousness is evident from the large numbers of infections and deaths worldwide. This paper presents a detection framework built around an efficient deep semantic segmentation network (DeepLabv3Plus). First, dynamic adaptive histogram equalization is applied to enhance the chest X-ray images, and data augmentation techniques then expand the enhanced set. The second stage builds a custom convolutional neural network model from several pretrained ImageNet models, comparing them and repeatedly pruning the best-performing ones to reduce complexity and improve memory efficiency. Several experiments were conducted with different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in COVID-19 detection. This paper discusses how to train a customized convolutional neural network with various parameters on a set of chest X-rays, reaching 99.6% accuracy.
Year: 2022 PMID: 36176933 PMCID: PMC9499792 DOI: 10.1155/2022/5297709
Source DB: PubMed Journal: Contrast Media Mol Imaging ISSN: 1555-4309 Impact factor: 3.009
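The abstract's preprocessing pipeline first enhances the images and then augments them. As a minimal sketch, here is simple geometric augmentation in NumPy; the specific transforms (horizontal flip and 90-degree rotations) are illustrative assumptions, since the record does not enumerate the augmentation techniques used.

```python
import numpy as np

def augment(img: np.ndarray) -> list:
    """Return the image plus simple geometric variants: a horizontal
    flip and two rotations. Each variant keeps the pixel values intact,
    so class labels carry over unchanged."""
    return [img, np.fliplr(img), np.rot90(img), np.rot90(img, 2)]
```

Applying this to each enhanced image quadruples the training set, which is the usual motivation for augmentation when the positive class is small.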
Summary of the related works.
| Ref. | Model | Results (%) | Limitations |
|---|---|---|---|
| [ | Truncated inception network | 98.5 | (i) Limited dataset is used. (ii) In stacking, the original dimensions are solved. (iii) The images and the structured images must be the same. |
| [ | DarkCovidNet | 87.02 | (i) End-to-end architecture. (ii) Manual feature extraction. (iii) Severely low number of image samples. (iv) Imprecise localization on the chest region. |
| [ | Bayes-SqueezeNet | 97.9 | The study uses a public dataset containing fewer than 100 COVID-19 images and more than 5,000 non-COVID images; given the limited number of COVID-19 images publicly available, further experiments on a larger set of cleanly labeled COVID-19 images are needed for a more reliable estimate of the sensitivity rates. |
| [ | DenseNet | 85 | Limited dataset is used. |
| [ | MobileNet | 94.7 | — |
| [ | ResNet-50 + SVM | 94.7 | (i) The method cannot be applied if the patient is in a critical condition and unable to attend X-ray scanning. (ii) Small dataset. (iii) SARS and MERS cases were included in the COVID-positive class. |
| [ | CXRVN | 97.5 | (i) Time consuming. (ii) Fails to extract reliable semantic features. (iii) Binary classifier. |
Figure 1. The proposed framework.
Figure 2. Main subphases of the preprocessing stage.
Figure 3. Original images: (a) normal CXR; (b) COVID-19 pneumonia CXR.
Figure 4. Enhanced images using adaptive histogram equalization: (a) normal CXR; (b) COVID-19 pneumonia CXR.
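The enhancement step shown in Figure 4 relies on adaptive histogram equalization. As a minimal sketch, the NumPy function below performs plain (global) histogram equalization; the adaptive variant applies the same grey-level remapping per image tile rather than to the whole image. The function name and the 8-bit grayscale assumption are illustrative, not taken from the paper.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image:
    remap grey levels so the cumulative distribution becomes
    approximately uniform, stretching low-contrast regions."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]          # first nonzero CDF value
    # Build a lookup table mapping each grey level to [0, 255].
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```

In practice, libraries such as OpenCV provide a tiled, clip-limited variant (CLAHE) that avoids over-amplifying noise in near-uniform lung regions.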
Figure 5. DeepLabv3Plus architecture.
Figure 6. Proposed DCNCC model.
Figure 7. DCNCC architecture.
Proposed model parameters for the training process.
| Parameters | Values | Parameters | Values |
|---|---|---|---|
| Input size | 256 | Pool size | (2, 2) |
| Learning rate | 0.0001 | Batch size | 32 |
| Validation split | 0.2 | Activation function | ReLU |
| Smart optimization | Talos hyperparameter | Filter size | 5 |
| Dropout rate | 0.5 | Padding | SAME |
| Epochs | 50 | — | — |
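The table lists "Talos hyperparameter" as the smart-optimization strategy; Talos scans a Keras model over a parameter grid, and the grid expansion itself can be sketched with the standard library. The single values below come from the table, while the second learning-rate value is an illustrative assumption added only to show how a grid expands into multiple trials.

```python
from itertools import product

# Hyperparameter grid mirroring the training table. The extra
# learning-rate value (1e-3) is an illustrative assumption.
grid = {
    "learning_rate": [1e-4, 1e-3],
    "batch_size": [32],
    "dropout_rate": [0.5],
    "filter_size": [5],
    "activation": ["relu"],
}

def expand(grid):
    """Yield one dict per combination of hyperparameter values,
    i.e. one training trial per grid point."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))
```

A Talos-style scan would train the model once per yielded dict and keep the configuration with the best validation score.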
Dataset descriptions.
| Datasets | Image size | #Classes | #Training sets | #Testing sets |
|---|---|---|---|---|
| DS1 [ | 512 | 2 | 15264 | 400 |
| DS2 [ | 1024 | 3 | 1811 | 484 |
DeepLabv3Plus training parameters.
| Parameters | Values |
|---|---|
| Input size | 256 |
| Learning rate | 0.001 |
| Epochs | 30 |
| Activation function | SoftMax |
| Batch size | 10 |
Figure 8(a) Confusion matrix of the first experiment using DS1. (b) Confusion matrix of the second experiment using DS2.
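The sensitivity, specificity, and accuracy reported in the tables below follow directly from confusion-matrix counts like those in Figure 8. A minimal sketch (the counts used in the test are illustrative, not taken from the figure):

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and accuracy from the four counts of a
    2x2 confusion matrix (true/false positives and negatives)."""
    sensitivity = tp / (tp + fn)           # recall on the positive class
    specificity = tn / (tn + fp)           # recall on the negative class
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy
```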
Results of experiments on DS1 using ResNet-50 + DenseNet and a binary classifier.
| Metric | Without data augmentation (%) | Without semantic segmentation (%) | Without pruning (%) | Full features (%) |
|---|---|---|---|---|
| Sensitivity | 90.57 | 93.34 | 96.90 | 99.6 |
| Specificity | 88.43 | 91.23 | 94.33 | 98.9 |
| Accuracy | 89.03 | 92.59 | 95.88 | 99.6 |
| — | 89.20 | 92.70 | 95.90 | 99.6 |
Results of experiments on DS2 using ResNet-50 + DenseNet and a multilabel classifier.
| Metric | Without data augmentation (%) | Without semantic segmentation (%) | Without pruning (%) | Full features (%) |
|---|---|---|---|---|
| Sensitivity | 92.37 | 95.20 | 97.80 | 99.5 |
| Specificity | 90.63 | 94.80 | 95.43 | 99.1 |
| Accuracy | 91.13 | 95.11 | 96.62 | 99.6 |
| — | 91.15 | 95.09 | 96.80 | 99.6 |
Figure 9. Training and validation accuracy curves for the proposed model using ResNet-50 + DenseNet, 700 iterations and 50 epochs.
Figure 10. Training and validation loss curves for the proposed model using ResNet-50 + DenseNet, 700 iterations and 50 epochs.
Figure 11. Validation accuracy versus validation loss for the proposed model using ResNet-50 + DenseNet, 700 iterations and 50 epochs.
Comparison between proposed model and related works.
| N | Ref. | Model techniques | Average accuracy | Running time (min) |
|---|---|---|---|---|
| 1 | [ | Truncated inception network | 98.5 | 110 |
| 2 | [ | DarkCovidNet | 87.02 | — |
| 3 | [ | Bayes-SqueezeNet | 97.9 | — |
| 4 | [ | DenseNet | 85 | — |
| 5 | [ | MobileNet | 94.7 | 40 |
| 6 | [ | ResNet-50 + SVM | 94.7 | 52 |
| 7 | [ | CXRVN | 97.5 | 45 |
| 8 | [ | Weighted average pruned | 98.1 | 38 |
| 9 | Our model | Semantic segmentation + (ResNet-50 and DenseNet) + weighted average pruning | 99.6 | 48 |
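The "weighted average pruned" entries refer to trimming the trained model to reduce complexity and memory use. The record does not give the exact pruning rule, so the sketch below shows generic magnitude pruning (zeroing the smallest-magnitude fraction of a weight tensor), an assumption for illustration only.

```python
import numpy as np

def prune_weights(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude `sparsity` fraction of weights.
    Uses a partial sort to find the magnitude threshold, then masks
    everything at or below it."""
    k = int(np.ceil(sparsity * w.size))
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)
```

After pruning, a brief fine-tuning pass is typically needed to recover any accuracy lost to the zeroed connections.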
Figure 12. Accuracy chart comparing the proposed model with related works.