| Literature DB >> 34903956 |
Tripti Goel, R Murugan, Seyedali Mirjalili, Deba Kumar Chakrabartty.
Abstract
Coronavirus Disease 2019 (COVID-19) has spread worldwide, and healthcare services have become limited in many countries. Efficient screening of hospitalized individuals is vital in the fight against COVID-19, and chest radiography is one of the important assessment strategies. It allows researchers to interpret medical information from chest X-ray (CXR) images and evaluate relevant irregularities, which may enable fully automated identification of the disease. Because case numbers grow rapidly every day, relatively few COVID-19 testing kits are available in healthcare facilities. It is therefore imperative to define a fully automated detection method as an immediate alternative diagnostic option to limit the spread of COVID-19. In this paper, a two-stage deep learning (DL) architecture is proposed for COVID-19 diagnosis from CXR images. The architecture consists of two stages, feature extraction and classification. The Multi-Objective Grasshopper Optimization Algorithm (MOGOA) is used to optimize the DL network layers; hence, the network is named "Multi-COVID-Net". The model automatically classifies images as Non-COVID-19, COVID-19, or pneumonia. Multi-COVID-Net has been tested on publicly available datasets and outperforms other state-of-the-art methods.
Keywords: CNN; COVID-19; Chest X-ray images; Deep learning; MOGOA; Multi-objective optimization
Year: 2021 PMID: 34903956 PMCID: PMC8656152 DOI: 10.1016/j.asoc.2021.108250
Source DB: PubMed Journal: Appl Soft Comput ISSN: 1568-4946 Impact factor: 6.725
Comparison with state-of-the-art DL models for COVID-19 diagnosis from CXR images.
| Author | Accuracy (%) | Sensitivity (%) | Specificity (%) | Precision (%) | F-score (%) |
|---|---|---|---|---|---|
| Hassantabar et al. | 93.2 | 96.1 | – | – | – |
| Altan et al. | 95.32 | 93.61 | 96.05 | 92.22 | 92.91 |
| Das et al. | 97.4 | 97 | 97.2 | – | 96.9 |
| Imran et al. | 88.76 | 91.71 | 95.27 | 86.60 | 89.08 |
| Khan et al. | 89.5 | 97 | 100 | – | – |
| Mahmud et al. | 97.4 | – | – | – | – |
| Minaee et al. | – | 98 | – | – | – |
| Nour et al. | 98.97 | 89.37 | 99.75 | 96.72 | – |
| Ozturk et al. | 87.02 | – | – | – | – |
| Panwar et al. | 95.61 | – | – | – | – |
| Vaid et al. | 96.3 | – | – | – | – |
| Nain et al. | 98 | – | – | – | – |
| Sethy et al. | 95.38 | – | – | – | – |
| Hemdan et al. | 90 | – | – | – | – |
| Wang et al. | 83.5 | – | – | – | – |
| Alazab et al. | 94.80 | – | – | – | – |
| Proposed | 98.21 | 99.63 | 97.59 | 95.39 | 97.46 |
Fig. 1 Workflow of the Multi-COVID-Net architecture.
Details of the dataset.
| Dataset | Images category | No. of images |
|---|---|---|
| Chest imaging | COVID-19 | 134 |
| SIRM COVID-19 database | COVID-19 | 64 |
| COVID-chest X-ray | COVID-19 | 646 |
| – | COVID-19 | 55 |
| Provincial peoples hospital | COVID-19 | 1 |
| Kaggle | Normal | 900 |
| Kaggle | Pneumonia | 900 |
| Total | | 2700 |
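The class totals implied by the table can be checked directly; a minimal sketch (the source contributing 55 COVID-19 images is unnamed in this record):

```python
# COVID-19 image counts per source, as listed in the dataset table.
covid_sources = {
    "Chest imaging": 134,
    "SIRM COVID-19 database": 64,
    "COVID-chest X-ray": 646,
    "(unnamed in this record)": 55,
    "Provincial peoples hospital": 1,
}

covid_total = sum(covid_sources.values())
class_counts = {"COVID-19": covid_total, "Normal": 900, "Pneumonia": 900}
grand_total = sum(class_counts.values())
print(covid_total, grand_total)  # 900 2700
```

Note that the three classes are balanced at 900 images each.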
Training options using MOGOA optimization.
| Parameters | InceptionV3 | ResNet50 |
|---|---|---|
| Momentum | 0.5224 | 0.750 |
| Learning rate | 0.0549 | 0.0650 |
| Epoch | 7 | 6 |
| L2 regularization | 1.3042e−04 | 1.1746e−04 |
| Batch size | 32 | 32 |
Fig. 2 Training image samples: (a–b) Non-COVID-19, (c–d) COVID-19, (e–f) Pneumonia.
Fig. 3 Testing image samples: (a–b) Non-COVID-19, (c–d) COVID-19, (e–f) Pneumonia.
Performance parameters.
| Parameters | Definition |
|---|---|
| True Positive (TP) | The COVID-19 image is correctly diagnosed |
| False Positive (FP) | The Non-COVID-19 image is misdiagnosed as COVID-19 |
| True Negative (TN) | The Non-COVID-19 image is correctly diagnosed |
| False Negative (FN) | The COVID-19 image is misdiagnosed as Non-COVID-19 |
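From these four counts, the reported metrics follow from the standard definitions. A minimal sketch; the example counts below are illustrative (chosen so they reproduce the headline results in this record), not taken from the paper's confusion matrix, which is shown in Fig. 4(b):

```python
def metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)            # recall / true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    precision = tp / (tp + fp)              # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts only, not the paper's actual tallies.
acc, sen, spe, pre, f1 = metrics(tp=269, fp=13, tn=527, fn=1)
print(f"{acc:.2%} {sen:.2%} {spe:.2%} {pre:.2%} {f1:.2%}")
```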
Results of the Multi-COVID-Net model.
| Accuracy | Sensitivity | Specificity | Precision | F1 Score |
|---|---|---|---|---|
| 98.27% | 99.63% | 97.59% | 95.39% | 97.46% |
Fig. 4 (a) Generated ROC curve (b) Generated confusion matrix.
Parameters of MOGOA.
| Parameters | Values |
|---|---|
| Dimension | 8 |
| Number of objectives | 3 |
| Number of iterations | 20 |
| Population size | 200 |
| Archive size | 32 |
| Lower bound | [0.2, 0.01, 1.0000e−04, 0.2, 0.01, 1.0000e−04, 8] |
| Upper bound | [0.9, 0.1, 2.0000e−04, 0.9, 0.1, 2.0000e−04, 64] |
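These settings drive a grasshopper-style population search over the hyperparameter bounds. The sketch below is the plain single-objective grasshopper update (with a simplified distance term), shown only to illustrate the mechanics; the paper's MOGOA additionally maintains an archive of Pareto-optimal solutions and selects targets from it, which is omitted here:

```python
import math
import random

def s(r, f=0.5, l=1.5):
    """Social force between two grasshoppers (attraction/repulsion, per GOA)."""
    return f * math.exp(-r / l) - math.exp(-r)

def goa(objective, lb, ub, n_pop=30, n_iter=50, c_max=1.0, c_min=1e-4, seed=0):
    """Simplified single-objective Grasshopper Optimization Algorithm."""
    rng = random.Random(seed)
    dim = len(lb)
    pop = [[rng.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(n_pop)]
    best = min(pop, key=objective)[:]
    for t in range(n_iter):
        # c shrinks linearly, trading exploration for exploitation over time
        c = c_max - t * (c_max - c_min) / n_iter
        new_pop = []
        for i in range(n_pop):
            step = [0.0] * dim
            for j in range(n_pop):
                if i == j:
                    continue
                dist = math.dist(pop[i], pop[j])
                if dist == 0.0:
                    continue
                for d in range(dim):
                    # per-dimension social interaction, scaled by c and the bounds
                    step[d] += (c * (ub[d] - lb[d]) / 2 * s(dist)
                                * (pop[j][d] - pop[i][d]) / dist)
            # move around the best-so-far solution (the "target"), clamped to bounds
            new_pop.append([min(max(c * step[d] + best[d], lb[d]), ub[d])
                            for d in range(dim)])
        pop = new_pop
        cur = min(pop, key=objective)
        if objective(cur) < objective(best):
            best = cur[:]
    return best

# Toy usage: minimize the 2-D sphere function on [-5, 5]^2.
best = goa(lambda v: sum(x * x for x in v), lb=[-5.0, -5.0], ub=[5.0, 5.0])
```

In the paper's setting, `objective` would instead train the networks with the candidate hyperparameters and return the validation losses, and the 8-dimensional search space corresponds to the bounds in the table above.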
Performance comparison of classifiers.
| Method | Accuracy | Sensitivity | Specificity | Precision | F1 Score |
|---|---|---|---|---|---|
| DT | 71.98 | 77.04 | 69.44 | 55.76 | 64.70 |
| K-NN | 73.95 | 96.67 | 62.59 | 56.37 | 71.21 |
| SVM | 77.16 | 94.07 | 68.70 | 60.05 | 73.30 |
| NB | 85.06 | 87.78 | 83.70 | 72.92 | 79.66 |
| RF | 86.67 | 84.41 | 86.30 | 76.13 | 81.38 |
| CNN | 91.96 | 87.04 | 95.93 | 91.44 | 89.18 |
| SE | 75.19 | 95.59 | 66.48 | 58.00 | 71.33 |
| ResNet50 | 88.27 | 97.78 | 83.52 | 74.79 | 84.75 |
| InceptionV3 | 91.85 | 93.33 | 91.11 | 84.00 | 88.42 |
| EDLN | 94.19 | 97.78 | 93.89 | 88.89 | 93.12 |
| Proposed | 98.27 | 99.63 | 97.59 | 95.39 | 97.46 |
Fig. 5 ROC of (a) DT (b) K-NN (c) SVM (d) NB (e) RF (f) CNN (g) SE (h) ResNet50 (i) InceptionV3 (j) EDLN without optimization (k) EDLN with MOGOA optimization.
Fig. 6 Confusion matrix of (a) DT (b) K-NN (c) SVM (d) NB (e) RF (f) CNN (g) SE (h) ResNet50 (i) InceptionV3 (j) EDLN without optimization (k) EDLN with MOGOA optimization.
Performance comparison of optimization algorithms.
| Method | Accuracy | Sensitivity | Specificity | Precision | F1 Score |
|---|---|---|---|---|---|
| GA | 95.80 | 97.04 | 95.19 | 90.97 | 93.91 |
| PS | 94.95 | 95.56 | 93.15 | 87.46 | 91.33 |
| PSO | 95.19 | 96.30 | 94.63 | 89.97 | 93.02 |
| WOA | 94.44 | 97.78 | 92.78 | 87.13 | 92.15 |
| GOA | 96.30 | 98.15 | 95.37 | 91.38 | 94.64 |
| MOGA | 96.53 | 98.63 | 96.48 | 93.40 | 96.42 |
| Proposed | 98.21 | 99.63 | 97.59 | 95.39 | 97.46 |
Fig. 7 Comparison of MOGOA with GSS.
Performance comparison on another dataset.
| Method | Accuracy | Sensitivity | Specificity | Precision | F1 Score |
|---|---|---|---|---|---|
| Without optimization | 91.00 | 99.50 | 86.75 | 78.97 | 92.91 |
| With proposed optimization | 95.33 | 99.50 | 93.25 | 88.05 | 93.43 |
Fig. 8 Comparison with cross-validation.