Mohamed Elhoseny, Zahraa Tarek, Ibrahim M. El-Hasnony.
Abstract
Automated disease prediction has become a key concern in medical research due to exponential population growth. Automated disease identification frameworks aid physicians in diagnosis, delivering accurate predictions that provide rapid outcomes and decrease the mortality rate. The spread of coronavirus disease 2019 (COVID-19) has significantly affected public health and the everyday lives of people in more than 100 nations. Despite effective attempts to establish a reliable approach to forecasting COVID-19, the origin and mutation of the virus remain a crucial obstacle in diagnosing detected cases. Developing a model that correctly forecasts COVID-19 from chest X-ray (CXR) and computed tomography (CT) images is therefore critical for intelligent detection. This paper introduces a hybrid model: an artificial neural network (ANN) whose parameters are optimized by the butterfly optimization algorithm (BOA). The proposed model was compared with the pretrained AlexNet and GoogLeNet networks and with a support vector machine (SVM) on publicly accessible COVID-19 chest X-ray and CT images. Six datasets were used in the experiments: three with X-ray images and three with CT images. The experimental results confirmed the superiority of the proposed model for cognitive COVID-19 pattern recognition, with average accuracies of 90.48%, 81.09%, 86.76%, and 84.97% for the proposed model, SVM, AlexNet, and GoogLeNet, respectively.
Year: 2022 PMID: 35360478 PMCID: PMC8964186 DOI: 10.1155/2022/1773259
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
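The record does not detail the hybrid ANN + BOA model beyond the abstract. As a hedged sketch of the butterfly optimization algorithm named there (following Arora and Singh's canonical formulation), the snippet below minimizes a stand-in sphere objective in place of the ANN's validation error; every name, bound, and parameter value here is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Stand-in objective; in the paper the objective would be the ANN's error.
    return float(np.sum(x ** 2))

def boa_minimize(obj, dim=5, n_butterflies=20, iters=200,
                 c=0.01, a=0.1, p_switch=0.8):
    """Minimal butterfly optimization algorithm (BOA) sketch.

    Greedy acceptance is added here for stability; the canonical BOA
    replaces positions unconditionally.
    """
    pop = rng.uniform(-5.0, 5.0, (n_butterflies, dim))
    fit = np.array([obj(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        # Fragrance f = c * I^a, with stimulus intensity I = 1 / (1 + fitness)
        # so fitter butterflies emit stronger fragrance when minimizing.
        frag = c * (1.0 / (1.0 + fit)) ** a
        for i in range(n_butterflies):
            r = rng.random()
            if rng.random() < p_switch:
                # Global search: move toward the best butterfly found so far.
                cand = pop[i] + (r * r * best - pop[i]) * frag[i]
            else:
                # Local search: random walk between two random peers.
                j, k = rng.integers(0, n_butterflies, size=2)
                cand = pop[i] + (r * r * pop[j] - pop[k]) * frag[i]
            cand_fit = obj(cand)
            if cand_fit < fit[i]:
                pop[i], fit[i] = cand, cand_fit
        best = pop[fit.argmin()].copy()
    return best, float(fit.min())

best_params, best_err = boa_minimize(sphere)
```

In the paper's setting, the position vector would encode the ANN parameters being tuned and the objective would be the network's error on validation data.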
AI techniques for COVID-19.
| Author | Images type | AI methods | Task | Results |
|---|---|---|---|---|
| Nayak et al. | Chest X-ray (CXR) images | AlexNet, GoogLeNet, MobileNet-V2, SqueezeNet, VGG-16, ResNet-50, ResNet-34, and Inception-V3 | Classification of COVID-19 from normal cases | Accuracy of ResNet-34 is 98.33% |
| Shorfuzzaman and Hossain | CXR images | VGG-16 network | Classification of COVID-19 cases | Accuracy is 95.6% and AUC is 0.97 |
| Linda | CXR images | A deep CNN, namely, COVID-Net | Detection of COVID-19 cases | Accuracy is 92.4% |
| Rahman et al. | CXR images | Novel U-Net model | Automatic detection of COVID-19 | Accuracy is 95.11% |
| Jin et al. | CT images | 2D deep CNN | Rapid COVID-19 detection | Accuracy is 94.98% and AUC is 97.91% |
| Narin et al. | CXR images | Pretrained ResNet-50 | Detection of coronavirus-pneumonia-infected patients | Accuracy is 98% |
| Chowdhury et al. | CXR images | AlexNet, ResNet-18, DenseNet201, and SqueezeNet | Automatic detection of COVID-19 pneumonia | Accuracy is 98.3% |
| Maghdid et al. | CXR and CT images | A new CNN and pretrained AlexNet with transfer learning | Effective COVID-19 detection technique | Accuracy is 98% on X-ray images and 94.1% on CT images |
| Gour and Jain | CXR images | Multiple CNN models | Classification of CT samples with COVID-19, influenza viral pneumonia, and no infection | Accuracy is 96%, sensitivity is 98.2%, and specificity is 92.2% |
| Kang et al. | CT images | KNN and NB | Automatic analysis pipeline for COVID-19 | Accuracy 95%, sensitivity 93.2%, specificity 96.6% |
| Khanday et al. | Clinical data | Multinomial naive Bayes and logistic regression | Identifying the pandemic from clinical text information | Accuracy is 96.2% |
| Sethy et al. | CXR images | CNNs with a support vector machine (SVM) | Detecting COVID-19 | Accuracy is 95.38% |
| Alakus and Turkoglu | CXR images | CNN-based LSTM, CNN-RNN | Analyzing COVID-19 | Accuracy 86.66%, precision 86.75%, recall 99.42% |
| Rasheed et al. | CXR images | CNN and logistic regression | Diagnosis of COVID-19 | Accuracy is 97.6% for CNN and 100% for LR |
| Gao et al. | CT images | Dual-branch combination network (DCN) | Accurate diagnosis and lesion segmentation of COVID-19 | Accuracy is 96.74% on the internal dataset and 92.87% on the external dataset |
| Goel et al. | CXR images | Optimized convolutional neural network (OptCoNet) | Automatic diagnosis of COVID-19 | Accuracy is 97.78% |
| Nour et al. | CXR images | SVM | Detection of COVID-19 infection | Accuracy 98.97%, sensitivity 89.39%, specificity 99.75% |
Figure 1Proposed framework for evaluating different models for COVID-19 prediction.
Figure 2Proposed ANNBOA model.
X-ray and CT-scan datasets for COVID-19 cases and normal cases.
| Type | Name | Code | COVID-19 | Non-COVID-19 |
|---|---|---|---|---|
| X-ray | Extensive COVID-19 X-ray images dataset | DS_1 | 4044 | 5500 |
| X-ray | Augmented COVID-19 X-ray images dataset | DS_3 | 912 | 912 |
| X-ray | Combined COVID-19 dataset | DS_5 | 2450 | 4278 |
| CT-scan | Extensive COVID-19 CT chest images dataset | DS_2 | 5427 | 2628 |
| CT-scan | SARS-COV-2 Ct-scan dataset | DS_4 | 1252 | 1229 |
| CT-scan | COVID-19 CT scans | DS_6 | 198 | 243 |
Dataset partitioning.
| Dataset | COVID-19 training | COVID-19 validation | COVID-19 testing | COVID-19 sum | Non-COVID-19 training | Non-COVID-19 validation | Non-COVID-19 testing | Non-COVID-19 sum |
|---|---|---|---|---|---|---|---|---|
| DS_1 | 2880 | 579 | 575 | 4034 | 3822 | 785 | 785 | 5392 |
| DS_2 | 3701 | 850 | 850 | 5401 | 1670 | 475 | 475 | 2620 |
| DS_3 | 650 | 127 | 130 | 907 | 620 | 127 | 130 | 877 |
| DS_4 | 895 | 177 | 177 | 1249 | 845 | 185 | 185 | 1215 |
| DS_5 | 1378 | 407 | 465 | 2250 | 2436 | 715 | 927 | 4078 |
| DS_6 | 132 | 30 | 36 | 198 | 168 | 39 | 36 | 243 |
Figure 3Samples from training and validation of AlexNet.
Figure 4Samples from training and validation of GoogLeNet.
Figure 5Comparison among the three classifiers for the X-ray images.
Figure 6Comparison among the three classifiers for the CT-scan.
Output results for SVM, AlexNet, GoogLeNet, and BOA + NN.
| Dataset | Model | Accuracy | Sensitivity | Specificity | Precision | F1-score |
|---|---|---|---|---|---|---|
| DS_1 | SVM | 80.96 | 79.65 | 81.91 | 76.33 | 77.96 |
| DS_2 | SVM | 62.47 | 71.88 | 45.78 | 70.16 | 71.01 |
| DS_3 | SVM | 99.23 | 99.23 | 99.23 | 99.23 | 99.23 |
| DS_4 | SVM | 83.83 | 79.44 | 88.11 | 86.67 | 82.90 |
| DS_5 | SVM | 98.94 | 98.95 | 98.94 | 98.21 | 98.57 |
| DS_6 | SVM | 61.11 | 100 | 22.22 | 56.25 | 72 |
| DS_1 | AlexNet | 88.82 | 92 | 86.50 | 83.31 | 87.44 |
| DS_2 | AlexNet | 63.94 | 62.24 | 67.47 | 79.83 | 69.95 |
| DS_3 | AlexNet | 100 | 100 | 100 | 100 | 100 |
| DS_4 | AlexNet | 88.21 | 85.56 | 90.81 | 90.06 | 87.75 |
| DS_5 | AlexNet | 99 | 98.50 | 99.29 | 98.79 | 98.64 |
| DS_6 | AlexNet | 80.56 | 88.89 | 72.22 | 76.19 | 82.05 |
| DS_1 | GoogLeNet | 89.12 | 90.43 | 88.15 | 84.83 | 87.54 |
| DS_2 | GoogLeNet | 51.08 | 45.49 | 62.67 | 71.60 | 55.63 |
| DS_3 | GoogLeNet | 100 | 100 | 100 | 100 | 100 |
| DS_4 | GoogLeNet | 82.47 | 79.44 | 85.41 | 84.12 | 81.71 |
| DS_5 | GoogLeNet | 99.67 | 99.55 | 99.73 | 99.55 | 99.55 |
| DS_6 | GoogLeNet | 87.5 | 97.22 | 77.77 | 81.40 | 88.60 |
| DS_1 | BOA + NN | 91.54 | 92.35 | 90.93 | 88.58 | 90.42 |
| DS_2 | BOA + NN | 70.46 | 69.37 | 71.89 | 76.39 | 72.71 |
| DS_3 | BOA + NN | 100.00 | 100.00 | 100.00 | 100.00 | 100.00 |
| DS_4 | BOA + NN | 89.86 | 89.82 | 89.90 | 88.24 | 89.02 |
| DS_5 | BOA + NN | 99.72 | 99.70 | 99.73 | 99.55 | 99.62 |
| DS_6 | BOA + NN | 91.30 | 97.22 | 84.85 | 87.50 | 92.11 |
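The five columns in the results table all derive from the binary confusion matrix. A minimal sketch of those definitions, using hypothetical counts that are not taken from the paper:

```python
def classification_metrics(tp, fp, tn, fn):
    """Binary-classification metrics (percentages) matching the table's columns."""
    accuracy = 100.0 * (tp + tn) / (tp + fp + tn + fn)
    sensitivity = 100.0 * tp / (tp + fn)   # recall on the COVID-19 (positive) class
    specificity = 100.0 * tn / (tn + fp)
    precision = 100.0 * tp / (tp + fp)
    f1 = 2.0 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Hypothetical confusion-matrix counts, for illustration only:
acc, sens, spec, prec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```

One consequence worth noting: when sensitivity and specificity are equal (as in the DS_3 rows), accuracy equals that common value regardless of class balance.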
Figure 7Mean of performance measurements for the proposed techniques.
Friedman test for our proposed techniques.
| Model | Mean | Std deviation | Max | Mean rank | Chi-square | Asymp sig. | ES |
|---|---|---|---|---|---|---|---|
| SVM | 81.35 | 5.63 | 88.19 | 1.40 | 12.12 | 0.007 | 0.808 (strong effect) |
| AlexNet | 87.27 | 0.84 | 88.03 | 2.80 | |||
| GoogLeNet | 85.68 | 0.74 | 86.92 | 1.80 | |||
| BOA + NN | 90.43 | 0.70 | 91.41 | 4.00 |
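A Friedman test of this form can be run with SciPy. The sketch below feeds in the accuracy columns of the results table as an illustration; the paper's reported chi-square of 12.12 was presumably computed over its own set of performance measurements, so this example is not expected to reproduce it.

```python
from scipy.stats import friedmanchisquare

# Per-dataset accuracies (DS_1..DS_6) from the results table, one list per model.
svm       = [80.96, 62.47, 99.23, 83.83, 98.94, 61.11]
alexnet   = [88.82, 63.94, 100.0, 88.21, 99.00, 80.56]
googlenet = [89.12, 51.08, 100.0, 82.47, 99.67, 87.50]
boa_nn    = [91.54, 70.46, 100.0, 89.86, 99.72, 91.30]

# The test ranks the four models within each dataset and checks whether
# the mean ranks differ more than chance would allow.
stat, pvalue = friedmanchisquare(svm, alexnet, googlenet, boa_nn)
```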
Wilcoxon signed-rank test for pairwise model comparisons.
| | AlexNet-SVM | GoogLeNet-SVM | BOA-SVM | GoogLeNet-AlexNet | BOA-AlexNet | BOA-GoogLeNet |
|---|---|---|---|---|---|---|
| Z | −1.753b | −1.483b | −2.023b | −2.023c | −2.023b | −2.023b |
| Asymp. sig. (2-tailed) | 0.080 | 0.138 | 0.043 | 0.043 | 0.043 | 0.043 |
b. Based on negative ranks. c. Based on positive ranks.
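A pairwise comparison of this kind is available as `scipy.stats.wilcoxon`. As a hedged illustration for the BOA-SVM pair, the sketch below uses the accuracy columns of the results table; the Z of −2.023 and p of 0.043 above were presumably computed on the paper's own measurements, so the numbers here will differ.

```python
from scipy.stats import wilcoxon

# Per-dataset accuracies (DS_1..DS_6) from the results table.
svm    = [80.96, 62.47, 99.23, 83.83, 98.94, 61.11]
boa_nn = [91.54, 70.46, 100.0, 89.86, 99.72, 91.30]

# Signed-rank test on the paired differences svm[i] - boa_nn[i];
# with n = 6 pairs and no ties, SciPy uses the exact distribution.
stat, pvalue = wilcoxon(svm, boa_nn)
```

Because BOA + NN beats SVM on every dataset, all six differences share one sign, giving the smallest possible test statistic.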
Spearman's correlation.
| | SVM | AlexNet | GoogLeNet | BOA + NN |
|---|---|---|---|---|
| SVM | | | | |
| AlexNet | 0.700 | | | |
| GoogLeNet | −0.200 | 0.300 | | |
| BOA + NN | 0.900 | 0.400 | −0.600 | |
Correlation is significant at the 0.05 level (2-tailed).
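Spearman's rank correlation can likewise be computed with SciPy. The sketch below correlates the SVM and BOA + NN accuracy columns from the results table for illustration; the 0.900 reported in the matrix above was presumably computed over the paper's own measurements, so this example is not expected to match it.

```python
from scipy.stats import spearmanr

# Per-dataset accuracies (DS_1..DS_6) from the results table.
svm    = [80.96, 62.47, 99.23, 83.83, 98.94, 61.11]
boa_nn = [91.54, 70.46, 100.0, 89.86, 99.72, 91.30]

# Spearman's rho correlates the within-column ranks, not the raw values,
# so it measures how consistently the two models order the datasets.
rho, pvalue = spearmanr(svm, boa_nn)
```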
Figure 8Correlation between BOA + NN and SVM.