| Literature DB >> 34971978 |
Gayathri J L, Bejoy Abraham, Sujarani M S, Madhu S Nair.
Abstract
Infectious diseases have affected the lives of many people and caused great hardship all over the world. COVID-19, caused by the newly discovered virus Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) first identified in 2019, was declared a pandemic by the World Health Organisation. RT-PCR is considered the gold standard for COVID-19 detection, but limited RT-PCR resources make early diagnosis of the disease a challenge. Radiographic images such as ultrasound, CT scans, and X-rays can be used for the detection of the deadly disease, and deep learning models built on such images can assist in countering the outbreak of the virus. This paper presents a computer-aided detection model utilizing chest X-ray images for combating the pandemic. Several pre-trained networks and their combinations have been used to develop the model: features extracted from the pre-trained networks are reduced in dimensionality with a sparse autoencoder and classified by a Feed Forward Neural Network (FFNN). Two publicly available chest X-ray image datasets, together comprising 504 COVID-19 images and 542 non-COVID-19 images, were combined to train the model. The method achieved an accuracy of 0.9578 and an AUC of 0.9821 using the combination of InceptionResnetV2 and Xception. Experiments show that the accuracy of the model improves when the sparse autoencoder is used as the dimensionality reduction technique.
Keywords: CNN; COVID-19; Computer-aided detection; Feed forward neural network; Sparse autoencoder
Year: 2021 PMID: 34971978 PMCID: PMC8668604 DOI: 10.1016/j.compbiomed.2021.105134
Source DB: PubMed Journal: Comput Biol Med ISSN: 0010-4825 Impact factor: 4.589
Pre-trained networks.
| Network | Depth | Input Size |
|---|---|---|
| InceptionResnetV2 | 164 | 299 × 299 |
| Resnet101 | 101 | 224 × 224 |
| Xception | 72 | 299 × 299 |
| EfficientnetB0 | 82 | 224 × 224 |
| Darknet53 | 53 | 256 × 256 |
Fig. 1 Architecture of the proposed method.
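The Multi-CNN stage of the architecture fuses the deep features extracted by two pre-trained backbones before dimensionality reduction. A minimal toy sketch in Python, where `toy_cnn` is a hypothetical stand-in for a real pre-trained extractor (not the paper's code) and fusion is simple concatenation:

```python
import random

def toy_cnn(image, out_dim, seed):
    """Hypothetical stand-in for a pretrained CNN feature extractor:
    a fixed random projection of the flattened image followed by ReLU."""
    rng = random.Random(seed)
    weights = [[rng.uniform(-1.0, 1.0) for _ in image] for _ in range(out_dim)]
    return [max(0.0, sum(w * x for w, x in zip(row, image))) for row in weights]

def fuse(feats_a, feats_b):
    """Multi-CNN fusion: concatenate the two feature vectors."""
    return feats_a + feats_b

rng = random.Random(0)
image = [rng.random() for _ in range(16)]            # toy "X-ray" input
fused = fuse(toy_cnn(image, 8, seed=1), toy_cnn(image, 8, seed=2))
print(len(fused))  # 16: fused descriptor, ready for dimensionality reduction
```

In the actual method, each backbone would contribute a much longer feature vector (its penultimate-layer activations), and the fused vector is what the sparse autoencoder compresses.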
Fig. 2 Graphical analysis of model performance using Single-CNN without using sparse autoencoder.
Performance of the model using CNN and FFNN, without using sparse autoencoder.
| Pre-trained model | Specificity | Sensitivity | F1-Score | Precision | Accuracy | AUC | MCC |
|---|---|---|---|---|---|---|---|
| InceptionResnetV2+Xception | 0.9206 | 0.9350 | 0.9237 | 0.9127 | 0.9273 | 0.9750 | 0.8546 |
| InceptionResnetV2+Resnet101 | 0.9206 | 0.9350 | 0.9237 | 0.9127 | 0.9273 | 0.9750 | 0.8546 |
| Xception + Resnet101 | 0.9271 | 0.9100 | 0.9163 | 0.9226 | 0.9187 | 0.9733 | 0.8374 |
| Darknet53+Resnet101 | 0.9117 | 0.9458 | 0.9228 | 0.9008 | 0.9272 | 0.9564 | 0.8552 |
| Xception + EfficientnetB0 | 0.9225 | 0.9389 | 0.9147 | 0.9266 | 0.9301 | 0.9787 | 0.8604 |
| Xception | 0.9145 | 0.9016 | 0.9087 | 0.9051 | 0.9081 | 0.9536 | 0.8163 |
| Darknet53 | 0.8379 | 0.8891 | 0.8115 | 0.8485 | 0.8604 | 0.9472 | 0.7222 |
| Resnet101 | 0.9021 | 0.9451 | 0.8889 | 0.9162 | 0.9216 | 0.9727 | 0.8441 |
| EfficientnetB0 | 0.9452 | 0.9188 | 0.9425 | 0.9305 | 0.9321 | 0.9756 | 0.8645 |
| InceptionResnetV2 | 0.9066 | 0.9243 | 0.8968 | 0.9104 | 0.9149 | 0.9599 | 0.8298 |
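The columns in the tables follow the standard confusion-matrix definitions. A self-contained Python sketch of those formulas (the counts below are illustrative, chosen only to roughly match the 504/542 class split; they are not the paper's actual confusion matrix):

```python
import math

def metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics used in the tables."""
    sens = tp / (tp + fn)                              # sensitivity (recall)
    spec = tn / (tn + fp)                              # specificity
    prec = tp / (tp + fp)                              # precision
    acc = (tp + tn) / (tp + tn + fp + fn)              # accuracy
    f1 = 2 * prec * sens / (prec + sens)               # F1-score
    mcc = (tp * tn - fp * fn) / math.sqrt(             # Matthews corr. coef.
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"specificity": spec, "sensitivity": sens, "f1": f1,
            "precision": prec, "accuracy": acc, "mcc": mcc}

# Illustrative counts only (504 COVID-19 vs. 542 non-COVID-19 images)
m = metrics(tp=482, tn=520, fp=22, fn=22)
print(round(m["sensitivity"], 4), round(m["specificity"], 4))  # 0.9563 0.9594
```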
Fig. 3 Graphical analysis of model performance using Multi-CNN without using sparse autoencoder.
Performance of the proposed model using Sparse Autoencoder and FFNN.
| Pre-trained model | Specificity | Sensitivity | F1-Score | Precision | Accuracy | AUC | MCC |
|---|---|---|---|---|---|---|---|
| InceptionResnetV2+Xception | 0.9594 | 0.9563 | 0.9563 | 0.9563 | 0.9578 | 0.9821 | 0.9158 |
| Darknet53+Resnet101 | 0.9423 | 0.9613 | 0.9487 | 0.9365 | 0.9511 | 0.9800 | 0.9025 |
| InceptionResnetV2+Resnet101 | 0.9227 | 0.9644 | 0.9378 | 0.9504 | 0.9417 | 0.9805 | 0.8842 |
| Xception + Resnet101 | 0.9454 | 0.9537 | 0.9471 | 0.9405 | 0.9493 | 0.9893 | 0.8986 |
| Xception + EfficientnetB0 | 0.9436 | 0.9536 | 0.9460 | 0.9385 | 0.9483 | 0.9792 | 0.8967 |
Fig. 4 Graphical analysis of Multi-CNN model performance after performing dimension reduction using sparse autoencoder.
Fig. 5 Comparison of accuracy before and after employing sparse autoencoder as the dimensionality reduction technique.
Parameter setting for Sparse autoencoder & FFNN.
| Parameters (Sparse autoencoder) | Value |
|---|---|
| Hidden Size | 1000 |
| Random Seed | 2 |
| L2WeightRegularization | 0.001 |
| Sparsity Regularization | 4.000 |
| Sparsity Proportion | 0.0500 |
| Decoder Transfer Function | purelin |

| Parameters (FFNN) | Value |
|---|---|
| Hidden Layers | 1 |
| Random Seed | 1 |
| Max Epochs | 100 |
| K-Fold | 10 |
| trainFcn | trainlm |
| Net | feedforwardnet |
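The parameter names (`L2WeightRegularization`, `purelin`, `trainlm`) match MATLAB's `trainAutoencoder` and neural-network toolbox conventions. Assuming that convention, the sparse autoencoder is trained by minimizing a cost of the form (a sketch, with the coefficients taken from the table):

```latex
E = \frac{1}{N}\sum_{n=1}^{N}\lVert x_n - \hat{x}_n \rVert^2
    + \lambda\,\Omega_{\text{weights}}
    + \beta\,\Omega_{\text{sparsity}},
\qquad
\Omega_{\text{sparsity}} = \sum_{j}\mathrm{KL}\!\left(\rho \,\Vert\, \hat{\rho}_j\right)
  = \sum_{j}\left[\rho \log\frac{\rho}{\hat{\rho}_j}
  + (1-\rho)\log\frac{1-\rho}{1-\hat{\rho}_j}\right]
```

Here $\lambda = 0.001$ is the L2 weight regularization, $\beta = 4$ the sparsity regularization, $\rho = 0.05$ the sparsity proportion, and $\hat{\rho}_j$ the mean activation of hidden unit $j$; `purelin` denotes an identity (linear) decoder transfer function.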
Performance achieved using various dimensionality reduction/feature selection techniques, in combination with the proposed pre-trained model and FFNN.
| Pre-trained model | Method | Specificity | Sensitivity | F1-Score | Precision | Accuracy | AUC |
|---|---|---|---|---|---|---|---|
| InceptionResnetV2+Xception | Proposed | 0.9594 | 0.9563 | 0.9563 | 0.9563 | 0.9578 | 0.9821 |
| InceptionResnetV2+Xception | CFS | 0.9031 | 0.8146 | 0.8582 | 0.9067 | 0.8556 | 0.9326 |
| InceptionResnetV2+Xception | PCA | 0.9228 | 0.8595 | 0.8900 | 0.9226 | 0.8899 | 0.9469 |
Performance achieved using various classifiers, in combination with the proposed pre-trained model and sparse autoencoder.
| Pre-trained model | Classifier | Specificity | Sensitivity | F1-Score | Precision | Accuracy | AUC |
|---|---|---|---|---|---|---|---|
| InceptionResnetV2+Xception | Bayesnet | 0.8299 | 0.7038 | 0.7713 | 0.8532 | 0.7562 | 0.7970 |
| InceptionResnetV2+Xception | SVM | 0.7949 | 0.7899 | 0.7828 | 0.7758 | 0.7925 | 0.7920 |
| InceptionResnetV2+Xception | KNN | 0.7851 | 0.7930 | 0.7761 | 0.7599 | 0.7887 | 0.7850 |
| InceptionResnetV2+Xception | Random Forest | 0.8667 | 0.7450 | 0.8073 | 0.8810 | 0.7973 | 0.8790 |
| InceptionResnetV2+Xception | Adaboost | 0.8017 | 0.7079 | 0.7587 | 0.8175 | 0.7495 | 0.8300 |
| InceptionResnetV2+Xception | Proposed | 0.9594 | 0.9563 | 0.9563 | 0.9563 | 0.9578 | 0.9821 |
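Per the parameter table, the reported results are obtained under 10-fold cross-validation (K-Fold = 10). A minimal index-splitting sketch in Python, where the sample count 1046 = 504 + 542 comes from the dataset description:

```python
import random

def kfold_indices(n_samples, k=10, seed=1):
    """Shuffle sample indices and deal them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(504 + 542, k=10)
# Each fold serves once as the test set; the remaining nine train the FFNN.
print([len(f) for f in folds])  # six folds of 105 and four of 104
```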
Comparison of the results achieved by the proposed method with other state-of-the-art techniques.
| Method | Number of images | Specificity | Sensitivity | F1-Score | Precision | Accuracy | AUC |
|---|---|---|---|---|---|---|---|
| Proposed | 504 COVID-19 vs. 542 non-COVID-19 | 0.9594 | 0.9563 | 0.9563 | 0.9563 | 0.9578 | 0.9821 |
| Abraham et al. | 453 COVID-19 vs. 497 non-COVID-19 | - | 0.9850 | 0.9140 | 0.8530 | 0.9115 | 0.9630 |
| Li et al. | 231 COVID-19 vs. 1583 Normal | 0.9191 | 0.9297 | - | - | 0.9723 | 0.9213 |
| Pandit et al. | 224 COVID-19 vs. 1204 non-COVID-19 | 0.9727 | 0.9264 | - | - | 0.9600 | - |
| Hassantabar et al. | 315 COVID-19 vs. 367 non-COVID-19 | - | 0.9610 | - | - | 0.9320 | - |
| Chandra et al. | 696 COVID-19 vs. 696 non-COVID-19 | - | - | - | - | 0.9132 | 0.8310 |
| Sethy et al. | 25 COVID-19 vs. 25 non-COVID-19 | - | - | 0.9141 | - | 0.9538 | - |
| Waheed et al. | 403 COVID-19 vs. 721 Normal | 0.9700 | 0.9000 | - | 0.9560 | 0.9500 | - |
| Panwar et al. | 142 COVID-19 vs. 142 Normal | 0.9700 | 0.9000 | - | 0.9560 | 0.8810 | 0.8809 |
| Hemdan et al. | 25 COVID-19 vs. 25 non-COVID-19 | - | 1.0000 | 0.9100 | 0.8300 | - | - |
| Zhang et al. | 100 COVID-19 vs. 1431 non-COVID-19 | - | 0.9600 | - | - | - | 0.9500 |
| Ismael and Sengur | 180 COVID-19 vs. 200 non-COVID-19 | - | - | - | - | 0.9470 | - |
| Toraman et al. | 231 COVID-19 vs. 500 non-COVID-19 | 0.8095 | 0.9600 | 0.9375 | 0.9160 | 0.9124 | - |