Saleh Albahli, Nasir Ayub, Muhammad Shiraz.
Abstract
The 2019 novel coronavirus (COVID-19), which originated in China, has spread rapidly among people in other countries. According to the World Health Organization (WHO), by the end of January more than 104 million people had been affected by COVID-19, with more than 2 million deaths. The number of COVID-19 test kits available in hospitals is limited owing to the rise in cases, so an automatic detection system is needed as a fast alternative diagnostic to curb the spread of COVID-19 among humans. For this purpose, three different models, DenseNet, InceptionV3, and Inception-ResNetV4, are proposed in this analysis for diagnosing patients infected with coronavirus pneumonia from chest X-ray radiographs. Receiver Operating Characteristic (ROC) analyses and confusion matrices are examined for all three models using 5-fold cross-validation. Our simulations show that the pre-trained DenseNet model achieves the best classification accuracy at 92%, compared with the two other proposed models (83.47% for InceptionV3 and 85.57% for Inception-ResNetV4).
Keywords: Biomedical imaging; COVID-19; Convolutional neural network; Deep learning; DenseNet; ResNet
Year: 2021 PMID: 34191925 PMCID: PMC8225990 DOI: 10.1016/j.asoc.2021.107645
Source DB: PubMed Journal: Appl Soft Comput ISSN: 1568-4946 Impact factor: 6.725
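The abstract's 5-fold cross-validation protocol can be sketched as follows. This is a minimal NumPy illustration of stratified folding (class ratios preserved per fold), not the authors' code; the label counts are arbitrary placeholders, not the paper's dataset sizes.

```python
import numpy as np

def stratified_kfold(labels, k=5, seed=42):
    """Split sample indices into k folds, preserving class ratios,
    as in a 5-fold cross-validation protocol."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        # Deal the shuffled samples of this class round-robin into folds.
        for i, sample in enumerate(idx):
            folds[i % k].append(sample)
    return [np.array(sorted(f)) for f in folds]

# Illustrative 3-class labels (normal / pneumonia / COVID-19).
labels = np.array([0] * 80 + [1] * 55 + [2] * 10)
folds = stratified_kfold(labels)
# Each of the 5 folds holds 2 of the 10 minority-class samples,
# so every fold sees the rare class at the same rate.
```

Stratification matters here because the COVID-19 class is far smaller than the other two; plain random folds could leave a fold with almost no COVID-19 samples.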
Overview of previous approaches, their strengths and gaps, and the improvement of the proposed DenseNet model over them.
| Reference | Proposed approach | Main features |
|---|---|---|
| Apostolopoulos et al. | Used transfer learning models based on MobileNetV2, | Used X-ray and achieved a maximum of 93% accuracy |
| Wang and Wong | Introduced their own model named COVID-Net, the first | Used X-ray and achieved a maximum of 92% accuracy |
| Song et al. | Proposed their own model DRE-Net and compared its | Used CT and achieved a maximum of 86% accuracy |
| Sarker et al. | Used transfer learning on DenseNet-121 network | Used X-rays and achieved 85.35% accuracy with 3 classes |
| Hasan et al. | Used DenseNet-121 network | Used CT and achieved 92% accuracy |
| Zheng et al. | Proposed their own model DeCoVNet for classification | Used CT and achieved 97% accuracy |
| Xu et al. | Proposed a modified version of a ResNet-18-based CNN network | Used X-ray and achieved 86% accuracy |
| Minaee et al. | Used ResNet50, ResNet18, DenseNet-121, and | Used X-ray and achieved 97.6% AUC |
| Ozturk et al. | Proposed their own model DarkCovidNet for classification of COVID-19 | Used X-ray and achieved 97% accuracy |
| Ardakani et al. | Used 10 CNN networks including AlexNet and | Used CT and achieved a maximum of 99% accuracy |
| Li et al. | Proposed their own model COV-Net for classifying 3 classes | Used CT and achieved 96% accuracy |
| Yang et al. | Used ResNet-50 and DenseNet-169 networks for classification | Used CT and achieved 79.5% accuracy with DenseNet-169 |
| Abbas et al. | Proposed their own model DeTrac-ResNet18 CNN that uses | Used X-ray and achieved a maximum of 95.12% accuracy |
| Chen et al. | Used UNet++ along with Keras for segmentation of | Used CT and achieved 95.24% accuracy |
Training dataset.
| Pneumonia | COVID-19 | Normal |
|---|---|---|
| 5463 | 490 | 7966 |
Test dataset.
| Pneumonia | COVID-19 | Normal |
|---|---|---|
| 594 | 100 | 885 |
Fig. 1 Normal chest X-ray.
Fig. 2 COVID-19 affected chest X-ray.
Image augmentation parameter details.
| Parameter | Value |
|---|---|
| Samplewise center | True |
| Samplewise std normalization | True |
| Horizontal flip | True |
| Vertical flip | False |
| Height shift range | 0.05 |
| Width shift range | 0.1 |
| Rotation range | 10 |
| Shear range | 0.1 |
| Fill mode | Nearest |
| Zoom range | 0.15 |
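The augmentation table maps directly onto Keras `ImageDataGenerator` keyword arguments. The dict below restates those parameters in plain Python; the commented-out generator call is an illustrative assumption about how they would be consumed, since the paper does not print its code.

```python
# Augmentation parameters from the table above, expressed as the
# keyword arguments they correspond to in Keras' ImageDataGenerator.
augmentation_params = {
    "samplewise_center": True,
    "samplewise_std_normalization": True,
    "horizontal_flip": True,
    "vertical_flip": False,      # flipping chest X-rays upside down is avoided
    "height_shift_range": 0.05,
    "width_shift_range": 0.1,
    "rotation_range": 10,        # degrees
    "shear_range": 0.1,
    "fill_mode": "nearest",
    "zoom_range": 0.15,
}

# With TensorFlow installed, these would typically be used as:
# from tensorflow.keras.preprocessing.image import ImageDataGenerator
# datagen = ImageDataGenerator(**augmentation_params)
```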
Fig. 3 Training loss curve when samplewise_center augmentation is applied (without model learning).
Fig. 4 Training loss curve when samplewise_std_normalization augmentation is applied (with model learning).
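What these two augmentations do numerically can be shown with NumPy: samplewise centering subtracts each image's own mean, and samplewise std normalization then divides by its own standard deviation. This is a minimal sketch of the standard definitions, not the authors' pipeline.

```python
import numpy as np

def samplewise_normalize(image):
    """Center an image on its own mean (samplewise_center), then
    scale by its own std (samplewise_std_normalization)."""
    image = image.astype(np.float64)
    image -= image.mean()
    image /= image.std() + 1e-7   # epsilon guards against flat images
    return image

x = np.array([[0.0, 50.0], [100.0, 250.0]])
y = samplewise_normalize(x)
# After normalization the image has ~zero mean and ~unit std,
# so pixel intensity scale no longer varies between X-rays.
```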
Fig. 5 Contribution before introducing weights for each class.
Fig. 6 Contribution after introducing weights for each class.
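With the training counts from the dataset table above (7966 normal, 5463 pneumonia, 490 COVID-19), per-class weights can be derived as total / (n_classes × class_count). This is the standard "balanced" weighting heuristic, assumed here for illustration; the paper does not state its exact weighting formula.

```python
# Training-set counts from the paper's dataset table.
counts = {"Normal": 7966, "Pneumonia": 5463, "COVID-19": 490}

total = sum(counts.values())   # 13919 images
n_classes = len(counts)

# Balanced heuristic: rarer classes receive larger weights, so the
# 490 COVID-19 images are not drowned out by the majority classes.
class_weights = {
    name: total / (n_classes * count) for name, count in counts.items()
}
# COVID-19 ends up weighted roughly 16x heavier than Normal.
```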
Fig. 7 Labeling after choosing the right value of parameter c.
Value of parameter c vs. accuracy of the COVID-19 class.
| Value of C | Accuracy of the COVID-19 class |
|---|---|
| Less than 0 | Decreases |
| 0–5 | Increases |
| Greater than 5 | No learning; accuracy fluctuates |
Fig. 8 Different classes after the optimal value of parameter c.
Fig. 9 Residual layer: building block of ResNet [40].
Fig. 10 Architecture of different ResNet models [40].
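The residual building block of Fig. 9 computes output = activation(F(x) + x): the layer learns a residual on top of an identity skip connection. A minimal NumPy sketch, with an arbitrary linear transform standing in for the convolutional branch:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, weight):
    """y = ReLU(F(x) + x): the identity shortcut adds the input back
    onto the transformed branch, as in the ResNet building block."""
    fx = relu(x @ weight)     # stand-in for the conv/BN branch F(x)
    return relu(fx + x)       # shortcut addition, then activation

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 8)) * 0.01   # near-zero branch weights
y = residual_block(x, w)
# With near-zero weights the block is close to ReLU(x): the identity
# path dominates, which is what keeps very deep ResNets trainable.
```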
Training overview (Epoch vs. Learning rate).
| Epoch | Learning rate |
|---|---|
| 1–20 | 0.003 |
| 20–30 | 0.003 |
| 30–40 | 0.003 |
| 40–50 | 0.003 |
| After 50 | 0.003 |
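The schedule above starts from a learning rate of 0.003 and is organized around the epoch boundaries 20, 30, 40, and 50. A step-decay helper of the kind commonly paired with such epoch tables is sketched below; the decay factor of 10 at each boundary is an assumption, since the table does not legibly state how the rate changes across the ranges.

```python
def learning_rate(epoch, base_lr=0.003, boundaries=(20, 30, 40, 50), decay=10.0):
    """Step-decay schedule: divide base_lr by `decay` each time the
    epoch passes one of the boundary epochs from the training table."""
    steps = sum(1 for b in boundaries if epoch >= b)
    return base_lr / (decay ** steps)

# learning_rate(1)  -> 0.003   (epochs before the first boundary)
# learning_rate(25) -> 0.0003  (after the epoch-20 boundary)
```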
Fig. 12 Basic architecture of DenseNet models [41].
Fig. 13 Inception-V3 architecture.
Comparison of models' training and testing accuracy.
| Model | Average training accuracy | Test accuracy | Accuracy of the COVID class |
|---|---|---|---|
| R152 × 4 | 86% | 85.8% | |
| R101 × 3 | 87% | 83.47% | 90% |
| R101 × 1 | 93% | 91.13% | 91% |
| DenseNet (with an additional Dense layer of 512 perceptrons) | 93.5% | 92% | 85% |
Comparison of models' ROC and AUC.
| Model | Sensitivity and specificity (%) | Confusion matrix | Accuracy of the COVID-19 class | Accuracy of the pneumonia class | Accuracy of the normal class | ROC AUC |
|---|---|---|---|---|---|---|
| R101 × 1 | 91 and 96 | 541, 16, 37, | 91 | 91.08 | 91.19 | 98 |
| DenseNet | 85 and 99 | 554, 2, 38, | 85 | 93.27 | 92.43 | 98 |
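Sensitivity and specificity in the table are the standard confusion-matrix ratios, with COVID-19 treated as the positive class. A small helper, using made-up counts for illustration since the table's confusion-matrix entries are truncated:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of positives caught.
    Specificity = TN / (TN + FP): fraction of negatives cleared."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical binary (COVID-19 vs. rest) counts, illustration only.
sens, spec = sensitivity_specificity(tp=91, fn=9, tn=1440, fp=39)
# sens = 0.91, spec ≈ 0.974
```

A high-sensitivity model misses few COVID-19 cases, which is the clinically important property for a screening tool like the one proposed here.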
Fig. 11 Training loss vs. epoch.
Fig. 14 R101 × 1 TPR vs. FPR.
Fig. 15 DenseNet TPR vs. FPR.