| Literature DB >> 35116074 |
S V Kogilavani1, J Prabhu2, R Sandhiya1, M Sandeep Kumar2, UmaShankar Subramaniam3,4, Alagar Karthick5, M Muhibbullah6, Sharmila Banu Sheik Imam7.
Abstract
SARS-CoV-2 is the novel virus responsible for the COVID-19 pandemic. The first cases of COVID-19 were reported in the city of Wuhan in 2019. Infected patients present with pneumonia-like symptoms, as the virus attacks the body's respiratory organs and makes breathing difficult. The disease is diagnosed with a real-time reverse transcriptase-polymerase chain reaction (RT-PCR) kit; because of kit shortages, suspected patients cannot always be tested promptly, allowing the disease to spread. As an alternative, radiologists have examined changes in radiological imaging such as computed tomography (CT), which produces comprehensive, high-quality images of the body. Deep learning algorithms applied to a suspected patient's CT scan can distinguish a healthy individual from a COVID-19 patient, and many deep learning methods have been proposed for this task. The proposed work evaluates the CNN architectures VGG16, DenseNet121, MobileNet, NASNet, Xception, and EfficientNet. The dataset contains 3873 CT scan images in total, labeled "COVID" and "Non-COVID," and is divided into training, test, and validation sets. The accuracies obtained are 97.68% for VGG16, 97.53% for DenseNet121, 96.38% for MobileNet, 89.51% for NASNet, 92.47% for Xception, and 80.19% for EfficientNet. The analysis shows that the VGG16 architecture gives better accuracy than the other architectures.
Year: 2022 PMID: 35116074 PMCID: PMC8805449 DOI: 10.1155/2022/7672196
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Figure 1. Xception architecture.
Figure 2. VGG16 architecture.
Figure 3. MobileNet architecture.
Figure 4. NASNet architecture.
Figure 5. DenseNet121 architecture.
Figure 6. EfficientNet architecture.
Figure 7. Proposed system architecture.
Parameters for training the models.
| Performance measures | VGG16 | DenseNet121 | MobileNet | Xception | NASNet | EfficientNet |
|---|---|---|---|---|---|---|
| Batch size | 16 | 16 | 16 | 16 | 16 | 16 |
| Image dimension | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224 | 224 × 224 |
| Optimizer | Adam | Adam | Adam | Adam | Adam | Adam |
| Activation function | Softmax | Softmax | Softmax | Softmax | Softmax | Softmax |
| Loss function | Binary cross-entropy | Binary cross-entropy | Binary cross-entropy | Binary cross-entropy | Binary cross-entropy | Binary cross-entropy |
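The shared settings in the table can be sketched as a Keras transfer-learning setup. The classification head below (global average pooling plus a 2-unit softmax layer) is an assumption, since the paper lists only the hyperparameters above, not the head design:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

def build_classifier(backbone_fn=VGG16, weights="imagenet",
                     input_shape=(224, 224, 3)):
    """Attach an assumed 2-class softmax head to a pretrained backbone.

    Swap backbone_fn for DenseNet121, MobileNet, NASNetMobile, Xception,
    or EfficientNetB0 to cover the other columns of the table.
    """
    base = backbone_fn(weights=weights, include_top=False,
                       input_shape=input_shape)
    base.trainable = False  # freeze pretrained features (assumed choice)
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(2, activation="softmax"),  # COVID vs. Non-COVID
    ])
    # Settings from the table: Adam optimizer, binary cross-entropy loss
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

With a 2-unit softmax head, labels would be one-hot encoded (`class_mode="categorical"` in Keras data generators), and training would use the batch size of 16 listed in the table.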
Kaggle dataset description.
| Dataset | COVID | Non-COVID |
|---|---|---|
| Training | 930 | 915 |
| Validation | 156 | 164 |
| Test | 166 | 150 |
| Total | 1252 | 1229 |
Data augmentation dataset description.
| Dataset | COVID | Non-COVID | Total |
|---|---|---|---|
| Training | 1257 | 1234 | 2491 |
| Validation | 345 | 346 | 691 |
| Test | 356 | 335 | 691 |
| Total | 1958 | 1915 | 3873 |
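The augmented counts above were presumably produced with standard image transformations; the paper does not list the specific operations, so the ones below (rotation, shifts, zoom, horizontal flip) are illustrative assumptions using Keras' `ImageDataGenerator`:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation settings (the specific operations are assumptions)
augmenter = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True,
)

# Apply one random transform to a single 224x224 RGB image array
image = np.random.rand(224, 224, 3)
params = augmenter.get_random_transform(image.shape)
augmented = augmenter.apply_transform(image, params)

# For directory-based training (assuming COVID/ and Non-COVID/ subfolders):
# flow = augmenter.flow_from_directory("data/train", target_size=(224, 224),
#                                      batch_size=16, class_mode="categorical")
```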
Confusion matrix representation.
| | Predicted Yes | Predicted No |
|---|---|---|
| Actual Yes | TP | FN |
| Actual No | FP | TN |
Where TP denotes “True Positive,” TN denotes “True Negative,” FN represents “False Negative,” and FP represents “False Positive”.
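The metrics reported in the following tables follow directly from these counts; a minimal sketch (the example counts are hypothetical, not taken from the paper):

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration
acc, prec, rec, f1 = classification_metrics(tp=90, tn=85, fp=10, fn=15)
```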
Comparison of various evaluation measures for COVID class.
| Performance measure | VGG16 | DenseNet121 | MobileNet | Xception | NASNet | EfficientNet |
|---|---|---|---|---|---|---|
| Precision | 1.00 | 0.96 | 0.99 | 0.90 | 0.96 | 0.91 |
| Recall | 0.96 | 0.99 | 0.94 | 0.96 | 0.83 | 0.46 |
| F1-score | 0.98 | 0.98 | 0.96 | 0.93 | 0.89 | 0.61 |
| Support | 357 | 357 | 357 | 357 | 357 | 315 |
Comparison of various evaluation measures for non-COVID class.
| Performance measure | VGG16 | DenseNet121 | MobileNet | Xception | NASNet | EfficientNet |
|---|---|---|---|---|---|---|
| Precision | 0.96 | 0.99 | 0.94 | 0.95 | 0.84 | 0.63 |
| Recall | 1.00 | 0.96 | 0.99 | 0.89 | 0.96 | 0.95 |
| F1-score | 0.98 | 0.97 | 0.96 | 0.92 | 0.90 | 0.76 |
| Support | 334 | 334 | 334 | 334 | 334 | 307 |
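Per-class tables like the two above are commonly generated with scikit-learn's `classification_report`; a minimal sketch on hypothetical labels (not the paper's predictions):

```python
from sklearn.metrics import classification_report

# Hypothetical ground-truth and predicted labels (0 = Non-COVID, 1 = COVID)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]

report = classification_report(y_true, y_pred,
                               target_names=["Non-COVID", "COVID"],
                               output_dict=True)
# Per-class entries: report["COVID"]["precision"], ["recall"],
# ["f1-score"], and ["support"]
```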
Figure 8. Confusion matrix of CNN models.
Figure 9. Epochs versus loss graph for all CNN models.
Figure 10. Epochs versus accuracy graph for all CNN models.
Comparison of existing system and proposed system.
| Models | Existing system | Proposed system |
|---|---|---|
| VGG16 | 79.01% | 97.68% |
| Xception | 88.03% | 92.47% |
| DenseNet121 | 89.96% | 97.53% |
| NASNet | 85.05% | 89.51% |
| EfficientNet | 93.48% | 80.19% |
Figure 11. Accuracies obtained for all CNN models.