Aravind Krishnaswamy Rangarajan, Hari Krishnan Ramachandran.
Abstract
The COVID-19 outbreak has catastrophically affected both public health systems and the world economy. Swift diagnosis of positive cases helps provide proper medical attention to infected individuals and also aids effective tracing of their contacts to break the chain of transmission. Blending Artificial Intelligence (AI) with chest X-ray images and incorporating these models into a smartphone can be handy for the accelerated diagnosis of COVID-19. In this study, publicly available datasets of chest X-ray images have been utilized for training and testing five pre-trained Convolutional Neural Network (CNN) models, namely VGG16, MobileNetV2, Xception, NASNetMobile and InceptionResNetV2. Prior to training the selected models, the number of images in the COVID-19 category was increased employing traditional augmentation and a Generative Adversarial Network (GAN). The performance of the five pre-trained CNN models utilizing the images generated with the two strategies has been compared. Among the models trained with augmented images, Xception (98%) and MobileNetV2 (97.9%) achieved the highest validation accuracy. Among the models trained with synthetic GAN images, Xception (98.1%) and VGG16 (98.6%) achieved the highest validation accuracy. The best-performing models have been further deployed in a smartphone and evaluated. The overall results suggest that VGG16 and Xception, trained with the synthetic images created using the GAN, performed better than the models trained with augmented images. Of these two models, VGG16 produced an encouraging Diagnostic Odds Ratio (DOR), with a higher positive likelihood ratio and a lower negative likelihood ratio for the prediction of COVID-19.
Keywords: COVID-19; Chest X-rays; Convolutional Neural Network; Deep learning; GAN; Smartphone application
Year: 2021 PMID: 34149202 PMCID: PMC8196480 DOI: 10.1016/j.eswa.2021.115401
Source DB: PubMed Journal: Expert Syst Appl ISSN: 0957-4174 Impact factor: 6.954
Details of previous studies on the identification of COVID-19 from chest X-ray images using deep learning models (for studies that utilized more than one model, the one with the highest accuracy is reported).
| Number of chest X-rays | Architecture | Accuracy (%) |
|---|---|---|
| 25: COVID-19 + 25: Normal | VGG16 | 90 |
| 68: COVID-19 + 1591: Pneumonia + 1203: Normal | COVID-ResNet | 96.23 |
| 69: COVID-19 + 79: Normal + 158: Pneumonia | GoogLeNet | 81.5 |
| 76: COVID-19 + 1583: Normal + 4290: Pneumonia | Deep Bayes-SqueezeNet | 98.3 |
| 99: COVID-19 + 104: Normal + 80: Pneumonia + 23: Others | GSA-DenseNet121-COVID-19 | 98.38 |
| 127: COVID-19 + 127: Normal + 127: Pneumonia | ResNet50 + SVM | 95.38 |
| 125: COVID-19 + 500: Pneumonia + 500: Normal | DarkCovidNet | 87.02 |
| 180: COVID-19 + 200: Normal | ResNet50 + SVM | 94.7 |
| 183: COVID-19 + 8066: Normal + 5521: Pneumonia | EfficientNetB3 | 93.9 |
| 184: COVID-19 + 5000: Non-COVID-19 | SqueezeNet | 92.2 |
| 224: COVID-19 + 700: Pneumonia + 504: Normal | VGG19 | 93.48 |
| 250: COVID-19 + 3520: Normal + 500: Pneumonia + 2753: Others | VGG16 | 98 |
| 250: COVID-19 + 315: Normal + 650: Pneumonia | ResNet50 & ResNet101 | 97.77 |
| 295: COVID-19 + 65: Normal + 98: Pneumonia | Features from combined MobileNetV2 & SqueezeNet | 99.2 |
| 305: COVID-19 + 305: Normal + 610: Pneumonia | CovXNet | 90.2 |
| 341: COVID-19 + 4265: Pneumonia + 2800: Normal | ResNet50 | 99.7 |
| 358: COVID-19 + 5538: Pneumonia + 8066: Normal | COVID-CAPS | 98.3 |
| 358: COVID-19 + 5538: Pneumonia + 8066: Normal | COVID-Net | 93.3 |
| 403: COVID-19 + 721: Normal | VGG16 | 95 |
| 558: COVID-19 + 10,434: Normal + 4273: Pneumonia | CSDB CNN + DFL | 97.94 |
Fig. 1 Sample X-ray images of category (a) Normal; (b) Pneumonia and (c) COVID-19.
Fig. 2 Augmentation applied on a sample image: (a) Original; (b) Flipped; (c) Rotated left with altered intensity values and (d) Rotated right with altered intensity values.
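The augmentation strategy in Fig. 2 (flip, rotations, intensity alteration) can be sketched with plain NumPy. The rotation by 90° and the intensity-scaling range below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def augment(img, rng):
    """Return simple variants of a grayscale image array (H, W):
    a horizontal flip, and left/right rotations with altered intensity.
    The 90-degree angle and the 0.8-1.2 gain range are illustrative."""
    flipped = np.fliplr(img)                 # (b) horizontal flip
    rot_left = np.rot90(img, k=1)            # (c) rotate left
    rot_right = np.rot90(img, k=-1)          # (d) rotate right
    gain = rng.uniform(0.8, 1.2)             # random intensity scale
    rot_left = np.clip(rot_left * gain, 0, 255)
    rot_right = np.clip(rot_right * gain, 0, 255)
    return [flipped, rot_left, rot_right]

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(224, 224)).astype(float)
variants = augment(img, rng)   # three extra images per original
```

Each original image yields three extra samples here, roughly matching how the COVID-19 class was expanded before training.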
Fig. 3 Developed GAN for the generation of synthetic COVID-19 images.
Fig. 4 Architecture of the (a) Generator; (b) Discriminator.
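The generator/discriminator pair of Figs. 3-4 is trained adversarially: the discriminator learns to separate real from generated samples while the generator learns to fool it. The paper's GAN is convolutional and produces X-ray images; as a minimal, hedged illustration of the same training loop, the sketch below uses a two-parameter linear generator and a logistic discriminator on 1-D toy data (the data distribution, learning rate and step count are all assumptions for illustration only).

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy "real" data: 1-D samples from N(4, 1). The paper's GAN instead
# generates 2-D chest X-ray images with convolutional layers (Fig. 4).
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

mu, sig = 0.0, 1.0      # generator params: G(z) = mu + sig * z
a, b = 0.0, 0.0         # discriminator params: D(x) = sigmoid(a*x + b)
lr = 0.05

for step in range(500):
    n = 64
    x_real, z = real_batch(n), rng.normal(0, 1, n)
    x_fake = mu + sig * z

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake))
    s_r, s_f = sigmoid(a * x_real + b), sigmoid(a * x_fake + b)
    a += lr * np.mean((1 - s_r) * x_real - s_f * x_fake)
    b += lr * np.mean((1 - s_r) - s_f)

    # Generator: ascent on the non-saturating objective log D(fake)
    s_f = sigmoid(a * x_fake + b)
    grad_x = (1 - s_f) * a          # d log D / d x_fake
    mu += lr * np.mean(grad_x)
    sig += lr * np.mean(grad_x * z)

# After training, generated samples should drift toward the real data.
samples = mu + sig * rng.normal(0, 1, 1000)
```

The same alternating update, with network weights in place of `mu`, `sig`, `a`, `b`, is what produces the synthetic COVID-19 images used in the study.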
Dataset utilized for the study.
| Categories | Dataset |
|---|---|
| Normal | 1304 |
| Pneumonia | 3804 |
| Augmented COVID-19 | 2344 |
| Synthetic COVID-19 | 2598 |
Fig. 5 Modified CNN models: (a) VGG16; (b) MobileNetV2; (c) Xception; (d) NASNetMobile and (e) InceptionResNetV2.
Validation accuracy for the five models trained with the two datasets.
| Models | Augmented COVID-19 | Synthetic COVID-19 |
|---|---|---|
| VGG16 | 97.1% | 98.6% |
| MobileNetV2 | 97.9% | 87.9% |
| Xception | 98% | 98.1% |
| NASNetMobile | 93.5% | 92.7% |
| InceptionResNetV2 | 97.1% | 97.1% |
Fig. 6 Confusion matrix using augmented COVID-19 images: (a) VGG16; (b) MobileNetV2; (c) Xception; (d) NASNetMobile and (e) InceptionResNetV2.
Performance metrics for the five models trained with augmented COVID-19 images (TPR/TNR: true positive/negative rate; FPR/FNR: false positive/negative rate; PPV/NPV: positive/negative predictive value; PL/NL: positive/negative likelihood ratio; DOR: diagnostic odds ratio).
| Models | Class | TPR | TNR | FPR | FNR | PPV | NPV | PL | NL | DOR |
|---|---|---|---|---|---|---|---|---|---|---|
| VGG16 | COVID-19 | 97.2% | 99.7% | 0.3% | 2.8% | 99.3% | 0.7% | 324 | 0.0278 | 11654.6 |
| | Pneumonia | 99.5% | 94.6% | 5.4% | 0.5% | 95.1% | 4.9% | 18.55 | 0.0056 | 3312.5 |
| | Normal | 90% | 99.9% | 0.1% | 10% | 99.6% | 0.4% | 1000 | 0.1 | 10,000 |
| MobileNetV2 | COVID-19 | 99.3% | 99.5% | 0.5% | 0.7% | 98.9% | 1.1% | 202.75 | 0.0065 | 31192.3 |
| | Pneumonia | 98.9% | 96.8% | 3.2% | 1.1% | 97% | 3% | 31.31 | 0.0108 | 2899.1 |
| | Normal | 92.3% | 99.7% | 0.3% | 7.7% | 98.7% | 1.3% | 369.20 | 0.0771 | 4788.6 |
| Xception | COVID-19 | 98.7% | 99.8% | 0.2% | 1.3% | 99.6% | 0.4% | 493.55 | 0.0130 | 37965.4 |
| NASNetMobile | COVID-19 | 92.1% | 100% | 0% | 7.9% | 100% | 0% | ∞ | 0 | ∞ |
| InceptionResNetV2 | COVID-19 | 98.2% | 99.8% | 0.2% | 1.8% | 99.5% | 0.5% | 491.45 | 0.0171 | 28739.8 |
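The likelihood ratios and DOR in these tables follow the standard screening-test definitions: PL = TPR/FPR, NL = FNR/TNR and DOR = PL/NL. A quick check against the VGG16 COVID-19 row (TPR 97.2%, TNR 99.7%):

```python
def diagnostic_metrics(tpr, tnr):
    """Positive/negative likelihood ratios and diagnostic odds ratio,
    computed from true positive and true negative rates (as fractions)."""
    fpr, fnr = 1 - tnr, 1 - tpr
    pl = tpr / fpr          # positive likelihood ratio
    nl = fnr / tnr          # negative likelihood ratio
    return pl, nl, pl / nl  # DOR = PL / NL

# VGG16, COVID-19 class, model trained with augmented images
pl, nl, dor = diagnostic_metrics(0.972, 0.997)
print(round(pl, 1), round(nl, 4), round(dor, 1))
# prints: 324.0 0.0281 11536.7
```

The computed PL matches the tabulated 324 exactly; the NL (0.0281) and DOR (≈11537) differ slightly from the tabulated 0.0278 and 11654.6, presumably because the published rates are rounded versions of the underlying confusion-matrix counts.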
Fig. 7 Confusion matrix using synthetic COVID-19 images: (a) VGG16; (b) MobileNetV2; (c) Xception; (d) NASNetMobile and (e) InceptionResNetV2.
Performance metrics for the five models trained with synthetic COVID-19 images (abbreviations as in the previous table).
| Models | Class | TPR | TNR | FPR | FNR | PPV | NPV | PL | NL | DOR |
|---|---|---|---|---|---|---|---|---|---|---|
| VGG16 | COVID-19 | 99.2% | 99.7% | 0.3% | 0.8% | 99.4% | 0.6% | 330.73 | 0.0078 | 42401.3 |
| MobileNetV2 | COVID-19 | 97.3% | 90.4% | 9.6% | 2.7% | 83.7% | 16.3% | 10.15 | 0.0298 | 340.6 |
| Xception | COVID-19 | 99.4% | 99.8% | 0.2% | 0.6% | 99.6% | 0.4% | 497.05 | 0.0058 | 85698.3 |
| NASNetMobile | COVID-19 | 90.4% | 100% | 0% | 9.6% | 100% | 0% | ∞ | 0.096 | ∞ |
| InceptionResNetV2 | COVID-19 | 99.2% | 99.2% | 0.8% | 0.8% | 98.5% | 1.5% | 125.59 | 0.0078 | 16101.3 |
Fig. 8 A sample screenshot of the diagnosis using X-ray images with the smartphone application integrated with the deep learning model: (a) COVID-19; (b) Normal and (c) Pneumonia.
Comparison of the best-performing models integrated with the smartphone application (PL, NL and DOR refer to the diagnostic test for COVID-19).
| Models | Memory space | Average prediction time | PL | NL | DOR |
|---|---|---|---|---|---|
| MobileNetV2 | 9.84 MB | 169.2 ms | 9.7 | 0 | ∞ |
| Xception | 95.3 MB | 2281.7 ms | 27.3 | 0 | ∞ |
| Xception with synthetic images | | | 10.6 | 0 | ∞ |
| VGG16 | 106 MB | 3627.2 ms | 97.9 | 0.021 | 4617.9 |