Bhawna Nigam, Ayan Nigam, Rahul Jain, Shubham Dodia, Nidhi Arora, B Annappa.
Abstract
In recent months, a novel virus named Coronavirus has emerged and become a pandemic. The virus is spreading not only among humans but is also affecting animals. The first case of Coronavirus was registered in the city of Wuhan, Hubei province, China, on 31 December 2019. Coronavirus-infected patients display symptoms very similar to pneumonia; the virus attacks the respiratory organs of the body, causing difficulty in breathing. The disease is diagnosed using a Real-Time Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) kit and requires time in the laboratory to confirm the presence of the virus. Due to insufficient availability of the kits, suspected patients cannot be tested in time, which in turn increases the chance of spreading the disease. To overcome this problem, radiologists observed the changes appearing in radiological images such as X-ray and CT scans. Using deep learning algorithms on a suspected patient's X-ray or Computed Tomography (CT) scan, a healthy person can be differentiated from a patient affected by Coronavirus. In this paper, popular deep learning architectures are used to develop Coronavirus diagnostic systems: VGG16, DenseNet121, Xception, NASNet, and EfficientNet. Multiclass classification is performed with three classes: COVID-19-positive patients, normal patients, and an "other" class, which includes chest X-ray images of pneumonia, influenza, and other illnesses related to the chest region. The accuracies obtained for VGG16, DenseNet121, Xception, NASNet, and EfficientNet are 79.01%, 89.96%, 88.03%, 85.03%, and 93.48%, respectively. Deep learning with radiologic images is necessary in this critical situation, as it provides radiologists with a fast and accurate second opinion.
These deep learning Coronavirus detection systems can also be useful in regions where expert physicians and well-equipped clinics are not easily accessible.
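All five architectures listed in the abstract end in a softmax output layer over the three classes (COVID, Normal, Other). A minimal sketch of that final classification step is shown below; the logit values are hypothetical, purely for illustration.

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max logit before exponentiating."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

CLASSES = ["COVID", "Normal", "Other"]

# Hypothetical raw scores from a network's final layer for one X-ray image.
logits = [2.1, 0.3, -0.5]
probs = softmax(logits)                       # probabilities summing to 1
predicted = CLASSES[probs.index(max(probs))]  # class with the highest probability
```

The predicted label is simply the argmax of the three class probabilities; here the first (COVID) logit dominates.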
Keywords: COVID-19; Coronavirus; Deep learning; Pandemic
Year: 2021 PMID: 33746370 PMCID: PMC7962920 DOI: 10.1016/j.eswa.2021.114883
Source DB: PubMed Journal: Expert Syst Appl ISSN: 0957-4174 Impact factor: 6.954
Comparison of various deep learning COVID-19 diagnostic systems.
| Dataset(s) used | Techniques used | Performance measures | Remarks |
|---|---|---|---|
| (a) GitHub repository from Joseph Cohen, (b) Radiopaedia, (c) Italian Society of Medical and Interventional Radiology (SIRM), (d) Radiological Society of North America (RSNA), (e) Kermany | Transfer learning using CNNs | Sensitivity = 92.85%, Specificity = 98.75%, Accuracy (2-class) = 98.75%, Accuracy (3-class) = 93.48% | A multi-class classification using VGGNet with high performance is achieved. The work has a few drawbacks: 1. The number of images used for COVID-19 patients is small. 2. Some of the pneumonia cases are taken from old records, and no match was found between those records and the images collected from COVID-19 patients. |
| (a) COVID-19 Image Data Collection, (b) Chest X-ray Dataset Initiative, (c) ActualMed COVID-19 Chest X-ray Dataset Initiative, (d) RSNA Pneumonia Detection Challenge dataset, (e) COVID-19 radiography database | COVID-Net | Accuracy = 93.3% | The proposed architecture uses combinations of 1 × 1 convolution blocks, making it lighter with fewer parameters and thus reducing the computational complexity of the network. The model performs well in terms of accuracy, but there is still room to improve its sensitivity and Positive Predictive Value (PPV). |
| Dr. Joseph Cohen (GitHub repository) | Deep CNN and ResNet50 | Accuracy = 98%, Specificity = 100%, Recall = 96% | Deep architectures such as Deep CNN, Inception, and Inception-ResNet are used. The main drawback is the number of images used to build and test the model: deep learning architectures work well with large datasets, yet only 50 images per class were considered. The variations in the virus's spread or occurrence may not be captured in such a limited number of images. |
| GitHub (Dr. Joseph Cohen) and Kaggle (X-ray images of pneumonia) | ResNet50 + SVM | Accuracy = 95.38%, FPR = 95.52%, F1-score = 91.41%, Kappa = 90.76% | The model combines ResNet50 with a Support Vector Machine (SVM). The accuracy obtained is commendable, but the model was built on very few samples. |
| (a) GitHub repository from Joseph Cohen. (b) Data collected by researchers from Qatar University (Qatar) and the University of Dhaka (Bangladesh), with collaborating medical doctors from Pakistan and Malaysia | Deep features and fractional-order marine predators algorithm | Accuracy = 98.7%, F-score = 99.6% | The method shows very promising results using deep features extracted from an Inception model, with the decision provided by a tree-based classifier. Its drawbacks are the differing environments used for feature extraction and classification, and the very small number of test images; results may vary if a larger dataset is fed to the model. |
| Private dataset of COVID-19 and Influenza-A pneumonia | ResNet and location-based attention | Sensitivity = 98.2%, Specificity = 92.2%, AUC = 0.996 | The deep learning algorithms provide adequate results even with few data samples. |
| (a) Japanese Society of Radiological Technology (JSRT), (b) chest posteroanterior (PA) radiographs collected from 14 institutions, including normal and lung-nodule cases, (c) SCR database, (d) U.S. National Library of Medicine (USNLM) Montgomery County (MC) dataset | Patch-based CNN | Accuracy = 93.3% | The authors address the issue of training deep neural networks on a limited dataset by collecting thoracic lung and chest radiographs from multiple sources. The limitation of this method lies in the precision, recall, and accuracy achieved. |
| Japanese Society of Radiological Technology (JSRT), Dr. Joseph Cohen GitHub repository, and SARS images | Decompose, Transfer, and Compose (DeTraC) | Accuracy = 95.12%, Specificity = 91.87%, Sensitivity = 97.91% | The model achieves good performance; the limited-data issue is handled with data augmentation. However, augmenting X-ray images may not be a proper solution for scarce data, as the location of the virus's spread may not be preserved correctly. To overcome this problem, only frontal chest X-ray images are selected for further processing in our work. |
| (a) RSNA Pneumonia Detection Challenge (Radiological Society of North America), (b) GitHub (Dr. Joseph Cohen) and Kaggle (X-ray images of pneumonia) | COVID-MobileXpert | Accuracy = 93.5% | The model takes a noisy X-ray snapshot as input so that quick screening for COVID-19 can be performed. A DenseNet-121 architecture is employed for pre-training and fine-tuning. For on-device COVID-19 screening, lightweight CNNs such as MobileNetV2, SqueezeNet, and ShuffleNetV2 are used. |
| 349 COVID-19-positive CT images from 216 patients and 397 CT images of non-COVID patients | Data augmentation using stationary wavelets, pre-trained CNN models, abnormality localization | Testing accuracy = 99.4% | Abnormality localization is implemented along with COVID-19 detection. The results obtained are promising, and the CT images provide better visibility than X-ray images. |
Number of images used to train, validate, and test the models.
| Dataset | Normal | COVID | Others |
|---|---|---|---|
| Total | 6000 | 5634 | 5000 |
| Training | 4200 | 3944 | 3500 |
| Validation | 1200 | 1127 | 1000 |
| Testing | 600 | 563 | 500 |
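The counts in the table above correspond to roughly a 70% / 20% / 10% train / validation / test partition per class (the exact splitting procedure is not restated here). A small sanity check of those counts:

```python
# Per-class image counts, taken directly from the table above.
totals     = {"Normal": 6000, "COVID": 5634, "Others": 5000}
training   = {"Normal": 4200, "COVID": 3944, "Others": 3500}
validation = {"Normal": 1200, "COVID": 1127, "Others": 1000}
testing    = {"Normal":  600, "COVID":  563, "Others":  500}

for cls in totals:
    # The three splits partition each class exactly.
    assert training[cls] + validation[cls] + testing[cls] == totals[cls]

# Training fraction per class, each ~0.70.
train_frac = {cls: training[cls] / totals[cls] for cls in totals}
```

The COVID class (5634 images) does not divide into exact round fractions, which is why its split counts differ by a few images from a strict 70/20/10.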
Fig. 1 Architecture of VGG16 model.
Fig. 2 Diagrammatic representation of DenseNet architecture.
Fig. 3 Diagrammatic representation of a module in Xception.
Fig. 4 Diagrammatic representation of a NASNet architecture.
Fig. 5 Diagrammatic representation of an EfficientNet architecture.
Fig. 6 Illustration of X-ray images region selection from raw input images.
Parameters used for training the models.
| Parameter | VGG16 | DenseNet121 | Xception | NASNet | EfficientNet |
|---|---|---|---|---|---|
| Batch size | 32 | 32 | 32 | 32 | 32 |
| Image dimension | 512 × 512 | 512 × 512 | 512 × 512 | 512 × 512 | 512 × 512 |
| Optimizer | Adam | Stochastic Gradient Descent | Adam | Adam | Adam |
| Learning rate | 1e−4 | 1e−4 | 1e−4 | 1e−4 | 1e−3 |
| Decay rate | 1e−5 | 1e−5 | 1e−5 | 1e−5 | 1e−4 |
| Activation function | Softmax | Softmax | Softmax | Softmax | Softmax |
| Loss function | Weighted binary cross entropy | Weighted binary cross entropy | Weighted binary cross entropy | Categorical cross entropy | Categorical cross entropy |
| Training accuracy | 80.32% | 92.98% | 90.84% | 89.99% | 97.17% |
| Testing accuracy | 79.01% | 89.96% | 88.03% | 85.03% | 93.48% |
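The table lists an initial learning rate and a decay rate for each model but not the schedule's functional form; a common convention (an assumption here) is Keras-style time-based decay, where the learning rate at update step t is lr0 / (1 + decay · t). A minimal sketch:

```python
def decayed_lr(lr0, decay, step):
    """Keras-style time-based decay: lr_t = lr0 / (1 + decay * t).

    NOTE: the schedule form is an assumption; the paper's table gives
    only the initial learning rate and the decay rate."""
    return lr0 / (1.0 + decay * step)

# EfficientNet settings from the table: learning rate 1e-3, decay 1e-4.
lr_start = decayed_lr(1e-3, 1e-4, 0)       # initial rate, unchanged
lr_late  = decayed_lr(1e-3, 1e-4, 10_000)  # halved after 10k update steps
```

With decay = 1e-4, the EfficientNet learning rate halves every 10,000 update steps; the other models' decay of 1e-5 gives a ten-times slower decline.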
Fig. 7 Heatmap extracted from EfficientNet model.
Fig. 8 Confusion matrices obtained for all the models.
Class-wise Precision, Recall and F1 Score for all the models.
| Performance measures | Class | VGG16 | DenseNet121 | Xception | NASNet | EfficientNet |
|---|---|---|---|---|---|---|
| Precision | COVID | 79% | 90% | 88% | 85% | 93% |
| Precision | Normal | 79% | 90% | 88% | 85% | 94% |
| Precision | Other | 79% | 90% | 88% | 85% | 93% |
| Recall | COVID | 76% | 90% | 88% | 87% | 93% |
| Recall | Normal | 82% | 89% | 87% | 81% | 93% |
| Recall | Other | 79% | 90% | 89% | 88% | 91% |
| F1 Score | COVID | 0.78 | 0.90 | 0.88 | 0.86 | 0.93 |
| F1 Score | Normal | 0.80 | 0.90 | 0.88 | 0.83 | 0.94 |
| F1 Score | Other | 0.79 | 0.90 | 0.89 | 0.86 | 0.93 |
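The class-wise figures above are derived from the models' confusion matrices (Fig. 8). A small sketch of how precision, recall, and F1 are computed per class from such a matrix; the 3×3 counts below are illustrative, not the paper's actual matrices.

```python
def per_class_metrics(cm, classes):
    """Precision, recall, and F1 per class from a square confusion matrix,
    where cm[i][j] = number of true-class-i samples predicted as class j."""
    metrics = {}
    for k, name in enumerate(classes):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(len(classes))) - tp  # predicted k, wrongly
        fn = sum(cm[k]) - tp                                  # true k, missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[name] = (precision, recall, f1)
    return metrics

# Hypothetical confusion matrix (rows = true class, columns = predicted class).
cm = [[520, 25, 18],   # COVID
      [20, 560, 20],   # Normal
      [15, 25, 460]]   # Other
scores = per_class_metrics(cm, ["COVID", "Normal", "Other"])
```

Precision penalizes false alarms for a class, recall penalizes misses, and F1 is their harmonic mean; this is why the tables report all three per class rather than accuracy alone.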
Comparison of previous works with the proposed work.
| Number of cases | Method/s used | Accuracy |
|---|---|---|
| 25 COVID-19(+) | COVIDX-Net | 90.0% |
| 100 COVID-19(+) | 18-layer residual CNN | 96% |
| (a) 125 COVID-19(+), (b) 125 COVID-19(+) | DarkCovidNet | (a) 98.08%, (b) 87.02% |
| 8851 Normal | Patch-based CNN | 88.9% |
| 80 Normal | DeTraC | 95.12% |
| 795 COVID-19(+) | Proposed method (EfficientNet) | 93.48% |
Fig. 9 Illustration of misclassified X-ray images.
Fig. 10 Epochs versus loss graph for all the models.
Fig. 11 Epochs versus accuracy graph for all the models.