| Literature DB >> 34764547 |
Sakshi Ahuja1, Bijaya Ketan Panigrahi1, Nilanjan Dey2, Venkatesan Rajinikanth3, Tapan Kumar Gandhi1.
Abstract
Lung abnormality is one of the common diseases in humans of all age groups, and it may arise due to various causes. Recently, lung infection due to SARS-CoV-2 has affected a large human community globally, and because of its rapid spread, the World Health Organisation (WHO) declared it a pandemic. COVID-19 has adverse effects on the respiratory system, and the severity of infection can be detected using a chosen imaging modality. In the proposed research work, COVID-19 is detected using transfer learning on CT scan images decomposed to three levels using the stationary wavelet transform. A three-phase detection model is proposed to improve the detection accuracy: Phase 1, data augmentation using stationary wavelets; Phase 2, COVID-19 detection using a pre-trained CNN model; and Phase 3, abnormality localization in CT scan images. This work considers the well-known pre-trained architectures ResNet18, ResNet50, ResNet101, and SqueezeNet for the experimental evaluation. Here, 70% of the images are used to train the network and 30% to validate it. The performance of the considered architectures is evaluated by computing the common performance measures. The experimental evaluation confirms that the ResNet18 pre-trained transfer-learning-based model offered better classification accuracy (training = 99.82%, validation = 97.32%, and testing = 99.4%) on the considered image dataset compared with the alternatives. © Springer Science+Business Media, LLC, part of Springer Nature 2020.
Keywords: COVID-19; CT scan; ResNet18; Transfer learning; Wavelets
Year: 2020 PMID: 34764547 PMCID: PMC7440966 DOI: 10.1007/s10489-020-01826-w
Source DB: PubMed Journal: Appl Intell (Dordr) ISSN: 0924-669X Impact factor: 5.019
Summary of state-of-the-art COVID-19 detection techniques
| Sr. No. | Technique | Dataset | Performance |
|---|---|---|---|
| 1 | Inception-Net | D1: 73 COVID-19 positive and 340 healthy images from the Shenzhen, China collection. D2: 73 COVID-19 positive and 80 healthy cases from Montgomery County, USA. D3: 73 COVID-19 positive and 1583 healthy cases from the pneumonia collection. | The Inception-Net model achieved an accuracy of 99.96% and an AUC of 1 in classifying COVID-19 positive cases from combined pneumonia and normal patients, and an accuracy of 99.92% and an AUC of 0.99 in classifying COVID-19 positive cases from combined pneumonia, tuberculosis, and normal X-rays. |
| 2 | Darknet | 127 COVID-19 positive cases. | Achieved an accuracy of 98.08% for binary classification (COVID vs. non-COVID) and 87.02% for multi-class classification (COVID, pneumonia, and no findings). |
| 3 | Generative Adversarial Networks (GAN) and transfer-learning models, namely AlexNet, GoogleNet, ResNet18, and SqueezeNet | X-ray: 5863 images divided into two classes, normal and pneumonia. | ResNet18 achieved precision, recall, and F1-score of 98.97%. |
| 4 | Deep learning techniques (ResNet50 and VGG16) | 102 COVID-19 positive cases and pneumonia cases. | An accuracy of 89.2% and an AUC of 0.95 is obtained. |
| 5 | Convolutional neural networks (ResNet18, ResNet50, SqueezeNet, and DenseNet-121) | COVID-Xray-5k dataset. | Obtained a sensitivity of 97.5% and a specificity of 90%. |
| 6 | Transfer-learning-based model | Two datasets: a) 224 images of COVID-19 disease, 700 images of common bacterial pneumonia, and 504 images of normal patients; b) 224 images of COVID-19 disease, 714 images of bacterial pneumonia, and 504 images of normal patients. | An accuracy, sensitivity, and specificity of 96.78%, 98.66%, and 96.46%, respectively, is obtained. |
| 7 | Deep-learning-based model (composed of three components: a) backbone network, b) classification head, and c) anomaly detection head) | X-ray: 100 chest images of COVID-19 positive cases [18] and 1431 images of pneumonia. | A sensitivity of 96.00% and a specificity of 70.65% is achieved. |
| 8 | ResNet50, InceptionV3, and Inception-ResNetV2 | 50 images each of COVID-19 positive cases [18] and normal cases. | An accuracy of 98% is obtained with ResNet50, 97% with InceptionV3, and 87% with Inception-ResNetV2. |
| 9 | Otsu-based method | 90 coronal-view and 20 axial-view slices of the lungs. | COVID-19 pneumonia infection and its rate are detected in CT scan images of both coronal and axial views. |
| 10 | Transfer-learning-based DenseNet model | CT: 349 COVID-19 positive and 463 non-COVID-19 CT samples. | Obtained an accuracy of 84.7% and an F1-score of 85.3% on binary classification into COVID and non-COVID CT scans. |
| 11 | Five different CNN models, namely AlexNet, VGG16, VGG19, GoogleNet, and ResNet50 | 349 COVID-19 positive and 397 non-COVID CT scans. | ResNet50 is the best-performing model, achieving 82.91% testing accuracy. |
| 12 | 2D and 3D deep learning models | Datasets of Chinese control and infected patients; creation of a heat map or a 3D volume display of COVID-19 cases along with a Corona score. | A sensitivity of 98.2%, a specificity of 92.2%, and an AUC of 0.996 are achieved for binary classification (COVID vs. non-COVID). |
| 13 | Deep-learning-based Multi-Objective Differential Evolution (MODE) and CNN | Binary classification of a person as COVID-19 affected or not. | The proposed model outperforms other CNN models in F-measure, sensitivity, specificity, Kappa statistics, and accuracy by 2.09%, 1.82%, 1.68%, 1.92%, and 1.97%, respectively. |
| 14 | Joint Classification and Segmentation (JCS) | Collected a large-scale COVID-19 dataset with 144,167 images of 400 COVID-19 patients and 350 non-COVID cases. | The JCS system provided an average sensitivity of 95% and a specificity of 93% for classification, and a Dice score of 78.3% on the segmentation test set. |
| 15 | 3D CNN model (residual networks) | CT: 528/90 (training and validation data: COVID-19 = 189, influenza-A viral pneumonia = 194, and normal = 145; testing data: COVID-19 = 30, influenza-A viral pneumonia = 30, and normal = 30). | AUC of 0.996, 98.2% sensitivity, and 92.2% specificity. |
| 16 | Random forest (RF) model | 176 chest CT images of COVID-19 positive cases. | For COVID-19 severity detection, the accuracy achieved is 87.5%, the AUC is 0.91, and the true positive rate (TPR) is 93.3%. |
Fig. 1 Schematic diagram of the proposed methodology of COVID-19 detection
Fig. 2 Sample lung CT scan images of a COVID-19 and b non-COVID patients
Brief details of the input dataset used for the proposed work
| Category | Training data without augmentation | Training data with augmentation | Validation data | Testing data |
|---|---|---|---|---|
| COVID | 178 | 1602 | 76 | 95 |
| Non-COVID | 228 | 2052 | 97 | 72 |
Fig. 3 Generalized block diagram of the data augmentation using wavelet decomposition up to 3 levels
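The stationary wavelet transform (SWT) used for augmentation is undecimated, so every subband stays at the full image size and can be fed to the network as an additional training sample. As an illustrative sketch only (the paper decomposes to three levels, and in practice a library routine such as PyWavelets' `swt2` would be used), the following NumPy-only function computes one level of a stationary 2-D Haar transform with circular boundary handling:

```python
import numpy as np

def swt2_haar(img):
    """One level of an undecimated (stationary) 2-D Haar transform.

    Returns the four subbands (LL, LH, HL, HH), each the same size as
    the input image, so each subband can serve as an extra training
    sample. Boundaries are handled circularly via np.roll for brevity.
    """
    img = np.asarray(img, dtype=float)
    shift_c = np.roll(img, -1, axis=1)      # neighbour to the right
    lo_r = (img + shift_c) / 2.0            # row low-pass
    hi_r = (img - shift_c) / 2.0            # row high-pass

    def col_pair(x):
        shift_r = np.roll(x, -1, axis=0)    # neighbour below
        return (x + shift_r) / 2.0, (x - shift_r) / 2.0

    LL, LH = col_pair(lo_r)                 # approximation + horizontal detail
    HL, HH = col_pair(hi_r)                 # vertical + diagonal detail
    return LL, LH, HL, HH
```

Because these low- and high-pass filters sum to the identity, the four subbands add back to the original image exactly, which gives a quick sanity check of the decomposition.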
Fig. 4 Augmented training data after rotation, shear, and translation operations
Fig. 5 Descriptive block diagram of the ResNet18 architecture
Fig. 6 Skip connection scenario in the residual network
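The skip connection of the residual network can be summarized as y = ReLU(F(x) + x): each block learns only a residual correction F(x) to its input. A minimal NumPy sketch (with illustrative weights and shapes, not the actual ResNet18 layers):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity skip connection as in ResNet: the two-layer branch
    computes a residual F(x), and the input is added back before the
    final activation, so y = ReLU(F(x) + x)."""
    fx = relu(x @ w1) @ w2        # residual branch F(x)
    return relu(fx + x)           # skip connection adds the input

# With zero branch weights the block reduces to an identity mapping
# (for non-negative inputs), which is what makes deep stacks trainable.
x = np.array([1.0, 2.0, 3.0])
w = np.zeros((3, 3))
y = residual_block(x, w, w)
```

This degenerate-to-identity behaviour is the usual intuition for why residual networks such as ResNet18/50/101 avoid the degradation problem of very deep plain CNNs.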
Brief details of the hyperparameters used for the pre-trained ResNet18 model
| Sr. No. | Title | Value |
|---|---|---|
| A. Optimization parameters | | |
| 1 | Learning rate | 0.0003 |
| 2 | Weight learn rate factor | 10 |
| 3 | Bias learn rate factor | 10 |
| 4 | Optimizer | ‘sgdm’ |
| 5 | Mini-batch size | 64 |
| 6 | Epochs | 50 |
| 7 | Validation frequency | 3 |
| B. Model-specific parameters | | |
| 8 | Number of layers | 71 |
| 9 | Size of the input image | 224 × 224 × 3 |
| 10 | New “Fc” layer | Weights: 2 × 512; Bias: 2 × 1; Activation: 1 × 1 × 2 |
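The ‘sgdm’ optimizer in the table is stochastic gradient descent with momentum. A minimal sketch of one update step, using the table's learning rate of 0.0003 and an assumed momentum of 0.9 (the usual default for this solver; the paper does not state the value):

```python
import numpy as np

def sgdm_step(w, grad, v, lr=3e-4, momentum=0.9):
    """One SGD-with-momentum ('sgdm') update.

    The velocity v accumulates an exponentially decaying sum of past
    gradients, which damps oscillations across mini-batches."""
    v = momentum * v - lr * grad   # update velocity
    return w + v, v                # apply velocity to the weights

# Illustrative single step on a 2-parameter "model".
w = np.array([1.0, -0.5])
v = np.zeros_like(w)
grad = np.array([0.2, -0.1])
w, v = sgdm_step(w, grad, v)
```

On the first step the velocity is just -lr × grad; momentum only takes effect from the second mini-batch onward.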
Fig. 7 Convergence graph of accuracy and loss using the ResNet18 model up to epoch 50
Comparison of training, validation, and testing accuracy of the ResNet18 model on the used dataset with and without augmentation
| Dataset | Training (%) | Validation (%) | Testing (%) |
|---|---|---|---|
| Without augmentation | 96.36 | 95.24 | 97.15 |
| With augmentation (SWT+Rotation+Translation+Shear) | 99.82 | 97.32 | 99.4 |
Fig. 8 ROC characteristics curve of ResNet18
Performance parameters of transfer learning models on testing data
| Architecture | TP | TN | FP | FN | AUC | Precision (%) | NPV (%) | Sensitivity (%) | Specificity (%) | F1-score (%) | Accuracy (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ResNet18 | 95 | 71 | 1 | 0 | 0.996 | 99.0 | 100 | 100 | 98.6 | 99.5 | 99.4 |
| ResNet50 | 95 | 70 | 2 | 0 | 0.992 | 97.9 | 100 | 100 | 97.2 | 98.9 | 98.8 |
| ResNet101 | 93 | 69 | 3 | 2 | 0.993 | 96.9 | 97.2 | 97.9 | 95.8 | 97.4 | 97.0 |
| SqueezeNet | 91 | 68 | 4 | 4 | 0.995 | 95.8 | 94.4 | 95.8 | 94.4 | 95.7 | 95.2 |
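All of the table's percentage columns follow directly from the confusion-matrix counts (TP, TN, FP, FN). A small self-contained sketch that reproduces the ResNet18 row:

```python
def metrics(tp, tn, fp, fn):
    """Standard classification measures from confusion-matrix counts,
    returned as percentages."""
    precision = 100.0 * tp / (tp + fp)
    npv = 100.0 * tn / (tn + fn)                   # negative predictive value
    sensitivity = 100.0 * tp / (tp + fn)           # recall / true positive rate
    specificity = 100.0 * tn / (tn + fp)           # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    return precision, npv, sensitivity, specificity, f1, accuracy

# ResNet18 row of the table: TP = 95, TN = 71, FP = 1, FN = 0
vals = metrics(95, 71, 1, 0)
```

Rounding each value to one decimal place gives 99.0, 100, 100, 98.6, 99.5, and 99.4, matching the ResNet18 entries above.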
Fig. 9 Glyph plot for the performance evaluation of pre-trained transfer learning models
Comparison of the proposed methodology with state-of-the-art techniques on testing data
| Techniques | No. of Images (Training+Validation/Testing) | Performance |
|---|---|---|
| Self-supervised learning with transfer learning | 349 COVID-19 and 397 non-COVID CT scans | An accuracy of 86%, an AUC of 91%, and an F1-score of 85% is achieved with DenseNet169 in an unfrozen state. |
| Multi-task learning approach | 349 COVID-19 positive and 463 non-COVID-19 CT samples | For binary classification with the JCS COVID-seg combined dataset, an accuracy of 83%, an F1-score of 85%, and an AUC of 95% is obtained. |
| Five different CNN models, namely AlexNet, VGG16, VGG19, GoogleNet, and ResNet50 | 349 COVID-19 and 397 non-COVID CT scans | ResNet50 is the best-performing model, achieving 82.91% testing accuracy. |
| Proposed methodology: a) augmentation: SWT + rotation + translation + shear; b) transfer learning: ResNet18, ResNet50, ResNet101, SqueezeNet | COVID-CT: 349 CT scans and normal: 397 CT scans | Two-class: the best-performing model is ResNet18, with training accuracy of 99.82%, validation accuracy of 97.32%, and testing accuracy of 99.4%; also, NPV of 100%, sensitivity of 100%, specificity of 98.6%, and F1-score of 99.5%. |