| Literature DB >> 34220289 |
Vipul Kumar Singh, Maheshkumar H Kolekar.
Abstract
The novel coronavirus outbreak has spread worldwide, causing respiratory infections in humans and leading to the global COVID-19 pandemic. According to the World Health Organization, the only way to curb this spread is to increase testing and isolate the infected. Meanwhile, the clinical testing currently in use is not easily accessible and takes considerable time to return results. In this scenario, remote diagnostic systems could become a handy solution. Some existing studies leverage deep learning to provide an effective alternative to clinical diagnostic techniques. However, such complex networks are difficult to use in resource-constrained environments. To address this problem, we developed a fine-tuned deep learning model inspired by the architecture of the MobileNet V2 model. Moreover, the developed model is further optimized in terms of size and complexity to make it compatible with mobile and edge devices. Extensive experimentation on a real-world dataset of 2482 chest Computerized Tomography scan images strongly suggests the superiority of the developed fine-tuned model in terms of high accuracy and faster diagnosis time. The proposed model achieved a classification accuracy of 96.40%, with an approximately ten times shorter response time than prevailing deep learning models. Further, the results of McNemar's statistical test also confirm the efficacy of the proposed model.
Keywords: COVID-19; Chest CT scan; Deep Learning; Diagnosis; Edge Computing; MobileNet V2
Year: 2021 PMID: 34220289 PMCID: PMC8236565 DOI: 10.1007/s11042-021-11158-7
Source DB: PubMed Journal: Multimed Tools Appl ISSN: 1380-7501 Impact factor: 2.577
Fig. 1Representation of newly registered cases of COVID-19 infection across India from April 2020 to August 2020
State-of-the-art studies focusing on COVID-19 diagnosis
| Study | Dataset | Methodology | Results/Remarks |
|---|---|---|---|
| Chen et al. [ | CT scan: 70 COVID-19 and 66 Non-COVID | Used 41 radiological and 26 clinical features | Area under curve: 0.986. Study is performed on limited data. |
| Song et al. [ | CT scan: 777 COVID-19 and 505 bacterial pneumonia | Feature pyramid network on top of ResNet 50 transfer learning model (DRE-Net) | Accuracy: 86%; Area under curve: 0.96. Accuracy score needs to be addressed. |
| He et al. [ | CT scan: 349 COVID-19 and 397 Non-COVID | Proposed a self transfer learning model using DenseNet 169 | Accuracy: 86%; F-1 score: 0.85; Area under curve: 0.94. Accuracy score is not optimal. |
| Zhang et al. [ | Two in-house X-ray datasets. X-VIRAL: 5977 viral and 18774 healthy; X-COVID: 106 COVID-19 and 107 Non-COVID | Confidence-aware anomaly detection model (CAAD) | Accuracy: 72.77%; Sensitivity: 71.70%; Area under curve: 0.836. Performance on the X-COVID dataset needs to be improved. |
| Hemdan et al. [ | X-ray: 25 COVID-19 and 25 Non-COVID | VGG 19 and DenseNet 201 based transfer learning model | Accuracy: 90%; Sensitivity: 90%; Time: 4 s. Failed to address the diagnosis time. |
| Ismael et al. [ | X-ray: 180 COVID-19 and 201 Non-COVID | Deep transfer learning features extracted from ResNet 50 with a linear SVM classifier | Accuracy: 94.7%; Sensitivity: 91%; Area under curve: 0.99 |
| Das et al. [ | X-ray: 219 COVID-19, 1340 normal and 1345 pneumonia | VGG 16 transfer learning model | Accuracy: 97.67%; Sensitivity: 96.54%. Failed to minimize the computational cost. |
| Brunese et al. [ | X-ray: 3003 pulmonary diseases (250 COVID-19) and 3520 healthy | Two VGG 16 based transfer learning models: the first discriminates healthy cases from pulmonary diseases, the second discriminates COVID-19 cases from other pulmonary diseases | Accuracy: 97% (avg.); Sensitivity: 91% (avg.); Time: 2.5 s. High computational cost. |
| Goldstein et al. [ | X-ray: 1191 COVID-19 and 1135 Non-COVID | X-ray images segmented using a U-Net model and an ensemble of four transfer learning models (ResNet 50, ResNet 152, CheXpert and VGG 16) | Accuracy: 90.5%; Sensitivity: 88.8%. Sensitivity score needs to be addressed. |
| Mishra et al. [ | CT scan: 360 COVID-19 and 397 Non-COVID | Ensemble of VGG 16, Inception V3, ResNet 50, DenseNet 121 and DenseNet 201 | Accuracy: 88.34%; Sensitivity: 88%. Ensemble of transfer learning models is not feasible to deploy on low-power edge devices. |
| Loey et al. [ | CT scan: 345 COVID-19 and 397 Non-COVID | Training and validation data augmented using a Conditional Generative Adversarial Network (CGAN) with a ResNet 50 transfer learning model | Accuracy: 82.91%; Sensitivity: 77.66%. False negative rate is not optimal. |
Fig. 2Radar plot comparing the edge and cloud computing technologies in terms of privacy, resources, latency, storage, and reliability metrics
Fig. 3Typical connected smart healthcare system architecture, which includes IoT devices, edge layer, and cloud layer
Fig. 4Knowledge transfer process in transfer learning
Fig. 5The proposed transfer learning framework for screening of COVID-19 infection using chest CT scan images
Fig. 6Concept of Depthwise separable convolutions
Fig. 7Basic building block consisting of an expansion layer, a depthwise convolution, and a projection layer in MobileNet V2 architecture
Fig. 8The proposed MobileNet V2 based fine-tuned model. The figure depicts the transfer of knowledge learnt from the ImageNet dataset for the application of COVID-19 diagnosis
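The parameter savings behind the depthwise separable convolutions of Figs. 6 and 7 can be checked with a quick count. The sketch below is illustrative only (bias terms omitted; kernel size `k`, input channels `c_in`, and output channels `c_out` are assumed symbols, not values from the paper):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard convolution mixes space and channels in a single step:
    # one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution mapping c_in -> c_out channels.
    return k * k * c_in + c_in * c_out

# Example: 3x3 kernels, 32 -> 64 channels
std = standard_conv_params(3, 32, 64)        # 18432 parameters
sep = depthwise_separable_params(3, 32, 64)  # 2336 parameters
print(std, sep, round(std / sep, 1))         # roughly an 8x reduction
```

This factorization is why MobileNet-family backbones fit the disk-size and latency budgets of the edge devices targeted here.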
Fig. 9Collaborative edge cloud computing framework
Fig. 10Few sample images of chest CT scan from the dataset
Fig. 11Representation of confusion matrices of each transfer learning model for the prediction of COVID-19 infection on the test data
Fig. 12Comparison of overall performance in terms of accuracy, precision, F-1 score, and MCC values across VGG 16, VGG 19, DenseNet 201, and proposed model
Fig. 13Comparison of sensitivity and specificity scores of each transfer learning model
Fig. 14Representation of performance in terms of ROC curve for all developed transfer learning models. The plot is zoomed from the top left corner for better visualization
Fig. 15Analysis of the time complexity for each transfer learning model in terms of average time taken for classification of test data images
Fig. 16Analysis of the hardware complexity for each transfer learning model in terms of disk size taken by the trained model
Fig. 17Few sample inferences obtained from the model using Python Interpreter
Fig. 18Representation of contingency matrix developed in McNemar's statistical test
Results of McNemar’s statistical test on the test data
| Comparison | Both correct | Proposed only correct | Baseline only correct | Both incorrect | χ² | p-value | Remarks |
|---|---|---|---|---|---|---|---|
| Proposed Model vs. VGG 16 | 459 | 23 | 14 | 4 | 1.73 | 0.188 | Failed to reject the null hypothesis |
| Proposed Model vs. VGG 19 | 452 | 30 | 15 | 3 | 4.35 | 0.036 | Reject the null hypothesis |
| Proposed Model vs. DenseNet 201 | 442 | 40 | 11 | 7 | 15.37 | 8.83 × 10⁻⁵ | Reject the null hypothesis |
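The χ² and p-values above can be reproduced from the discordant counts alone. A minimal sketch using McNemar's test with continuity correction (the `mcnemar` helper is our own, not from the paper; the survival function of a χ² variable with 1 degree of freedom is erfc(√(x/2))):

```python
import math

def mcnemar(b, c):
    """McNemar's test with continuity correction.

    b: cases one model classified correctly and the other did not;
    c: the reverse. Returns (chi-square statistic, p-value, 1 d.f.).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # P(X > chi2) for chi-square with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Discordant pairs from the table above (Proposed Model vs. VGG 16)
chi2, p = mcnemar(23, 14)
print(round(chi2, 2), round(p, 3))  # 1.73 0.188
```

Plugging in the other rows (30, 15) and (40, 11) reproduces the reported 4.35 and 15.37, so only the two discordant cells drive the test.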
Comparison of proposed model with state-of-the-art methodologies
| Study | Images | Model | Accuracy | Sensitivity | Avg. time (s) | Parameters |
|---|---|---|---|---|---|---|
| Azemin et al. [ | X-ray | ResNet 101 | 0.72 | 0.77 | 0.132 | 44.71M |
| Jaiswal et al. [ | CT | DenseNet 201 | 0.96 | 0.96 | – | 20.24M |
| Brunese et al. [ | X-ray | VGG 16 | 0.97 | 0.91 | 2.5 | 138.35M |
| Song et al. [ | CT | DRE-Net | 0.86 | 0.96 | – | – |
| Mishra et al. [ | CT | Fusion | 0.88 | 0.88 | 0.136 | – |
| Gianchandani et al. [ | X-ray | Ensemble | 0.96 | 0.96 | – | – |
| Hemdan et al. [ | X-ray | VGG 19 | 0.90 | 0.90 | 4 | 143.67M |
| Pathak et al. [ | CT | ResNet 50 | 0.93 | 0.95 | – | 25.64M |