Anubhav Sharma, Karamjeet Singh, Deepika Koundal.
Abstract
Coronavirus disease (COVID-19) is a viral infection caused by a novel coronavirus (CoV) first identified in the city of Wuhan, China, in early December 2019. It affects the human respiratory system, causing respiratory infections with symptoms ranging from mild to severe, such as fever, cough, and weakness, and can lead to other serious diseases; it has resulted in millions of deaths to date. An accurate diagnosis for such diseases is therefore urgently needed by the current healthcare system. In this paper, a state-of-the-art deep learning method is described. We propose COVDC-Net, a deep convolutional network-based classification method capable of identifying SARS-CoV-2-infected patients amongst healthy and/or pneumonia patients from their chest X-ray images. The proposed method uses two modified pre-trained models (on ImageNet), namely MobileNetV2 and VGG16, without their classifier layers, and fuses the two models using the confidence fusion method to achieve better classification accuracy on the two currently publicly available datasets. Exhaustive experiments show that the proposed method achieved an overall classification accuracy of 96.48% for the 3-class (COVID-19, Normal, and Pneumonia) classification task. For 4-class classification (COVID-19, Normal, Pneumonia Viral, and Pneumonia Bacterial), the COVDC-Net method delivered 90.22% accuracy. The experimental results demonstrate that the proposed COVDC-Net method shows better overall classification accuracy than existing deep learning methods proposed for the same task during the current COVID-19 pandemic.
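The confidence fusion described in the abstract can be sketched in a few lines. The record does not spell out the exact fusion rule, so this illustrative sketch assumes a common variant: average the two networks' softmax confidence vectors and predict the class with the highest fused score. All function and variable names here are hypothetical, not from the paper.

```python
import numpy as np

def confidence_fusion(probs_a, probs_b):
    """Fuse two models' softmax outputs (shape: [n_samples, n_classes])
    by averaging per-class confidences, then predict the argmax class."""
    fused = (np.asarray(probs_a) + np.asarray(probs_b)) / 2.0
    return fused.argmax(axis=1), fused

# Toy outputs for 2 samples and 3 classes (COVID-19, Normal, Pneumonia)
p_mobilenet = np.array([[0.7, 0.2, 0.1], [0.4, 0.5, 0.1]])
p_vgg16     = np.array([[0.6, 0.3, 0.1], [0.6, 0.3, 0.1]])
labels, fused = confidence_fusion(p_mobilenet, p_vgg16)
# For the second sample the two models disagree; the fused confidence
# decides in favour of the class with the larger combined score.
```

A weighted average (giving the stronger model a larger coefficient) is an equally plausible reading of "confidence fusion"; the averaging above is the simplest instance.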
Keywords: COVID-19; Chest X-ray; Confidence fusion; Deep learning; Transfer learning
Year: 2022 PMID: 35530169 PMCID: PMC9057938 DOI: 10.1016/j.bspc.2022.103778
Source DB: PubMed Journal: Biomed Signal Process Control ISSN: 1746-8094 Impact factor: 5.076
3-class fold-wise analysis.
| Fold | Method | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| Fold 1 | MobileNetV2 | 0.9525 | 0.9502 | 0.9512 | 0.9484 |
| | VGG16 | 0.9527 | 0.9522 | 0.9523 | 0.9509 |
| | Fusion | 0.9677 | 0.9663 | 0.9668 | 0.9656 |
| Fold 2 | MobileNetV2 | 0.9476 | 0.9501 | 0.9487 | 0.9476 |
| | VGG16 | 0.9579 | 0.9570 | 0.9572 | 0.9558 |
| | Fusion | 0.9658 | 0.9676 | 0.9666 | 0.9656 |
| Fold 3 | MobileNetV2 | 0.9461 | 0.9454 | 0.9458 | 0.9443 |
| | VGG16 | 0.9445 | 0.9412 | 0.9427 | 0.9410 |
| | Fusion | 0.9611 | 0.9590 | 0.9600 | 0.9590 |
| Fold 4 | MobileNetV2 | 0.9379 | 0.9388 | 0.9383 | 0.9369 |
| | VGG16 | 0.9537 | 0.9534 | 0.9535 | 0.9525 |
| | Fusion | 0.9690 | 0.9699 | 0.9694 | 0.9689 |
| Average | MobileNetV2 | 0.9460 | 0.9461 | 0.9460 | 0.9443 |
| | VGG16 | 0.9522 | 0.95095 | 0.9514 | 0.9500 |
| | Fusion | 0.9659 | 0.9657 | 0.9657 | 0.9648 |
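The four per-fold metrics in the table above can be recomputed from a confusion matrix. A minimal sketch follows, assuming the columns are precision, recall, F1-score, and accuracy and that macro-averaging is used (the record does not state the averaging scheme); the function name and the toy confusion matrix are illustrative, not the paper's data.

```python
import numpy as np

def macro_metrics(cm):
    """Macro-averaged precision, recall, F1 and overall accuracy from a
    confusion matrix cm[true_class, predicted_class]."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # per predicted class (column sums)
    recall    = tp / cm.sum(axis=1)   # per true class (row sums)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return precision.mean(), recall.mean(), f1.mean(), accuracy

# Toy 3-class confusion matrix (COVID-19, Normal, Pneumonia)
cm = [[95, 3, 2],
      [4, 93, 3],
      [1, 2, 97]]
p, r, f, acc = macro_metrics(cm)  # acc = 285/300 = 0.95
```

Note that the macro F1 is the mean of per-class F1 scores, not the harmonic mean of the averaged precision and recall, which is why the F1 column need not be exactly derivable from the other two.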
Comparative analysis of Models.
| Fused models | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| MobileNetV2 + DenseNet | 0.9608 | 0.9618 | 0.9620 | 0.9619 |
| MobileNetV2 + VGG19 | 0.9584 | 0.9598 | 0.9598 | 0.9597 |
| MobileNetV2 + ResNet50 | 0.9479 | 0.9498 | 0.9499 | 0.9499 |
| VGG16 + DenseNet | 0.9627 | 0.9641 | 0.9642 | 0.9641 |
| VGG16 + VGG19 | 0.9574 | 0.9588 | 0.9597 | 0.9597 |
| VGG16 + ResNet50 | 0.9477 | 0.9505 | 0.9489 | 0.9495 |
| VGG19 + DenseNet | 0.9580 | 0.9598 | 0.9597 | 0.9596 |
| VGG19 + ResNet50 | 0.9277 | 0.9332 | 0.9305 | 0.9306 |
Fig. 1 Convolution operation with stride 1.
Fig. 2 Activation map output plots of layers in increasing depth for (a) MobileNetV2, (b) VGG16.
Dataset analysis.
| Dataset | Class | Number of samples |
|---|---|---|
| D1 | COVID-19 | 1784 |
| | Healthy | 1755 |
| | Pneumonia | 1345 |
| D2 | COVID-19 | 305 |
| | Healthy | 375 |
| | Pneumonia Bacterial | 379 |
| | Pneumonia Viral | 355 |
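The fold-wise results reported elsewhere in this record imply 4-fold cross-validation over data such as D1. A minimal sketch of a stratified split that keeps each fold's class balance close to the dataset's is shown below; the splitting protocol and all names are assumptions for illustration, not the paper's exact procedure.

```python
from collections import defaultdict

def stratified_folds(labels, k=4):
    """Assign each sample index to one of k folds, round-robin within
    each class, so every fold keeps roughly the same class balance."""
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Toy label list mimicking D1's class counts
labels = ["COVID-19"] * 1784 + ["Healthy"] * 1755 + ["Pneumonia"] * 1345
folds = stratified_folds(labels, k=4)
```

In practice one would shuffle indices within each class before assignment; `sklearn.model_selection.StratifiedKFold` provides the same behaviour off the shelf.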
Fig. 3 Sample of the X-ray images used in our experiments.
Fig. 4 Framework for the proposed methodology.
Fig. 5 Architecture and fusion representation.
Fig. 6 Training loss, validation loss, and validation accuracy curves obtained for fold 2 for (a) MobileNetV2, (b) VGG16.
Fig. 7 Confusion matrices for each fold for each model.
Class-wise analysis.
| Method | Class | Precision | Recall | F1-score |
|---|---|---|---|---|
| MobileNetV2 | COVID-19 | 0.9474 | 0.9395 | 0.9434 |
| | Normal | 0.9258 | 0.9316 | 0.9286 |
| | Pneumonia | 0.9648 | 0.9673 | 0.9659 |
| VGG16 | COVID-19 | 0.9584 | 0.9400 | 0.9491 |
| | Normal | 0.9252 | 0.9515 | 0.9382 |
| | Pneumonia | 0.9730 | 0.9614 | 0.9670 |
| Fusion | COVID-19 | 0.9711 | 0.9591 | 0.9650 |
| | Normal | 0.9495 | 0.9618 | 0.9556 |
| | Pneumonia | 0.9771 | 0.9762 | 0.9766 |
Analysis of Dataset-2.
| Classes | Method | Overall avg. accuracy |
|---|---|---|
| 4-class | MobileNetV2 | 0.8774 |
| | VGG16 | 0.8739 |
| | Fusion | 0.9022 |
| 3-class | MobileNetV2 | 0.9603 |
| | VGG16 | 0.9567 |
| | Fusion | 0.9702 |
Comparison of the proposed system with existing systems in terms of accuracy.
| Study | Classes | Modality | Model | Accuracy | Merits | Limitations |
|---|---|---|---|---|---|---|
| Shibly et al. [44] | 2-class | Chest X-ray | R-CNN | 97.36% | Faster R-CNN | Limited data |
| Wang | 2-class | Chest CT | DeCoVNet (UNet + 3D deep network) | 90.0% | Lightweight 3D CNN | Limited data |
| Ozturk et al. [18] | 2-class | Chest X-ray | DarkCovidNet | 98.08% | Heatmaps produced by the model can be evaluated by an expert radiologist | Limited data |
| | 3-class | | | 87.02% | | |
| Apostolopoulos et al. [17] | 3-class | Chest X-ray | VGG-19 | 93.48% | Multiple models used for testing | Limited number of evaluation metrics |
| | 3-class | | MobileNetV2 | 92.85% | | |
| Wang et al. [15] | 3-class | Chest X-ray | COVID-Net | 93.3% | Low architectural complexity | Dataset imbalance |
| Law and Lin [45] | 3-class | Chest X-ray | VGG-16 | 94% | Multiple models used | Cannot generalize results of data augmentation |
| Cengil and Cinar [46] | 3-class | Chest X-ray | AlexNet + EfficientNet-b0 + NASNetLarge + Xception | 95.9% | Three different datasets used, i.e. robust | High model complexity |
| Khan et al. [24] | 3-class | Chest X-ray | CoroNet | 95% | 4-class classification results | Limited data for COVID-19 class |
| | 4-class | | | 89.6% | | |
| | | | | | Balanced dataset | Hybrid methods are computationally expensive |
Fig. 8 ROC curves for (a) MobileNetV2, (b) VGG16, (c) Proposed.