Mehrad Aria, Esmaeil Nourani, Amin Golzari Oskouei.
Abstract
Rapid, highly reliable diagnosis of COVID-19 is essential in the early stages of the disease. To this end, recent research often combines medical imaging with machine vision methods to diagnose COVID-19. However, the scarcity of medical images and the inherent differences between existing datasets, which arise from different imaging tools, protocols, and specialists, may limit the generalization of machine learning-based methods. Moreover, most of these methods are trained and tested on the same dataset, which reduces generalizability and leads to low reliability of the obtained model in real-world applications. This paper introduces an adversarial deep domain adaptation-based approach for diagnosing COVID-19 from lung CT scan images, termed ADA-COVID. The domain adaptation-based training process receives multiple datasets with different input domains and generates domain-invariant representations of the medical images. In addition, because medical images are structurally far more similar to one another than images in typical machine vision tasks, we use the triplet loss function to generate similar representations for samples of the same class (infected cases). The performance of ADA-COVID is evaluated and compared with other state-of-the-art COVID-19 diagnosis algorithms. The results indicate that ADA-COVID improves classification by at least 3%, 20%, 20%, and 11% in accuracy, precision, recall, and F1 score, respectively, over the best results of competitors, even without training directly on the same data. The implementation source code of ADA-COVID is publicly available at https://github.com/MehradAria/ADA-COVID.
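The triplet loss mentioned above can be sketched with the standard margin-based formulation: pull an anchor toward a same-class ("positive") sample and push it away from an other-class ("negative") sample. This is a generic NumPy sketch of the technique, not the authors' exact implementation; the margin value is an assumed placeholder.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss: max(d(a, p) - d(a, n) + margin, 0).
    Encourages same-class embeddings (e.g., two infected cases) to be
    closer than different-class embeddings by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return np.maximum(d_pos - d_neg + margin, 0.0)

# Toy embeddings: anchor already far closer to the positive than the negative,
# so the margin constraint is satisfied and the loss is zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([3.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0
```

In a full pipeline this loss would be minimized jointly with the classification and domain-discrimination objectives, with triplets mined per mini-batch.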
Year: 2022 PMID: 35154300 PMCID: PMC8826267 DOI: 10.1155/2022/2564022
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Diagram of the proposed approach.
Transformations.
| Augmentation | Range/type |
|---|---|
| Brightness | [−10%, +10%] |
| Contrast | [−10%, +10%] |
| Rotation | [−20°, +20°] |
| JPEG noise | [50, 100] |
| Flip | Horizontal |
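One plausible way to realize the augmentation ranges in the table above is to sample a parameter set per image and apply the photometric transforms to a normalized array. This is an illustrative sketch under assumed conventions (contrast scaled around the image mean, additive brightness shift); it is not the authors' implementation, and rotation/JPEG quality are only sampled here, not applied.

```python
import random
import numpy as np

def sample_augmentation():
    """Draw one parameter set from the ranges listed in the table."""
    return {
        "brightness": random.uniform(-0.10, 0.10),    # [-10%, +10%]
        "contrast": random.uniform(-0.10, 0.10),      # [-10%, +10%]
        "rotation_deg": random.uniform(-20.0, 20.0),  # [-20 deg, +20 deg]
        "jpeg_quality": random.randint(50, 100),      # JPEG quality factor in [50, 100]
        "hflip": random.random() < 0.5,               # horizontal flip
    }

def apply_photometric(img, params):
    """Apply contrast/brightness/flip to a float image in [0, 1]."""
    mean = img.mean()
    out = mean + (img - mean) * (1.0 + params["contrast"])  # contrast around the mean
    out = out + params["brightness"]                        # additive brightness shift
    if params["hflip"]:
        out = out[:, ::-1]                                  # horizontal flip
    return np.clip(out, 0.0, 1.0)

params = sample_augmentation()
img = np.full((4, 4), 0.5)
aug = apply_photometric(img, params)
```

Rotation and JPEG recompression would typically be delegated to an image library; only the sampled ranges matter for reproducing the table.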
Figure 2. Preprocessed samples. (a) COVID-19 and (b) non-COVID-19.
Figure 3. Domain-invariant CNN encoder based on the ResNet-50 architecture.
Figure 4. General representation of a deep model trained with the adversarial training methodology in the multisource transfer learning setting.
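Adversarial domain-adaptation setups like the one in Figure 4 are commonly implemented with a gradient reversal layer (GRL): an identity in the forward pass whose backward pass flips the sign of the gradient, so minimizing the domain discriminator's loss simultaneously trains the encoder to produce domain-invariant features. The following is a minimal conceptual sketch with an assumed scaling coefficient `lam`; it does not reproduce the authors' code.

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; multiplies the incoming gradient by
    -lam on the backward pass. Placed between the feature encoder and the
    domain discriminator, it makes the encoder *maximize* the domain loss
    that the discriminator minimizes (the adversarial objective)."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off coefficient for the adversarial signal

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient to the encoder

grl = GradientReversal(lam=0.5)
feats = np.array([1.0, 2.0])
grad = np.array([0.3, -0.4])
fwd = grl.forward(feats)   # identical to feats
bwd = grl.backward(grad)   # sign-flipped, scaled by 0.5
```

In framework code this is usually a custom autograd function; the NumPy class above only illustrates the forward/backward contract.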
Characteristics of the utilized datasets.
| Dataset | No. of samples | COVID-19 samples | Non-COVID-19 samples | Image size range |
|---|---|---|---|---|
| SARS-CoV-2 CT scan (source dataset) | 2482 | 1252 | 1230 | 119 × 104 to 416 × 512 |
| COVID-19 CT (target dataset) | 746 | 349 | 397 | 124 × 153 to 1485 × 1853 |
Performance comparison of different models for detecting COVID-19 on the source dataset (best rates in bold).
| Model/method | Accuracy | Precision | Recall | F1 |
|---|---|---|---|---|
| AdaBoost | 95.1 | 93.6 | 96.7 | 95.1 |
| AlexNet | 93.7 | 94.9 | 92.2 | 93.6 |
| Decision tree | 79.4 | 76.8 | 83.1 | 79.8 |
| EfficientNetB0 | 98.9 | 99.1 | 98.9 | 99.0 |
| GoogleNet | 91.7 | 90.2 | 93.5 | 91.8 |
| ResNet50 | 94.9 | 93.0 | 97.1 | 95.0 |
| ResNet50V2 | 94.2 | 92.8 | 96.7 | 94.1 |
| ShuffleNet | 97.5 | 96.1 | 99.0 | 97.5 |
| SqueezeNet | 95.1 | 94.2 | 96.2 | 95.2 |
| VGG-16 | 94.9 | 94.0 | 95.4 | 94.9 |
| Xception | 98.8 | 99.0 | 98.6 | 98.8 |
| Contrastive learning | 90.8 | 95.7 | 85.8 | 90.8 |
| COVID CT-Net | 90.7 | 88.5 | 85.0 | 90.0 |
| DenseNet201-based | 96.2 | 96.2 | 96.2 | 96.2 |
| Modified VGG19 | 95.0 | 95.3 | 94.0 | 94.3 |
| xDNN | 97.3 | 99.1 | 95.5 | 97.3 |
Figure 5. Confusion matrix of evaluation on the test set of the source dataset.
Figure 6. Performance evaluation on 25 random samples from the test set. "I" is the image index, "P" is the predicted value, and "L" is the ground-truth label.
Performance comparison of different models for detecting COVID-19 on the target dataset.
| Model/method | Accuracy | Recall | Specificity | F1 |
|---|---|---|---|---|
| AlexNet | 74.5 | 70.4 | 79.0 | 75.0 |
| DenseNet-121 | 88.9 | 88.8 | 88.9 | 88.2 |
| DenseNet-169 | 91.2 | 93.3 | 88.9 | 90.8 |
| DenseNet-201 | 91.7 | 88.6 | 94.1 | 91.9 |
| GoogleNet | 78.9 | 75.9 | 82.3 | 79.0 |
| Inception-ResNet-v2 | 86.3 | 88.1 | 84.2 | 87.0 |
| Inception-v3 | 89.4 | 90.0 | 88.9 | 88.8 |
| MobileNet-v2 | 87.2 | 93.2 | 77.6 | 89.0 |
| NasNet-large | 85.2 | 79.3 | 91.9 | 84.0 |
| NasNet-Mobile | 83.4 | 84.8 | 81.9 | 85.0 |
| ResNet-101 | 89.7 | 82.2 | 89.2 | 89.0 |
| ResNet-18 | 90.1 | 89.4 | 90.9 | 91.0 |
| ResNet-50 | 90.8 | 90.0 | 91.0 | 90.1 |
| ResNeXt-101 | 90.9 | 93.1 | 88.9 | 90.6 |
| ResNeXt-50 | 90.6 | 93.4 | 88.2 | 90.3 |
| ShuffleNet | 86.1 | 83.5 | 89.0 | 86.0 |
| SqueezeNet | 78.5 | 86.5 | 63.8 | 82.0 |
| VGG-16 | 78.5 | 74.6 | 82.8 | 76.0 |
| VGG-19 | 83.2 | 90.7 | 74.7 | 85.0 |
| Xception | 85.6 | 88.3 | 80.6 | 87.7 |
| Contrastive learning | 78.6 | 78.0 | 77.0 | 78.8 |
| Decision function | 88.3 | 87.0 | 87.9 | 86.7 |
| DenseNet-121 + SVM | 85.9 | 84.9 | 86.8 | 86.2 |
| DenseNet-169-based | 83.0 | 84.8 | 85.5 | 81.0 |
| DenseNet-169-based | 87.7 | 85.6 | 86.9 | 87.8 |
| ResNet-101-based | 80.3 | 85.7 | 86.0 | 81.8 |
Cross-dataset evaluation results.
| Method | Training dataset | Test dataset | Accuracy | Recall | Precision |
|---|---|---|---|---|---|
| | Source | Target (train set) | 59.12 | 64.14 | 54.95 |
| | Source | Target (test set) | 56.16 | 53.06 | 54.74 |
| | Source | Target (all data) | 58.31 | 61.03 | 54.90 |
| | Target | Source | 45.25 | 54.39 | 46.36 |
| | Source | Target (train set) | 64.28 | 65.10 | 64.12 |
| | Source | Target (test set) | 62.05 | 63.30 | 62.00 |
| | Source | Target (all data) | 65.37 | 66.80 | 65.22 |
| | Target | Source | 57.00 | 59.41 | 56.88 |
| | Source | Target (train set) | 91.04 | 92.70 | 92.11 |
| | Source | Target (test set) | 90.88 | 91.00 | 91.90 |
| | Source | Target (all data) | | | |
| | Target | Source | 83.07 | 87.51 | 84.26 |
ADA-COVID vs. pretrained models.
| Reference | Data sources | No. of samples | Model | Performance |
|---|---|---|---|---|
| Ardakani et al. | Real-time data from the hospital environment | Total: 1,020 | AlexNet, VGG-16, | Accuracy: 99.51 |
| Chen et al. | Renmin Hospital of Wuhan University | Total: 35,355 | UNet++ | Accuracy: 98.85 |
| Cifci | Kaggle benchmark dataset | Total: 5,800 | AlexNet, Inception-V4 | Accuracy: 94.74 |
| Javaheri et al. | Five medical centers in Iran, SPIE-AAPM-NCI | Total: 89,145 | BCDU-Net (U-Net) | Accuracy: 91.66 |
| Jin et al. | Wuhan Union Hospital, | Total: 1,881 | ResNet152 | Accuracy: 94.98 |
| Jin et al. | Five different hospitals in China | Total: 1,391 | DPN-92, Inception-v3, | Recall: 97.04 |
| Li et al. | Multiple hospitals | Total: 4,536 | ResNet50 | Recall: 90 |
| Wu et al. | China Medical University, | Total: 495 | ResNet50 | Accuracy: 76 |
| Xu et al. | Zhejiang University, Hospital of Wenzhou, Hospital of Wenling | Total: 618 | ResNet18 | Accuracy: 86.7 |
| Yousefzadeh et al. | Real-time data from the hospital environment | Total: 2,124 | DenseNet, ResNet, | Accuracy: 96.4 |
| ADA-COVID (this work) | SARS-CoV-2 CT scan dataset | Total: 2,482 | ResNet50 | Accuracy: |
ADA-COVID vs. customized models.
| Reference | Data sources | No. of samples | Model | Performance |
|---|---|---|---|---|
| Amyar et al. | COVID CT | Total: 1,044 | Encoder-decoder with multilayer perceptron | Accuracy: 86.0 |
| Elghamrawy and Hassanien | COVID-19 database | Total: 583 | WOA-CNN | Accuracy: 96.40 |
| Farid et al. | Kaggle benchmark dataset | Total: 102 | CNN | Accuracy: 94.11 |
| Hasan et al. | COVID-19 | Total: 321 | QDE-DF | Accuracy: 99.68 |
| He et al. | COVID-19 database | Total: 746 | CRNet | Accuracy: 86.0 |
| Liu et al. | Ten designated COVID-19 hospitals in China | Total: 1,993 | Modified DenseNet-264 | Accuracy: 94.3 |
| Singh et al. | COVID-19 patient chest CT images | Total: 150 | MODE-CNN | Accuracy: 93.25 |
| Wang et al. | Xi'an Jiaotong University, Nanchang University, Xi'an Medical College | Total: 1,065 | Modified Inception | Accuracy: 79.3 |
| Song et al. | Hospital of Wuhan University, third affiliated hospital | Total: 1,990 | DRE-Net | Accuracy: 94.3 |
| Zheng et al. | Union Hospital, Tongji Medical College, Huazhong University of Science and Technology | Total: 630 | DeCoVNet | Accuracy: 90.1 |
| ADA-COVID (this work) | SARS-CoV-2 CT scan dataset | Total: 2,482 | ResNet50 | Accuracy: |