Hongbin Zhang1, Weinan Liang1, Chuanxiu Li2, Qipeng Xiong1, Haowei Shi1, Lang Hu1, Guangli Li2.
Abstract
COVID-19 is a disease caused by a novel strain of coronavirus. Automatic COVID-19 recognition with computer-aided methods can speed up diagnosis. Current research usually focuses on deeper or wider neural networks for COVID-19 recognition, while the implicit contrastive relationships between different samples remain largely unexplored. To address these problems, we propose a novel model, called deep contrastive mutual learning (DCML), to diagnose COVID-19 more effectively. A multi-way data augmentation strategy based on Fast AutoAugment (FAA) is employed to enrich the original training dataset, which helps reduce the risk of overfitting. We then incorporate the popular contrastive learning idea into the conventional deep mutual learning (DML) framework to mine the relationships between diverse samples, and create more discriminative image features through a new adaptive model fusion method. Experimental results on three public datasets demonstrate that the DCML model outperforms other state-of-the-art baselines. More importantly, DCML is easier to reproduce and relatively efficient, which strengthens its practicality.
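The abstract builds on conventional deep mutual learning, where peer networks teach each other through a mimicry term on top of the supervised loss. As a minimal, hypothetical sketch of that DML half (NumPy, illustrative names; the record does not give the paper's exact losses or the contrastive term), each peer minimizes cross-entropy plus a KL divergence toward its peer's predictions:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mutual_learning_loss(logits_a, logits_b, labels, alpha=1.0):
    """Illustrative DML objective for network A: supervised cross-entropy
    plus a KL term pulling A's predictions toward peer B's.
    `alpha` is a hypothetical weighting; the paper's formulation may differ."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    n = len(labels)
    ce = -np.log(p_a[np.arange(n), labels] + 1e-12).mean()
    kl = (p_b * (np.log(p_b + 1e-12) - np.log(p_a + 1e-12))).sum(axis=1).mean()
    return ce + alpha * kl
```

When both peers agree exactly, the KL term vanishes and the loss reduces to plain cross-entropy; disagreement between peers increases the loss, which is what drives the mutual distillation.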
Keywords: Adaptive model fusion; COVID-19 recognition; Contrastive learning; Deep mutual learning; Fast AutoAugment
Year: 2022 PMID: 35530170 PMCID: PMC9058053 DOI: 10.1016/j.bspc.2022.103770
Source DB: PubMed Journal: Biomed Signal Process Control ISSN: 1746-8094 Impact factor: 5.076
Fig. 1. The technological pipeline of the DCML model.
Fig. 2. The technological procedure of FAA.
Fig. 3. The adaptive model fusion process.
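The adaptive model fusion of Fig. 3 is not specified in this record; one plausible sketch (a purely illustrative scheme that weights each peer network by its validation accuracy, not necessarily the paper's rule) combines the peers' class probabilities as a convex mixture:

```python
import numpy as np

def adaptive_fuse(probs_a, probs_b, val_acc_a, val_acc_b):
    """Hypothetical adaptive fusion: weight each peer network's class
    probabilities by its normalized validation accuracy, so the stronger
    model contributes more. The paper's actual weighting may differ."""
    w_a = val_acc_a / (val_acc_a + val_acc_b)
    w_b = 1.0 - w_a
    return w_a * probs_a + w_b * probs_b
```

Because the weights sum to one, the fused output remains a valid probability distribution over the classes.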
Fig. 4. Exhibition of the datasets.
Statistics of the three public COVID image datasets (unit: images).
| Dataset | Split | Train Pos | Train Neg | Test Pos | Test Neg |
|---|---|---|---|---|---|
| COVID-CT | Original | 251 | 292 | 98 | 105 |
| | After FAA | 502 | 584 | 98 | 105 |
| SARS-Cov-2 | Original | 939 | 923 | 313 | 307 |
| | After FAA | 1878 | 1846 | 313 | 307 |
| COVID-19_Radiography_Dataset | Original | 2712 | 7644 | 904 | 2548 |
| | After FAA | 5424 | 15,288 | 904 | 2548 |
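The table shows that FAA exactly doubles each training split (e.g. 251 → 502 positives on COVID-CT) while leaving the test split untouched. A toy sketch of that bookkeeping, with a placeholder policy (the real FAA policy is searched per dataset and the operation names here are illustrative):

```python
import random

# Hypothetical sub-policies in the spirit of Fast AutoAugment: each is an
# (operation, magnitude) pair. A real FAA policy is learned from the data.
POLICY = [("rotate", 15), ("shear", 0.1), ("contrast", 1.3)]

def apply_policy(image, policy=POLICY):
    """Apply one randomly chosen sub-policy (placeholder transform)."""
    op, mag = random.choice(policy)
    return (image, op, mag)  # stand-in for the transformed image

def augment_dataset(train_images):
    """Add one augmented copy per original image, so the training set
    doubles, matching the 'After FAA' rows of the table."""
    return train_images + [apply_policy(img) for img in train_images]
```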
Performance comparisons with baselines. The best value in each column is shown in bold. “w” means with FAA and “w/o” means without FAA.
| Dataset | Model | Accuracy↑ | F1↑ | Sensitivity↑ | Precision↑ | AUC↑ |
|---|---|---|---|---|---|---|
| COVID-CT | ResNet50 | 0.7600 | 0.7300 | 0.6700 | 0.8000 | 0.7600 |
| | ResNet101 | 0.7900 | 0.7800 | 0.8000 | 0.7600 | 0.7900 |
| | DenseNet201 | 0.7900 | 0.7700 | 0.7300 | 0.8100 | 0.7900 |
| | VGG-19 | 0.8000 | 0.7900 | 0.7900 | 0.8100 | 0.8000 |
| | SqueezeNet | 0.7300 | 0.7000 | 0.6500 | 0.8000 | 0.7300 |
| | SeR50 | 0.7488 | 0.7302 | 0.7041 | 0.7582 | 0.7588 |
| | SeR101 | 0.7537 | 0.7573 | 0.7959 | 0.7222 | 0.7301 |
| | Evidential Covid-Net | 0.7310 | 0.7020 | / | / | 0.8700 |
| | COVID-Net | 0.6312 | 0.6109 | 0.5773 | 0.6403 | 0.7109 |
| | Series Adapter | 0.7001 | 0.6708 | 0.7491 | 0.6304 | 0.7392 |
| | Parallel Adapter | 0.7493 | 0.7346 | 0.7181 | 0.7984 | 0.8029 |
| | MS-Net | 0.7623 | 0.7654 | 0.7407 | 0.7929 | 0.8219 |
| | Zhao et al. | 0.7869 | 0.7883 | 0.7971 | 0.7802 | 0.8532 |
| | SeR50 (DML) | 0.8079 | 0.7983 | 0.8102 | 0.8087 | 0.8083 |
| | SeR101 (DML) | 0.8621 | 0.8516 | 0.8029 | 0.8559 | |
| | DCML (w/o FAA) | 0.8768 | 0.8737 | 0.8807 | 0.8829 | 0.8807 |
| | DCML (w FAA) | 0.8800 | | | | |
| SARS-Cov-2 | ResNet101 | 0.9496 | 0.9503 | 0.9715 | 0.9300 | 0.9498 |
| | GoogleNet | 0.9173 | 0.9182 | 0.9350 | 0.9020 | 0.9179 |
| | VGG-16 | 0.9496 | 0.9497 | 0.9543 | 0.9402 | 0.9496 |
| | AlexNet | 0.9375 | 0.9361 | 0.9228 | 0.9498 | 0.9368 |
| | SeR50 | 0.8824 | 0.8828 | 0.8786 | 0.8871 | 0.8821 |
| | SeR101 | 0.9098 | 0.9088 | 0.8914 | 0.9269 | 0.9311 |
| | Angelov et al. | 0.8860 | 0.8915 | 0.8860 | 0.8970 | / |
| | Panwar et al. | 0.9404 | 0.9450 | 0.9400 | 0.9500 | / |
| | Sen et al. | 0.9532 | 0.9530 | 0.9530 | 0.9530 | / |
| | Jaiswal et al. | 0.9625 | 0.9629 | 0.9629 | 0.9629 | / |
| | COVID-Net | 0.7712 | 0.7603 | 0.7097 | 0.8004 | 0.8408 |
| | Series Adapter | 0.8573 | 0.8619 | 0.8191 | 0.9098 | 0.9293 |
| | Parallel Adapter | 0.8213 | 0.8239 | 0.8002 | 0.8351 | 0.8999 |
| | MS-Net | 0.8798 | 0.8873 | 0.8491 | 0.9378 | 0.9437 |
| | Zhao et al. | 0.9083 | 0.9087 | 0.8589 | 0.9575 | 0.9624 |
| | SeR50 (DML) | 0.9274 | 0.9251 | 0.9423 | 0.9103 | 0.9229 |
| | SeR101 (DML) | 0.9452 | 0.9400 | 0.9350 | 0.9494 | 0.9419 |
| | DCML (w/o FAA) | 0.9565 | 0.9553 | 0.9596 | 0.9561 | 0.9596 |
| | DCML (w FAA) | | | | | |
| COVID-19_Radiography_Dataset | VGG19 | 0.9000 | 0.8800 | 0.9400 | 0.8300 | 0.8700 |
| | ResNet101 | 0.8751 | 0.9220 | 0.8701 | 0.8923 | 0.8910 |
| | ResNet50 | 0.8866 | 0.8744 | 0.8996 | 0.8612 | 0.9566 |
| | InceptionResNetV2 | 0.9485 | 0.9617 | 0.9401 | 0.9447 | 0.9201 |
| | Fast-CovNet | 0.8211 | 0.8006 | 0.6765 | 0.7664 | |
| | Inceptionv3 | 0.7668 | 0.8163 | 0.7009 | 0.9771 | 0.8021 |
| | NasNet | 0.9583 | 0.9594 | 0.9337 | 0.9858 | |
| | GLCM | 0.9222 | 0.9030 | 0.8895 | 0.7911 | 0.8156 |
| | EfficientNet | 0.8084 | 0.7707 | 0.7244 | 0.8602 | 0.8200 |
| | SeR50 | 0.8674 | 0.8719 | 0.9271 | 0.8065 | 0.8696 |
| | SeR101 | 0.8786 | 0.8878 | 0.9392 | 0.8148 | 0.8873 |
| | SeR50 (DML) | 0.9545 | 0.9442 | 0.9532 | 0.8834 | 0.9412 |
| | SeR101 (DML) | 0.9539 | 0.9576 | 0.9591 | 0.9470 | 0.9667 |
| | DCML (w/o FAA) | 0.9826 | 0.9762 | 0.9770 | 0.9778 | 0.9771 |
| | DCML (w FAA) | 0.9770 | 0.9837 | | | |
Five-fold cross-validation results. Avg denotes the average value; Std denotes the standard deviation.
| Metric | 1st fold | 2nd fold | 3rd fold | 4th fold | 5th fold | Avg | Std |
|---|---|---|---|---|---|---|---|
| Accuracy | 0.9849 | 0.9693 | 0.9880 | 0.9841 | 0.9913 | 0.9835 | 0.0084 |
| F1 | 0.9794 | 0.9386 | 0.9842 | 0.9785 | 0.9879 | 0.9737 | 0.0200 |
| Sensitivity | 0.9822 | 0.9346 | 0.9876 | 0.9764 | 0.9893 | 0.9740 | 0.0226 |
| Precision | 0.9833 | 0.9509 | 0.9879 | 0.9838 | 0.9891 | 0.9790 | 0.0159 |
| AUC | 0.9819 | 0.9588 | 0.9873 | 0.9757 | 0.9893 | 0.9786 | 0.0123 |
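The Avg and Std columns can be checked directly from the five fold scores. Assuming the sample standard deviation (ddof=1), the accuracy row reproduces exactly:

```python
import numpy as np

# Accuracy across the five folds, copied from the table above.
acc = np.array([0.9849, 0.9693, 0.9880, 0.9841, 0.9913])

avg = acc.mean()
std = acc.std(ddof=1)  # sample standard deviation (assumed; ddof=0 would give 0.0076)
print(round(avg, 4), round(std, 4))  # 0.9835 0.0084
```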
Real-time efficiency. The best value on each dataset is shown in bold (e.g. 5.21).
| Dataset | Model | Real-time efficiency (×10⁻⁴ s)↓ |
|---|---|---|
| COVID-CT | SeR50 | 6.86 |
| | SeR101 | 5.22 |
| | SeR50 (DML) | 5.52 |
| | SeR101 (DML) | 5.22 |
| | DCML | |
| SARS-Cov-2 | SeR50 | 2.55 |
| | SeR101 | 3.53 |
| | SeR50 (DML) | 2.03 |
| | SeR101 (DML) | 1.98 |
| | DCML | |
| COVID-19_Radiography_Dataset | SeR50 | 6.37 |
| | SeR101 | 6.29 |
| | SeR50 (DML) | 5.79 |
| | SeR101 (DML) | 5.50 |
| | DCML | |
Fig. 5. Real-time running curves.
Comparisons with baselines. The best value in each column is shown in bold. Here, “w” means with FAA whereas “w/o” means without FAA.
| Model | Accuracy |
|---|---|
| ResNet18 (w/o FAA) | 0.9000 |
| ResNet18 (w FAA) | 0.9300 |
| COVIDNet-Small (w/o FAA) | 0.9000 |
| COVIDNet-Small (w FAA) | 0.9200 |
| COVIDNet-Large (w/o FAA) | 0.9400 |
| COVIDNet-Large (w FAA) | 0.9600 |
| WildCat (w/o FAA) | 0.9032 |
| WildCat (w FAA) | 0.9174 |
| DCML (w/o FAA) | 0.9826 |
| DCML (w FAA) | |
Fig. 6. Comparisons with the conventional DML model.
Fig. 7. Ablation analysis on each dataset.