Gaetano Zazzaro, Francesco Martone, Gianpaolo Romano, Luigi Pavone.
Abstract
BACKGROUND: The aim of this study was to evaluate the performance of an automated COVID-19 detection method based on a transfer learning technique that makes use of chest computed tomography (CT) images.
Keywords: COVID-19; computed tomography; deep learning; medical imaging; transfer learning
Year: 2021 PMID: 34945278 PMCID: PMC8708436 DOI: 10.3390/jcm10245982
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.241
Figure 1. Summary of the pipeline used in this study.
Transfer learning subsettings.
| N | Subsetting Name | Description | Label Information |
|---|---|---|---|
| 1 | Inductive TL | the target task is different from, but related to, the source task | labeled data comes from the target domain |
| 2 | Transductive TL | the source and target tasks are the same, while the source and target domains are different | labeled data comes from the source domain |
| 3 | Unsupervised TL | the target task is different from, but related to, the source task, and the focus is on solving unsupervised learning tasks in the target domain | label information is always unknown for both the source and the target domains |
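The inductive setting (row 1) is the one relevant here: a network pre-trained on a source domain is reused as a frozen feature extractor, and only a lightweight classifier is trained on target-domain labels. A minimal sketch of that idea, where the "frozen backbone" is a fixed random projection standing in for a real pre-trained CNN such as DenseNet121 (an assumption for illustration, not the paper's implementation):

```python
import numpy as np

# Hypothetical stand-in for a frozen, pre-trained CNN backbone:
# a fixed random projection whose weights are reused unchanged.
rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((64, 16))  # frozen source-domain weights

def extract_features(images):
    """Apply the frozen backbone to target-domain images (no retraining)."""
    flat = images.reshape(len(images), -1).astype(float)
    return flat @ W_frozen  # linear features, for simplicity

def fit_head(features, labels):
    """Train only a lightweight head (nearest centroid) on target labels."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(head, features):
    """Assign each instance the label of the closest class centroid."""
    classes = list(head)
    dists = np.stack([np.linalg.norm(features - head[c], axis=1) for c in classes])
    return np.array([classes[i] for i in dists.argmin(axis=0)])
```

Only `fit_head` sees target labels; the backbone weights never change, which is what distinguishes this from full fine-tuning.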
Figure 2. The typical architecture of a convolutional neural network.
Number of selected features by information gain for each deep neural network.
| N | Deep Neural Network | Maximum Value of Information Gain | N. of Original Features | N. of Selected Features | Percentage Reduction of Features by IG |
|---|---|---|---|---|---|
| 1 | DenseNet121 | 0.145 | 1025 | 930 | 9.27% |
| 2 | DenseNet169 | 0.162 | 1665 | 1488 | 10.63% |
| 3 | DenseNet201 | 0.165 | 1921 | 1669 | 13.12% |
| 4 | EfficientNetB0 | 0.203 | 1281 | 1159 | 9.52% |
| 5 | EfficientNetB1 | 0.160 | 1281 | 1202 | 6.17% |
| 6 | EfficientNetB2 | 0.187 | 1409 | 1314 | 6.74% |
| 7 | EfficientNetB3 | 0.207 | 1537 | 1431 | 6.90% |
| 8 | EfficientNetB4 | 0.141 | 1793 | 1598 | 10.88% |
| 9 | EfficientNetB5 | 0.164 | 2049 | 1903 | 7.13% |
| 10 | EfficientNetB6 | 0.181 | 2305 | 2074 | 10.02% |
| 11 | EfficientNetB7 | 0.157 | 2561 | 2315 | 9.61% |
| 12 | InceptionResNetV2 | 0.126 | 1737 | 1533 | 11.74% |
| 13 | InceptionV3 | 0.186 | 2049 | 1899 | 7.32% |
| 14 | MobileNet | 0.175 | 1025 | 829 | 19.12% |
| 15 | MobileNetV2 | 0.144 | 1281 | 889 | 30.60% |
| 16 | MobileNetV3Large | 0.116 | 1281 | 1080 | 15.69% |
| 17 | MobileNetV3Small | 0.111 | 1025 | 849 | 17.17% |
| 18 | ResNet50 | 0.333 | 2049 | 1735 | 15.32% |
| 19 | ResNet50V2 | 0.150 | 2049 | 955 | 53.39% |
| 20 | ResNet101 | 0.204 | 2049 | 1717 | 16.20% |
| 21 | ResNet101V2 | 0.183 | 2049 | 797 | 61.10% |
| 22 | ResNet152 | 0.284 | 2049 | 1665 | 18.74% |
| 23 | ResNet152V2 | 0.206 | 2049 | 1518 | 25.92% |
| 24 | VGG16 | 0.156 | 513 | 404 | 21.25% |
| 25 | VGG19 | 0.184 | 513 | 440 | 14.23% |
| 26 | Xception | 0.138 | 2049 | 1071 | 47.73% |
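Information gain for a (discretized) feature is the drop in label entropy after conditioning on that feature; features whose gain exceeds a threshold are kept, which yields the reductions in the table above. A minimal sketch of this selection step (the exact discretization and threshold used in the paper are assumptions here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG = H(Y) - sum_v p(v) * H(Y | feature = v), for discretized values."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        idx = [i for i, x in enumerate(feature_values) if x == v]
        cond += (len(idx) / n) * entropy([labels[i] for i in idx])
    return entropy(labels) - cond

def select_features(columns, labels, threshold=0.0):
    """Keep the indices of columns whose information gain exceeds threshold."""
    return [j for j, col in enumerate(columns)
            if information_gain(col, labels) > threshold]
```

A perfectly predictive feature attains the full label entropy as its gain, while a feature independent of the labels scores zero and is dropped.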
Average accuracies of the 26 classifiers.
| N | Name | Accuracy of k-NN (k = 1), 10-Fold Cross-Validated | Accuracy of k-NN (k = 1) on 10% Test Set |
|---|---|---|---|
| 1 | DenseNet121 | 93.4186% | 93.3014% |
| 2 | DenseNet169 | 92.5926% | 94.7368% |
| 3 | DenseNet201 | 91.3403% | 90.1914% |
| 4 | EfficientNetB0 | 95.2038% | 95.933% |
| 5 | EfficientNetB1 | 96.7493% | 96.6507% |
| 6 | EfficientNetB2 | 93.6318% | 94.7368% |
| 7 | EfficientNetB3 | 93.7393% | 96.1722% |
| 8 | EfficientNetB4 | 92.8058% | 93.5407% |
| 9 | EfficientNetB5 | 91.8732% | 91.3876% |
| 10 | EfficientNetB6 | 88.5159% | 87.5598% |
| 11 | EfficientNetB7 | 92.3261% | 94.0191% |
| 12 | InceptionResNetV2 | 78.737% | 77.2727% |
| 13 | InceptionV3 | 88.5425% | 88.756% |
| 14 | MobileNet | 90.9406% | 90.6699% |
| 15 | MobileNetV2 | 91.3669% | 93.7799% |
| 16 | MobileNetV3Large | 88.5425% | 90.1914% |
| 17 | MobileNetV3Small | 84.3858% | 83.7321% |
| 18 | ResNet50 | 95.3637% | 96.1722% |
| 19 | ResNet50V2 | 81.1617% | 81.5789% |
| 20 | ResNet101 | 94.1913% | 93.5407% |
| 21 | ResNet101V2 | 86.2776% | 84.9282% |
| 22 | ResNet152 | 94.1114% | 96.89% |
| 23 | ResNet152V2 | 84.0927% | 85.6459% |
| 24 | VGG16 | 94.7509% | 93.3014% |
| 25 | VGG19 | 94.5377% | 95.933% |
| 26 | Xception | 82.6272% | 85.4067% |
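Each feature set above is scored with a 1-nearest-neighbor classifier under 10-fold cross-validation. A small numpy sketch of that evaluation protocol (brute-force distances; not the paper's implementation):

```python
import numpy as np

def one_nn_predict(train_X, train_y, test_X):
    """1-NN: each test point takes the label of its closest training point."""
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    return train_y[dists.argmin(axis=1)]

def cross_validated_accuracy(X, y, k_folds=10, seed=0):
    """Mean 1-NN accuracy over k_folds random folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k_folds)
    accs = []
    for fold in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False  # hold the fold out of training
        pred = one_nn_predict(X[mask], y[mask], X[fold])
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))
```

The reported "10% test set" column corresponds to a single held-out split rather than the averaged folds.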
Confusion matrix on test set.
| Actual \ Classified as | YES | NO |
|---|---|---|
| YES | TP = 215 | FN = 2 |
| NO | FP = 2 | TN = 199 |

Meta-classifier accuracy: 99.04%
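The meta-classifier behind this confusion matrix combines the predictions of the 26 base classifiers by majority voting: each classifier votes on every CT image and the most common label wins. A minimal sketch:

```python
from collections import Counter

def majority_vote(predictions_per_classifier):
    """Meta-classifier: for each instance, return the label predicted by
    the largest number of base classifiers (26 in the paper's setup)."""
    n_instances = len(predictions_per_classifier[0])
    return [
        Counter(p[i] for p in predictions_per_classifier).most_common(1)[0][0]
        for i in range(n_instances)
    ]
```

With an odd number of voters and two classes, ties cannot occur; with an even number, `most_common` falls back to first-encountered order, so a deterministic tie-break rule would be needed.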
Performance metrics of the meta-classifier on test set.
| Symbol | Performance Metric | Definition | What Does It Measure? | Value |
|---|---|---|---|---|
| CCR | Correctly Classified instance Rate—Accuracy | (TP + TN)/(TP + TN + FP + FN) | How good the model is at correctly predicting both positive and negative cases | 0.9904 |
| TPR | True Positive Rate—Sensitivity—Recall | TP/(TP + FN) | How good the model is at correctly predicting positive cases | 0.9908 |
| FPR | False Positive Rate—Fall-out | FP/(FP + TN) | Proportion of incorrectly classified negative cases | 0.010 |
| PPV | Positive Predictive Value—Precision | TP/(TP + FP) | Proportion of correctly classified positive cases out of total positive predictions | 0.9908 |
| AUC | ROC Area | Area under the ROC curve | Area under plot of TPR against FPR | 0.997 |
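All of these metrics except AUC (which needs the full ROC curve, not just the counts) follow directly from the confusion matrix above; a quick check reproduces the table's values:

```python
def confusion_metrics(tp, fn, fp, tn):
    """Derive the table's metrics from confusion-matrix counts."""
    return {
        "CCR": (tp + tn) / (tp + tn + fp + fn),  # accuracy
        "TPR": tp / (tp + fn),                   # sensitivity / recall
        "FPR": fp / (fp + tn),                   # fall-out
        "PPV": tp / (tp + fp),                   # precision
    }

# Counts from the test-set confusion matrix above.
m = confusion_metrics(tp=215, fn=2, fp=2, tn=199)
```

Rounding `m` to the table's precision gives CCR = 0.9904, TPR = 0.9908, FPR = 0.010, and PPV = 0.9908.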
Related works for COVID-19 infection detection.
| Author | ML Approach | Data Source | Transfer Learning | Achieved Performance |
|---|---|---|---|---|
| Alshazly et al. | Pre-trained SqueezeNet, Inception, ResNet, ResNeXt, Xception, ShuffleNet, and DenseNet CNNs with fine-tuning | 2482 CT images + 746 CT images | Not declared | Accuracy: 99.4% and 92.9% on the two datasets |
| Xu et al. | ROI segmentation with 3D CNN + classification with ad hoc ResNet-18 CNN | 618 CT images | No | Accuracy: 86.7% |
| Wang et al. | Pre-trained Inception CNN with fine-tuning | 1065 CT images | Not declared | Accuracy: 79.3% |
| Gozes et al. | Pre-trained ResNet-50 CNN with fine-tuning | CT scans of 206 patients | ImageNet | AUC: 0.996 |
| Hasan et al. | DenseNet-121 CNN | 2482 CT images | No | Accuracy: 92% |
| Rohila et al. | Ad hoc deep learning network based on ResNet-101 | CT scans of 1110 patients | Yes, but not ImageNet | Accuracy: 94.9% |
| Soares et al. | xDNN (eXplainable Deep Neural Network) | 2482 CT images | ImageNet | Accuracy: 97.4% |
| Loddo et al. | Pre-trained AlexNet, residual networks (ResNet18, ResNet50, ResNet101), GoogLeNet, ShuffleNet, MobileNetV2, InceptionV3, VGG16, and VGG19 | 470 + 194,122 chest CT images | No | Accuracy: 98.87% (network comparison) |
| Ours | Pre-trained CNNs, k-Nearest Neighbors with 10-fold cross-validation, majority voting | 2482 CT images | Yes, ImageNet | Accuracy: 99.04% |