Waleed M Bahgat, Hossam Magdy Balaha, Yousry AbdulAzeem, Mahmoud M Badawy.
Abstract
Accurate and fast detection of COVID-19 patients is crucial to control this pandemic. Due to the scarcity of COVID-19 testing kits, especially in developing countries, there is a pressing need to rely on alternative diagnosis methods. Deep learning architectures built on image modalities can speed up the differentiation of COVID-19 pneumonia from other types of pneumonia. The transfer learning approach is better suited to automatically detect COVID-19 cases due to the limited availability of medical images. This paper introduces an Optimized Transfer Learning-based Approach for Automatic Detection of COVID-19 (OTLD-COVID-19) that applies an optimization algorithm to twelve CNN architectures to diagnose COVID-19 cases using chest x-ray images. The OTLD-COVID-19 approach adapts the Manta-Ray Foraging Optimization (MRFO) algorithm to optimize the network hyperparameters' values of the CNN architectures to improve their classification performance. The proposed dataset is collected from eight different public datasets to classify 4-class cases (COVID-19, pneumonia bacterial, pneumonia viral, and normal). The experimental results showed that the optimized DenseNet169 architecture achieved the best performance. The evaluation results based on Loss, Accuracy, F1-score, Precision, Recall, Specificity, AUC, Sensitivity, IoU, and Dice values reached 0.0523, 98.47%, 0.9849, 98.50%, 98.47%, 99.50%, 0.9983, 0.9847, 0.9860, and 0.9879, respectively.
Keywords: COVID-19; Classification; Deep convolutional neural network; Transfer learning; X-ray images
Year: 2021 PMID: 34141886 PMCID: PMC8176553 DOI: 10.7717/peerj-cs.555
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
Figure 1. Coronavirus outbreaks and lowering infection rates (World Health Organization, 2020b).
Figure 2. Coronavirus diagnosis techniques.
Figure 3. The guidelines that represent the contributions of this study.
The CNN architectures comparison.
| Architecture | Parameters | Error rate | Category |
|---|---|---|---|
| | 60 M | ImageNet: 16.4 | Spatial Exploitation |
| | 138 M | ImageNet: 7.3 | Spatial Exploitation |
| | 4 M | ImageNet: 6.7; ImageNet: 3.5 | Spatial Exploitation |
| | 23.6 M | Multi-Crop: 3.58; Single-Crop: 5.6 | Depth + Width |
| | 35 M | ImageNet: 4.01 | Depth + Width |
| | 55.8 M | ImageNet: 3.52 | Depth + Width + Multi-Path |
| | 25.6 M | ImageNet: 3.6 | Depth + Multi-Path |
| | 1.7 M | CIFAR-10: 6.43 | |
| | 22.8 M | ImageNet: 0.055; CIFAR-10: 3.58; CIFAR-100: 17.31 | Width |
| | 68.1 M | ImageNet: 4.4 | Width |
| | 25.6 M | CIFAR-10+: 3.46 | |
| | 25.6 M | CIFAR-100+: 17.18 | Multi-Path |
| | 15.3 M | CIFAR-10: 5.19 | |
| | 15.3 M | CIFAR-100: 19.64 | |
| | 4.2 M | ImageNet: 10.5 | Depth + Width |
| | 3.5 M | ImageNet | Depth + Width |
The CNN hyperparameters.
| Category | Hyperparameters | Definition |
|---|---|---|
| Network Structure | Hidden Layers | The number of layers between the input and output layers. |
| | Kernel Size | The height and width of the 2D convolution window. |
| | Kernel Type | The applied filter (e.g., edge detection, sharpen). |
| | Stride | The step size of the kernel as it slides across the image. |
| | Padding | The extra zero-valued pixels added around the boundary of the input image. |
| | Dropout | The percentage of neurons that are randomly ignored during training to prevent overfitting. |
| | Activation Functions | The mathematical functions that allow the model to learn nonlinear decision boundaries. |
| Training Methodology | Learning Rate | How quickly a network updates its parameters. |
| | Momentum | The extent to which the previous update affects the current weight update. |
| | Number of Epochs | The number of complete passes through the training dataset. |
| | Batch Size | The number of samples presented to the network before the weights are updated. |
| | Optimizer | The parameter-update technique. |
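The structural hyperparameters above interact in a simple way: kernel size, stride, and padding jointly determine a convolution layer's spatial output size. A minimal sketch of that standard relation (the 64 × 64 input matches the image size used in this study; the specific kernel/stride/padding values are illustrative):

```python
def conv_output_size(in_size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Spatial output size of a 2D convolution along one axis:
    floor((in + 2*padding - kernel) / stride) + 1."""
    return (in_size + 2 * padding - kernel) // stride + 1

# A 64x64 input with a 3x3 kernel, stride 1, and zero padding of 1
# keeps the spatial size unchanged:
print(conv_output_size(64, kernel=3, stride=1, padding=1))  # 64
```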
Figure 4. Illustration of chain foraging behavior in 2-D space (Zhao, Zhang & Wang, 2020).
Figure 5. Illustration of cyclone foraging behavior in 2-D space (Zhao, Zhang & Wang, 2020).
Figure 6. Illustration of somersault foraging behavior in the MRFO (Zhao, Zhang & Wang, 2020).
Figure 7. The proposed OTLD-COVID-19 approach.
The TCO Pseudocode.
The Manta-Ray Foraging Optimization (MRFO) Algorithm Pseudocode.
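MRFO (Zhao, Zhang & Wang, 2020) combines the three foraging behaviors illustrated in Figures 4 to 6: chain foraging (each ray follows the one ahead of it and the best solution), cyclone foraging (spiral movement around the best solution or a random point), and somersault foraging (pivoting around the best solution). A simplified NumPy sketch of these update rules on a toy sphere objective follows; the objective, bounds, and population settings are illustrative only and are not the paper's fitness function, which trains a CNN per candidate:

```python
import numpy as np

def mrfo(objective, dim, lb, ub, pop=10, iters=20, seed=0):
    """Simplified Manta-Ray Foraging Optimization (Zhao, Zhang & Wang, 2020)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    F = np.apply_along_axis(objective, 1, X)
    best = X[F.argmin()].copy()
    fbest = float(F.min())
    for t in range(1, iters + 1):
        for i in range(pop):
            r = rng.random(dim)
            prev = best if i == 0 else X[i - 1]   # ray 0 follows the best solution
            if rng.random() < 0.5:                # cyclone foraging (spiral movement)
                r1 = rng.random(dim)
                beta = 2.0 * np.exp(r1 * (iters - t + 1) / iters) * np.sin(2.0 * np.pi * r1)
                # Explore around a random point early, exploit around the best later.
                ref = best if t / iters > rng.random() else rng.uniform(lb, ub, dim)
                X[i] = ref + r * (prev - X[i]) + beta * (ref - X[i])
            else:                                 # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(r + 1e-12)))
                X[i] = X[i] + r * (prev - X[i]) + alpha * (best - X[i])
            X[i] = np.clip(X[i], lb, ub)
        F = np.apply_along_axis(objective, 1, X)
        if F.min() < fbest:
            fbest, best = float(F.min()), X[F.argmin()].copy()
        for i in range(pop):                      # somersault foraging (factor S = 2)
            r2, r3 = rng.random(dim), rng.random(dim)
            X[i] = np.clip(X[i] + 2.0 * (r2 * best - r3 * X[i]), lb, ub)
        F = np.apply_along_axis(objective, 1, X)
        if F.min() < fbest:
            fbest, best = float(F.min()), X[F.argmin()].copy()
    return best, fbest

# Toy usage: minimize the sphere function in 4-D
# (one dimension per optimized hyperparameter).
best, fbest = mrfo(lambda x: float(np.sum(x ** 2)), dim=4, lb=-5.0, ub=5.0)
print(fbest)
```

In the OTLD-COVID-19 setting, each candidate position encodes a hyperparameter configuration and the objective is the validation performance of the resulting transfer-learning model.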
Description of the datasets used.
| # | Dataset | COVID-19 | P-virus | P-bacteria | Normal |
|---|---|---|---|---|---|
| 1 | COVID-19 Radiography Database | 408 | 38 | 46 | 0 |
| 2 | Pneumonia (virus) Vs. COVID-19 | 70 | 1,493 | 0 | 0 |
| 3 | COVID-19 X-ray images using CNN | 140 | 0 | 0 | 144 |
| 4 | COVID-19 X-ray Images | 88 | 0 | 0 | 1,002 |
| 5 | COVID-19 Patients Lungs X Ray Images 10000 | 70 | 0 | 0 | 28 |
| 6 | COVID-19 Chest X Rays | 69 | 0 | 0 | 79 |
| 7 | COVID-19 Dataset | 25 | 0 | 0 | 25 |
| 8 | Curated Chest X-Ray Image Dataset for COVID-19 | 1,281 | 1,656 | 3,001 | 3,270 |
| | Total | 2,151 | 3,187 | 3,047 | 4,548 |
Figure 8. Distribution of the datasets and their categories.
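The per-class totals from the eight pooled datasets can be cross-checked against the overall dataset size of 12,933 images reported in the experiment configuration. A short sketch using the counts from the table above:

```python
# Per-class image counts pooled from the eight public datasets (from the table above).
counts = {"COVID-19": 2151, "P-virus": 3187, "P-bacteria": 3047, "Normal": 4548}

total = sum(counts.values())
print(total)  # 12933, matching the "Dataset Size" entry in the configuration table

# Class shares (percent) highlight the imbalance across the four categories.
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(shares)
```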
Summary of the experiment configurations.
| Key | Value |
|---|---|
| Dataset | 8 Resources |
| Categories | “Normal”, “P-Viral”, “P-Bacterial”, and “COVID-19” |
| Dataset Size | 12,933 |
| Pre-trained Models | DenseNet121, DenseNet169, DenseNet201, Xception, MobileNet, MobileNetV2, MobileNetV3Small, MobileNetV3Large, EfficientNetB0, ResNet50V2, ResNet101V2, and ResNet152V2 |
| Parameters Initializer | ImageNet |
| Parameters Optimizers | Adam, NAdam, Ftrl, AdaDelta, AdaGrad, AdaMax, RMSProp, and SGD |
| Output Activation Function | SoftMax |
| Model Learn Ratios | [0 : 5 : 100]% |
| Batch Sizes | [8 : 8 : 100] |
| Dropout Ratios | [1 : 1 : 60]% |
| Number of Epochs | 8 Epochs |
| Performance Metrics | Accuracy, Loss, Precision, F1-score, AUC, Dice Coef., IoU Coef., Specificity, and Recall |
| Number of Iterations | 20 Iterations |
| Population Size | 10 Candidates |
| Split Ratio | 85% to 15% |
| Image Size | (64, 64, 3) |
| Data Augmentation | Applied (in Two Stages) |
| Training Environment | Google Colab (using its GPU) |
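The hyperparameter grids in the table are discrete (e.g., batch sizes [8 : 8 : 100], dropout ratios [1 : 1 : 60]%, model learn ratios [0 : 5 : 100]%, eight optimizers), while MRFO operates on continuous positions. A hypothetical decoding from a continuous candidate in [0, 1)^4 onto these grids is sketched below; the paper's exact encoding may differ, so the `decode` mapping is an illustrative assumption:

```python
# Discrete hyperparameter grids as listed in the configuration table.
OPTIMIZERS = ["Adam", "NAdam", "Ftrl", "AdaDelta", "AdaGrad", "AdaMax", "RMSProp", "SGD"]
BATCH_SIZES = list(range(8, 101, 8))          # [8 : 8 : 100]
DROPOUTS = [i / 100 for i in range(1, 61)]    # [1 : 1 : 60]%
LEARN_RATIOS = list(range(0, 101, 5))         # [0 : 5 : 100]%

def decode(candidate):
    """Hypothetical mapping: each component in [0, 1) indexes into its grid."""
    def pick(value, grid):
        return grid[min(int(value * len(grid)), len(grid) - 1)]
    po, bs, dr, mlr = candidate
    return {
        "optimizer": pick(po, OPTIMIZERS),
        "batch_size": pick(bs, BATCH_SIZES),
        "dropout": pick(dr, DROPOUTS),
        "learn_ratio": pick(mlr, LEARN_RATIOS),
    }

print(decode([0.99, 0.5, 0.1, 0.3]))
```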
The performance metrics used.
| Metric | Definition | Formula |
|---|---|---|
| Accuracy | The ratio of correct predictions to all predictions made by the model. | (TP + TN) / (TP + TN + FP + FN) |
| Precision | The ratio of true positive predictions to all positive predictions. | TP / (TP + FP) |
| Recall or Sensitivity | The ratio of true positive predictions to all actual positive samples. | TP / (TP + FN) |
| F1-score | Twice the product of precision and recall divided by their sum. | 2 × (Precision × Recall) / (Precision + Recall) |
| Specificity | The ratio of true negative predictions to all actual negative samples. | TN / (TN + FP) |
| AUC | The area under the curve obtained by plotting the True Positive Rate (TPR) on the y-axis against the False Positive Rate (FPR) on the x-axis. | |
| IoU Coefficient | The ratio of the area of intersection to the area of union. | TP / (TP + FP + FN) |
| Dice Coefficient | Twice the area of intersection divided by the total size of both sets. | 2 × TP / (2 × TP + FP + FN) |
| Loss | The distance between the true values of the problem and the values predicted by the model. | |
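These metrics all derive from the confusion-matrix counts. A minimal sketch using the standard definitions (the example counts are illustrative, not from the paper):

```python
def metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # a.k.a. sensitivity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "specificity": tn / (tn + fp),
        "iou": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Illustrative counts only.
m = metrics(tp=90, fp=5, tn=95, fn=10)
print(m["recall"], m["specificity"])  # 0.9 0.95
```

For the 4-class task, the paper's reported values correspond to aggregating these quantities over the classes (e.g., per-class one-vs-rest counts).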
DenseNet121 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | RMSProp | 32 | 0.600 | 10% | 0.2614 | 89.59% | 0.8963 | 89.96% | 89.31% | 96.68% | 0.9866 | 0.8931 | 0.8953 | 0.9110 |
| 2 | AdaGrad | 32 | 0.025 | 25% | 0.2771 | 90.36% | 0.9044 | 90.69% | 90.20% | 96.91% | 0.9854 | 0.9020 | 0.9141 | 0.9249 |
| 3 | AdaMax | 32 | 0.600 | 10% | 0.2529 | 90.71% | 0.9057 | 90.93% | 90.22% | 97.00% | 0.9875 | 0.9022 | 0.8996 | 0.9149 |
| 4 | AdaMax | 24 | 0.550 | 10% | 0.2392 | 90.82% | 0.9066 | 91.12% | 90.22% | 97.07% | 0.9891 | 0.9022 | 0.8977 | 0.9143 |
| 5 | NAdam | 24 | 0.025 | 15% | 0.2392 | 91.56% | 0.9160 | 91.78% | 91.43% | 97.27% | 0.9885 | 0.9143 | 0.9111 | 0.9245 |
| 6 | AdaMax | 48 | 0.325 | 20% | 0.2810 | 91.69% | 0.9177 | 91.99% | 91.56% | 97.34% | 0.9847 | 0.9156 | 0.9258 | 0.9345 |
| 7 | AdaMax | 72 | 0.075 | 50% | 0.2292 | 91.99% | 0.9208 | 92.29% | 91.88% | 97.44% | 0.9895 | 0.9188 | 0.9129 | 0.9263 |
| 8 | Adam | 16 | 0.000 | 15% | 0.2108 | 92.64% | 0.9268 | 92.92% | 92.45% | 97.65% | 0.9909 | 0.9245 | 0.9225 | 0.9344 |
| 9 | AdaMax | 32 | 0.550 | 10% | 0.2002 | 92.69% | 0.9276 | 93.21% | 92.32% | 97.76% | 0.9917 | 0.9232 | 0.9108 | 0.9260 |
| 10 | AdaMax | 32 | 0.550 | 10% | 0.1941 | 92.76% | 0.9272 | 92.96% | 92.49% | 97.67% | 0.9927 | 0.9249 | 0.9129 | 0.9277 |
| 11 | Adam | 24 | 0.000 | 20% | 0.1897 | 93.03% | 0.9297 | 93.42% | 92.54% | 97.83% | 0.9928 | 0.9254 | 0.9093 | 0.9255 |
| 12 | NAdam | 24 | 0.025 | 15% | 0.2334 | 93.18% | 0.9321 | 93.34% | 93.09% | 97.79% | 0.9891 | 0.9309 | 0.9420 | 0.9487 |
| 13 | Adam | 64 | 0.025 | 55% | 0.1846 | 94.61% | 0.9460 | 94.76% | 94.45% | 98.26% | 0.9920 | 0.9445 | 0.9500 | 0.9562 |
| 14 | AdaMax | 40 | 0.075 | 55% | 0.1494 | 96.04% | 0.9604 | 96.08% | 96.01% | 98.69% | 0.9945 | 0.9601 | 0.9666 | 0.9701 |
| 15 | AdaMax | 40 | 0.075 | 55% | 0.1268 | 97.39% | 0.9738 | 97.44% | 97.33% | 99.15% | 0.9962 | 0.9733 | 0.9750 | 0.9783 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
DenseNet121 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.1053 | 0.2908 | −0.5941 |
| Accuracy | 0.2305 | −0.4626 | 0.7213 |
| F1 | 0.2410 | −0.4763 | 0.7279 |
| Precision | 0.2282 | −0.4580 | 0.7132 |
| Recall | 0.2517 | −0.4921 | 0.7397 |
| Specificity | 0.2251 | −0.4526 | 0.7095 |
| AUC | 0.0421 | −0.2492 | 0.5048 |
| Sensitivity | 0.2517 | −0.4921 | 0.7397 |
| IoU Coef | 0.2899 | −0.5523 | 0.7699 |
| Dice Coef | 0.2781 | −0.5439 | 0.7694 |
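The correlation tables appear to report, for each architecture, the linear association between each searched hyperparameter and each metric across the top-ranked runs; plain Pearson coefficients reproduce this kind of analysis. A sketch, using the model-learn-ratio and accuracy columns from the first five DenseNet121 rows above (whether the paper uses Pearson correlation exactly is an assumption):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Model-learn-ratio (%) and accuracy (%) from the first five DenseNet121 rows.
mlr = [10, 25, 10, 10, 15]
accuracy = [89.59, 90.36, 90.71, 90.82, 91.56]
print(round(pearson(mlr, accuracy), 4))
```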
DenseNet169 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | AdaGrad | 96 | 0.000 | 100% | 0.1524 | 94.94% | 0.9491 | 95.03% | 94.79% | 98.35% | 0.9943 | 0.9479 | 0.9517 | 0.9580 |
| 2 | AdaMax | 56 | 0.000 | 60% | 0.1864 | 94.95% | 0.9493 | 95.04% | 94.82% | 98.35% | 0.9925 | 0.9482 | 0.9527 | 0.9585 |
| 3 | SGD | 88 | 0.025 | 90% | 0.1349 | 95.09% | 0.9511 | 95.18% | 95.03% | 98.40% | 0.9951 | 0.9503 | 0.9538 | 0.9604 |
| 4 | NAdam | 72 | 0.000 | 70% | 0.1410 | 95.28% | 0.9530 | 95.40% | 95.20% | 98.47% | 0.9948 | 0.9520 | 0.9550 | 0.9613 |
| 5 | AdaMax | 96 | 0.000 | 100% | 0.1854 | 95.72% | 0.9569 | 95.71% | 95.67% | 98.57% | 0.9920 | 0.9567 | 0.9638 | 0.9676 |
| 6 | AdaMax | 56 | 0.000 | 55% | 0.1659 | 95.85% | 0.9594 | 96.01% | 95.86% | 98.67% | 0.9940 | 0.9586 | 0.9643 | 0.9684 |
| 7 | NAdam | 72 | 0.000 | 70% | 0.1266 | 95.91% | 0.9587 | 95.97% | 95.77% | 98.66% | 0.9961 | 0.9577 | 0.9568 | 0.9635 |
| 8 | NAdam | 96 | 0.000 | 90% | 0.1052 | 96.05% | 0.9605 | 96.11% | 95.99% | 98.70% | 0.9974 | 0.9599 | 0.9561 | 0.9634 |
| 9 | RMSProp | 64 | 0.025 | 70% | 0.1698 | 96.13% | 0.9609 | 96.13% | 96.06% | 98.71% | 0.9943 | 0.9606 | 0.9667 | 0.9704 |
| 10 | NAdam | 88 | 0.000 | 85% | 0.1120 | 96.65% | 0.9667 | 96.72% | 96.62% | 98.91% | 0.9961 | 0.9662 | 0.9718 | 0.9751 |
| 11 | NAdam | 88 | 0.000 | 85% | 0.0975 | 96.85% | 0.9688 | 96.97% | 96.80% | 98.99% | 0.9962 | 0.9680 | 0.9696 | 0.9737 |
| 12 | NAdam | 56 | 0.000 | 55% | 0.0854 | 97.36% | 0.9735 | 97.40% | 97.31% | 99.13% | 0.9968 | 0.9731 | 0.9734 | 0.9774 |
| 13 | AdaMax | 64 | 0.000 | 65% | 0.0823 | 98.05% | 0.9806 | 98.07% | 98.04% | 99.36% | 0.9969 | 0.9804 | 0.9811 | 0.9835 |
| 14 | SGD | 88 | 0.025 | 90% | 0.0617 | 98.38% | 0.9838 | 98.40% | 98.36% | 99.47% | 0.9981 | 0.9836 | 0.9820 | 0.9847 |
| 15 | SGD | 88 | 0.025 | 90% | 0.0523 | 98.47% | 0.9849 | 98.50% | 98.47% | 99.50% | 0.9983 | 0.9847 | 0.9860 | 0.9879 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
DenseNet169 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.2079 | −0.2777 | −0.1250 |
| Accuracy | 0.0105 | 0.3294 | −0.0119 |
| F1 | 0.0068 | 0.3264 | −0.0180 |
| Precision | 0.0037 | 0.3202 | −0.0227 |
| Recall | 0.0094 | 0.3315 | −0.0139 |
| Specificity | 0.0050 | 0.3222 | −0.0209 |
| AUC | 0.1963 | 0.3047 | 0.0951 |
| Sensitivity | 0.0094 | 0.3315 | −0.0139 |
| IoU Coef | −0.0308 | 0.3537 | −0.0244 |
| Dice Coef | −0.0175 | 0.3557 | −0.0202 |
DenseNet201 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SGD | 72 | 0.000 | 35% | 0.2268 | 92.58% | 0.9268 | 92.90% | 92.46% | 97.65% | 0.9885 | 0.9246 | 0.9275 | 0.9374 |
| 2 | RMSProp | 56 | 0.050 | 50% | 2.0370 | 92.60% | 0.9256 | 92.63% | 92.49% | 97.55% | 0.9812 | 0.9249 | 0.9390 | 0.9451 |
| 3 | AdaMax | 80 | 0.125 | 60% | 0.2259 | 93.25% | 0.9326 | 93.37% | 93.15% | 97.80% | 0.9892 | 0.9315 | 0.9372 | 0.9448 |
| 4 | SGD | 80 | 0.000 | 45% | 0.1912 | 93.50% | 0.9344 | 93.55% | 93.32% | 97.86% | 0.9925 | 0.9332 | 0.9415 | 0.9488 |
| 5 | AdaMax | 56 | 0.000 | 25% | 0.1833 | 93.59% | 0.9361 | 93.76% | 93.46% | 97.93% | 0.9933 | 0.9346 | 0.9361 | 0.9452 |
| 6 | SGD | 88 | 0.000 | 40% | 0.1874 | 93.80% | 0.9383 | 93.91% | 93.76% | 97.97% | 0.9928 | 0.9376 | 0.9456 | 0.9521 |
| 7 | SGD | 80 | 0.000 | 40% | 0.2059 | 93.93% | 0.9397 | 94.17% | 93.78% | 98.06% | 0.9898 | 0.9378 | 0.9411 | 0.9485 |
| 8 | AdaMax | 56 | 0.000 | 25% | 0.1776 | 94.10% | 0.9417 | 94.38% | 93.96% | 98.13% | 0.9931 | 0.9396 | 0.9429 | 0.9508 |
| 9 | RMSProp | 80 | 0.000 | 40% | 0.2449 | 94.99% | 0.9497 | 95.00% | 94.95% | 98.33% | 0.9886 | 0.9495 | 0.9616 | 0.9646 |
| 10 | AdaMax | 56 | 0.000 | 25% | 0.1660 | 95.02% | 0.9504 | 95.15% | 94.93% | 98.39% | 0.9929 | 0.9493 | 0.9526 | 0.9586 |
| 11 | SGD | 88 | 0.000 | 40% | 0.1551 | 95.15% | 0.9516 | 95.24% | 95.07% | 98.42% | 0.9944 | 0.9507 | 0.9530 | 0.9590 |
| 12 | SGD | 88 | 0.000 | 50% | 0.1193 | 96.09% | 0.9608 | 96.23% | 95.94% | 98.75% | 0.9960 | 0.9594 | 0.9603 | 0.9660 |
| 13 | SGD | 80 | 0.000 | 40% | 0.1104 | 96.14% | 0.9615 | 96.28% | 96.02% | 98.76% | 0.9967 | 0.9602 | 0.9595 | 0.9658 |
| 14 | RMSProp | 72 | 0.000 | 40% | 0.1356 | 96.27% | 0.9630 | 96.37% | 96.24% | 98.79% | 0.9952 | 0.9624 | 0.9653 | 0.9696 |
| 15 | SGD | 80 | 0.000 | 45% | 0.1153 | 96.69% | 0.9675 | 96.90% | 96.61% | 98.97% | 0.9958 | 0.9661 | 0.9667 | 0.9711 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
DenseNet201 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.4154 | 0.3397 | 0.2799 |
| Accuracy | 0.3403 | −0.3941 | −0.0038 |
| F1 | 0.3359 | −0.3995 | −0.0169 |
| Precision | 0.3299 | −0.4090 | −0.0302 |
| Recall | 0.3415 | −0.3895 | −0.0032 |
| Specificity | 0.3294 | −0.4085 | −0.0301 |
| AUC | 0.3475 | −0.4665 | −0.2434 |
| Sensitivity | 0.3415 | −0.3895 | −0.0032 |
| IoU Coef | 0.3457 | −0.3346 | 0.1004 |
| Dice Coef | 0.3447 | −0.3607 | 0.0696 |
Xception learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | RMSProp | 80 | 0.525 | 70% | 0.1574 | 95.12% | 0.9516 | 95.28% | 95.03% | 98.43% | 0.9933 | 0.9503 | 0.9560 | 0.9617 |
| 2 | NAdam | 24 | 0.100 | 15% | 0.1757 | 95.26% | 0.9530 | 95.37% | 95.23% | 98.46% | 0.9925 | 0.9523 | 0.9594 | 0.9634 |
| 3 | RMSProp | 96 | 0.600 | 75% | 0.2901 | 95.50% | 0.9549 | 95.51% | 95.47% | 98.50% | 0.9894 | 0.9547 | 0.9638 | 0.9669 |
| 4 | RMSProp | 88 | 0.550 | 70% | 0.1571 | 95.69% | 0.9571 | 95.84% | 95.58% | 98.62% | 0.9946 | 0.9558 | 0.9630 | 0.9671 |
| 5 | NAdam | 24 | 0.125 | 20% | 0.1279 | 95.83% | 0.9572 | 95.82% | 95.61% | 98.61% | 0.9961 | 0.9561 | 0.9536 | 0.9610 |
| 6 | AdaMax | 56 | 0.375 | 55% | 0.1394 | 95.89% | 0.9582 | 95.95% | 95.70% | 98.65% | 0.9943 | 0.9570 | 0.9487 | 0.9573 |
| 7 | NAdam | 24 | 0.150 | 25% | 0.1115 | 96.06% | 0.9606 | 96.11% | 96.01% | 98.71% | 0.9971 | 0.9601 | 0.9565 | 0.9636 |
| 8 | NAdam | 24 | 0.100 | 15% | 0.1138 | 96.13% | 0.9615 | 96.30% | 96.00% | 98.77% | 0.9969 | 0.9600 | 0.9545 | 0.9620 |
| 9 | NAdam | 24 | 0.100 | 15% | 0.1097 | 96.17% | 0.9616 | 96.22% | 96.10% | 98.74% | 0.9968 | 0.9610 | 0.9558 | 0.9631 |
| 10 | NAdam | 24 | 0.100 | 15% | 0.0957 | 96.31% | 0.9628 | 96.37% | 96.19% | 98.79% | 0.9979 | 0.9619 | 0.9573 | 0.9647 |
| 11 | NAdam | 24 | 0.100 | 15% | 0.1031 | 96.41% | 0.9640 | 96.48% | 96.32% | 98.83% | 0.9975 | 0.9632 | 0.9567 | 0.9641 |
| 12 | RMSProp | 96 | 0.600 | 75% | 0.1374 | 96.79% | 0.9676 | 96.78% | 96.74% | 98.93% | 0.9940 | 0.9674 | 0.9728 | 0.9756 |
| 13 | NAdam | 24 | 0.100 | 15% | 0.0861 | 96.99% | 0.9699 | 97.07% | 96.91% | 99.03% | 0.9979 | 0.9691 | 0.9627 | 0.9692 |
| 14 | RMSProp | 88 | 0.550 | 70% | 0.1618 | 97.12% | 0.9710 | 97.15% | 97.05% | 99.05% | 0.9967 | 0.9705 | 0.9723 | 0.9757 |
| 15 | RMSProp | 80 | 0.525 | 70% | 0.0888 | 97.78% | 0.9783 | 97.90% | 97.76% | 99.30% | 0.9961 | 0.9776 | 0.9750 | 0.9789 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
Xception model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | 0.5463 | 0.5275 | 0.5068 |
| Accuracy | 0.1073 | 0.1006 | 0.0943 |
| F1 | 0.1169 | 0.1094 | 0.1023 |
| Precision | 0.1042 | 0.0979 | 0.0923 |
| Recall | 0.1288 | 0.1202 | 0.1117 |
| Specificity | 0.1005 | 0.0943 | 0.0888 |
| AUC | −0.5791 | −0.5741 | −0.5661 |
| Sensitivity | 0.1288 | 0.1202 | 0.1117 |
| IoU Coef | 0.6394 | 0.6098 | 0.5760 |
| Dice Coef | 0.5584 | 0.5324 | 0.5025 |
MobileNet learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SGD | 96 | 0.325 | 100% | 0.3538 | 87.74% | 0.8779 | 88.32% | 87.29% | 96.15% | 0.9790 | 0.8729 | 0.8864 | 0.9013 |
| 2 | Adam | 8 | 0.200 | 30% | 0.2981 | 87.75% | 0.8782 | 88.77% | 86.91% | 96.33% | 0.9824 | 0.8691 | 0.8720 | 0.8921 |
| 3 | NAdam | 40 | 0.050 | 20% | 0.3393 | 89.47% | 0.8948 | 89.64% | 89.33% | 96.56% | 0.9806 | 0.8933 | 0.9121 | 0.9215 |
| 4 | SGD | 96 | 0.325 | 100% | 0.3017 | 89.97% | 0.9002 | 90.41% | 89.64% | 96.83% | 0.9823 | 0.8964 | 0.9031 | 0.9163 |
| 5 | NAdam | 24 | 0.100 | 15% | 0.2593 | 90.09% | 0.9011 | 90.59% | 89.66% | 96.90% | 0.9870 | 0.8966 | 0.8945 | 0.9105 |
| 6 | Adam | 32 | 0.025 | 15% | 0.2775 | 91.44% | 0.9138 | 91.63% | 91.13% | 97.22% | 0.9864 | 0.9113 | 0.9177 | 0.9287 |
| 7 | NAdam | 48 | 0.050 | 25% | 0.2553 | 92.34% | 0.9230 | 92.60% | 92.01% | 97.55% | 0.9861 | 0.9201 | 0.9297 | 0.9383 |
| 8 | AdaMax | 24 | 0.450 | 45% | 0.2351 | 92.39% | 0.9239 | 92.63% | 92.16% | 97.56% | 0.9890 | 0.9216 | 0.9238 | 0.9345 |
| 9 | RMSProp | 24 | 0.475 | 45% | 0.2552 | 92.78% | 0.9282 | 93.04% | 92.62% | 97.69% | 0.9878 | 0.9262 | 0.9289 | 0.9387 |
| 10 | NAdam | 72 | 0.075 | 35% | 0.2226 | 93.43% | 0.9342 | 93.56% | 93.28% | 97.86% | 0.9885 | 0.9328 | 0.9374 | 0.9456 |
| 11 | SGD | 32 | 0.600 | 60% | 0.2281 | 93.68% | 0.9361 | 93.67% | 93.55% | 97.89% | 0.9896 | 0.9355 | 0.9453 | 0.9513 |
| 12 | Adam | 8 | 0.175 | 30% | 0.1777 | 94.35% | 0.9423 | 94.63% | 93.84% | 98.23% | 0.9934 | 0.9384 | 0.9187 | 0.9334 |
| 13 | SGD | 16 | 0.600 | 50% | 0.1740 | 94.51% | 0.9446 | 94.66% | 94.26% | 98.23% | 0.9929 | 0.9426 | 0.9300 | 0.9418 |
| 14 | AdaMax | 24 | 0.450 | 45% | 0.1444 | 95.85% | 0.9576 | 96.06% | 95.46% | 98.69% | 0.9938 | 0.9546 | 0.9483 | 0.9567 |
| 15 | NAdam | 56 | 0.050 | 25% | 0.1247 | 96.40% | 0.9636 | 96.39% | 96.33% | 98.80% | 0.9952 | 0.9633 | 0.9664 | 0.9706 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
MobileNet model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | 0.3770 | −0.1914 | 0.2906 |
| Accuracy | −0.2703 | 0.2201 | −0.2438 |
| F1 | −0.2666 | 0.2226 | −0.2407 |
| Precision | −0.2938 | 0.2259 | −0.2452 |
| Recall | −0.2424 | 0.2191 | −0.2363 |
| Specificity | −0.2990 | 0.2258 | −0.2463 |
| AUC | −0.4568 | 0.2247 | −0.3325 |
| Sensitivity | −0.2424 | 0.2191 | −0.2363 |
| IoU Coef | −0.0551 | 0.1504 | −0.1983 |
| Dice Coef | −0.0981 | 0.1654 | −0.2108 |
MobileNetV2 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Adam | 8 | 0.000 | 0% | 0.6198 | 75.70% | 0.7524 | 78.53% | 72.31% | 93.42% | 0.9376 | 0.7231 | 0.7659 | 0.7967 |
| 2 | Adam | 8 | 0.000 | 0% | 0.6198 | 75.70% | 0.7524 | 78.53% | 72.31% | 93.42% | 0.9376 | 0.7231 | 0.7659 | 0.7967 |
| 3 | Adam | 8 | 0.000 | 0% | 0.6110 | 75.87% | 0.7505 | 78.32% | 72.13% | 93.34% | 0.9399 | 0.7213 | 0.7722 | 0.8016 |
| 4 | Adam | 8 | 0.000 | 0% | 0.6110 | 75.87% | 0.7505 | 78.32% | 72.13% | 93.34% | 0.9399 | 0.7213 | 0.7722 | 0.8016 |
| 5 | Adam | 8 | 0.000 | 0% | 0.6290 | 76.37% | 0.7595 | 79.11% | 73.14% | 93.57% | 0.9370 | 0.7314 | 0.7723 | 0.8015 |
| 6 | Adam | 8 | 0.000 | 0% | 0.6290 | 76.37% | 0.7595 | 79.11% | 73.14% | 93.57% | 0.9370 | 0.7314 | 0.7723 | 0.8015 |
| 7 | Adam | 8 | 0.000 | 0% | 0.6100 | 76.69% | 0.7617 | 78.93% | 73.67% | 93.44% | 0.9398 | 0.7367 | 0.7774 | 0.8058 |
| 8 | Adam | 8 | 0.000 | 0% | 0.6068 | 76.70% | 0.7625 | 79.13% | 73.66% | 93.52% | 0.9399 | 0.7366 | 0.7728 | 0.8031 |
| 9 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 10 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 11 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 12 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 13 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 14 | Adam | 8 | 0.000 | 0% | 0.6045 | 77.02% | 0.7656 | 79.36% | 74.03% | 93.58% | 0.9417 | 0.7403 | 0.7779 | 0.8071 |
| 15 | RMSProp | 16 | 0.425 | 55% | 2.5140 | 81.54% | 0.8158 | 81.69% | 81.48% | 93.91% | 0.9352 | 0.8148 | 0.8635 | 0.8728 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
MobileNetV2 model correlation results.
| Metric | Batch Size | Dropout Ratio | Model Learn Ratio |
|---|---|---|---|
| Loss | 0.9998 | 0.9998 | 0.9998 |
| Accuracy | 0.9277 | 0.9277 | 0.9277 |
| F1 | 0.9241 | 0.9241 | 0.9241 |
| Precision | 0.8682 | 0.8682 | 0.8682 |
| Recall | 0.9397 | 0.9397 | 0.9397 |
| Specificity | 0.7533 | 0.7533 | 0.7533 |
| AUC | −0.5546 | −0.5546 | −0.5546 |
| Sensitivity | 0.9397 | 0.9397 | 0.9397 |
| IoU Coef | 0.9838 | 0.9838 | 0.9838 |
| Dice Coef | 0.9793 | 0.9793 | 0.9793 |
MobileNetV3Small learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Adam | 8 | 0.000 | 0% | 0.9894 | 55.61% | 0.4983 | 71.14% | 38.65% | 94.75% | 0.8318 | 0.3865 | 0.5649 | 0.6192 |
| 2 | Adam | 8 | 0.000 | 0% | 0.9894 | 55.61% | 0.4983 | 71.14% | 38.65% | 94.75% | 0.8318 | 0.3865 | 0.5649 | 0.6192 |
| 3 | AdaDelta | 40 | 0.075 | 55% | 0.9877 | 58.12% | 0.5627 | 63.96% | 50.45% | 90.53% | 0.8370 | 0.5045 | 0.6332 | 0.6749 |
| 4 | Adam | 8 | 0.000 | 0% | 1.0320 | 58.49% | 0.3180 | 75.81% | 20.50% | 97.76% | 0.8247 | 0.2050 | 0.5357 | 0.5922 |
| 5 | Adam | 8 | 0.000 | 0% | 1.0320 | 58.49% | 0.3180 | 75.81% | 20.50% | 97.76% | 0.8247 | 0.2050 | 0.5357 | 0.5922 |
| 6 | Adam | 8 | 0.000 | 0% | 1.0320 | 58.49% | 0.3180 | 75.81% | 20.50% | 97.76% | 0.8247 | 0.2050 | 0.5357 | 0.5922 |
| 7 | NAdam | 8 | 0.000 | 25% | 0.9177 | 60.73% | 0.5948 | 64.47% | 55.34% | 89.82% | 0.8647 | 0.5534 | 0.6570 | 0.6989 |
| 8 | Adam | 8 | 0.000 | 0% | 0.9881 | 61.23% | 0.4416 | 70.65% | 32.55% | 95.50% | 0.8416 | 0.3255 | 0.5613 | 0.6159 |
| 9 | Adam | 8 | 0.000 | 0% | 0.9881 | 61.23% | 0.4416 | 70.65% | 32.55% | 95.50% | 0.8416 | 0.3255 | 0.5613 | 0.6159 |
| 10 | Adam | 8 | 0.000 | 0% | 0.9736 | 62.57% | 0.4341 | 73.52% | 31.22% | 96.25% | 0.8492 | 0.3122 | 0.5568 | 0.6131 |
| 11 | Adam | 8 | 0.000 | 0% | 0.9736 | 62.57% | 0.4341 | 73.52% | 31.22% | 96.25% | 0.8492 | 0.3122 | 0.5568 | 0.6131 |
| 12 | AdaDelta | 40 | 0.075 | 55% | 0.9041 | 63.23% | 0.6162 | 68.65% | 56.10% | 91.44% | 0.8636 | 0.5610 | 0.6605 | 0.7007 |
| 13 | AdaGrad | 8 | 0.025 | 40% | 0.7490 | 70.61% | 0.6714 | 78.95% | 58.66% | 94.81% | 0.9046 | 0.5866 | 0.6709 | 0.7157 |
| 14 | RMSProp | 8 | 0.075 | 100% | 1.1210 | 79.15% | 0.7897 | 79.65% | 78.32% | 93.33% | 0.9380 | 0.7832 | 0.8183 | 0.8381 |
| 15 | RMSProp | 8 | 0.075 | 100% | 27.860 | 80.06% | 0.7995 | 80.50% | 79.43% | 93.59% | 0.9270 | 0.7943 | 0.8159 | 0.8365 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
MobileNetV3Small model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.1066 | 0.4354 | 0.5648 |
| Accuracy | −0.1279 | 0.6368 | 0.8362 |
| F1 | 0.1918 | 0.7671 | 0.8984 |
| Precision | −0.5384 | 0.0936 | 0.2804 |
| Recall | 0.2159 | 0.7967 | 0.9284 |
| Specificity | −0.5904 | −0.6059 | −0.5832 |
| AUC | −0.0728 | 0.6555 | 0.8645 |
| Sensitivity | 0.2159 | 0.7967 | 0.9284 |
| IoU Coef | 0.1355 | 0.8106 | 0.9677 |
| Dice Coef | 0.1248 | 0.8031 | 0.9639 |
MobileNetV3Large learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Adam | 8 | 0.000 | 0% | 0.8226 | 68.66% | 0.6448 | 77.23% | 55.62% | 94.52% | 0.8866 | 0.5562 | 0.6415 | 0.6907 |
| 2 | Adam | 8 | 0.000 | 0% | 0.8226 | 68.66% | 0.6448 | 77.23% | 55.62% | 94.52% | 0.8866 | 0.5562 | 0.6415 | 0.6907 |
| 3 | Adam | 8 | 0.000 | 0% | 0.7898 | 70.73% | 0.6601 | 79.48% | 56.74% | 95.12% | 0.8971 | 0.5674 | 0.6457 | 0.6953 |
| 4 | Adam | 8 | 0.000 | 0% | 0.7898 | 70.73% | 0.6601 | 79.48% | 56.74% | 95.12% | 0.8971 | 0.5674 | 0.6457 | 0.6953 |
| 5 | Adam | 8 | 0.000 | 0% | 0.7898 | 70.73% | 0.6601 | 79.48% | 56.74% | 95.12% | 0.8971 | 0.5674 | 0.6457 | 0.6953 |
| 6 | Adam | 8 | 0.000 | 0% | 0.8042 | 71.07% | 0.6517 | 77.69% | 56.44% | 94.60% | 0.8927 | 0.5644 | 0.6352 | 0.6870 |
| 7 | Adam | 8 | 0.000 | 0% | 0.8042 | 71.07% | 0.6517 | 77.69% | 56.44% | 94.60% | 0.8927 | 0.5644 | 0.6352 | 0.6870 |
| 8 | Adam | 8 | 0.000 | 0% | 0.8042 | 71.07% | 0.6517 | 77.69% | 56.44% | 94.60% | 0.8927 | 0.5644 | 0.6352 | 0.6870 |
| 9 | Adam | 8 | 0.000 | 0% | 0.8042 | 71.07% | 0.6517 | 77.69% | 56.44% | 94.60% | 0.8927 | 0.5644 | 0.6352 | 0.6870 |
| 10 | AdaDelta | 8 | 0.125 | 15% | 0.6713 | 71.79% | 0.7056 | 76.87% | 65.40% | 93.45% | 0.9212 | 0.6540 | 0.6968 | 0.7403 |
| 11 | AdaDelta | 8 | 0.125 | 15% | 0.6632 | 71.90% | 0.7065 | 77.63% | 65.00% | 93.76% | 0.9230 | 0.6500 | 0.7059 | 0.7474 |
| 12 | AdaDelta | 8 | 0.100 | 15% | 0.7113 | 71.99% | 0.7028 | 77.77% | 64.30% | 93.87% | 0.9117 | 0.6430 | 0.6955 | 0.7374 |
| 13 | AdaDelta | 8 | 0.125 | 15% | 0.6725 | 72.91% | 0.7204 | 78.43% | 66.77% | 93.88% | 0.9214 | 0.6677 | 0.7166 | 0.7553 |
| 14 | AdaDelta | 8 | 0.125 | 15% | 0.6711 | 73.09% | 0.7202 | 78.62% | 66.62% | 93.97% | 0.9217 | 0.6662 | 0.7163 | 0.7552 |
| 15 | AdaMax | 16 | 0.175 | 20% | 0.5192 | 79.57% | 0.7851 | 83.15% | 74.50% | 94.97% | 0.9531 | 0.7450 | 0.7550 | 0.7931 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
MobileNetV3Large model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.7073 | −0.9595 | −0.9387 |
| Accuracy | 0.8701 | 0.7588 | 0.7283 |
| F1 | 0.7091 | 0.9543 | 0.9381 |
| Precision | 0.8393 | 0.3440 | 0.3038 |
| Recall | 0.6494 | 0.9804 | 0.9681 |
| Specificity | 0.2680 | −0.6046 | −0.6364 |
| AUC | 0.6954 | 0.9539 | 0.9323 |
| Sensitivity | 0.6494 | 0.9804 | 0.9681 |
| IoU Coef | 0.5888 | 0.9843 | 0.9745 |
| Dice Coef | 0.6102 | 0.9836 | 0.9723 |
EfficientNetB0 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Ftrl | 96 | 0.600 | 100% | 1.3860 | 27.61% | 0.0000 | 0.00% | 0.00% | 100.00% | 0.5174 | 0.0000 | 0.4546 | 0.5001 |
| 2 | NAdam | 24 | 0.100 | 15% | 1.4030 | 27.61% | 0.0000 | 0.00% | 0.00% | 100.00% | 0.5263 | 0.0000 | 0.4578 | 0.5029 |
| 3 | Adam | 8 | 0.025 | 5% | 1.3850 | 27.61% | 0.0000 | 0.00% | 0.00% | 100.00% | 0.5283 | 0.0000 | 0.4551 | 0.5007 |
| 4 | AdaMax | 16 | 0.250 | 20% | 1.4160 | 27.61% | 0.1125 | 63.48% | 6.30% | 98.86% | 0.5479 | 0.0630 | 0.4719 | 0.5173 |
| 5 | SGD | 88 | 0.275 | 100% | 1.8270 | 27.77% | 0.2868 | 33.40% | 25.25% | 83.21% | 0.6204 | 0.2525 | 0.5341 | 0.5557 |
| 6 | AdaMax | 64 | 0.400 | 65% | 1.2800 | 27.97% | 0.2946 | 33.80% | 26.22% | 82.82% | 0.7180 | 0.2622 | 0.5146 | 0.5603 |
| 7 | SGD | 88 | 0.375 | 95% | 1.3910 | 28.90% | 0.0230 | 25.92% | 1.22% | 99.34% | 0.5543 | 0.0122 | 0.4625 | 0.5081 |
| 8 | AdaDelta | 48 | 0.150 | 50% | 1.3880 | 31.70% | 0.2163 | 39.42% | 15.12% | 92.22% | 0.6085 | 0.1512 | 0.4891 | 0.5326 |
| 9 | AdaGrad | 48 | 0.050 | 50% | 2.7130 | 36.25% | 0.3615 | 36.23% | 36.07% | 78.83% | 0.6923 | 0.3607 | 0.5642 | 0.5772 |
| 10 | AdaMax | 16 | 0.275 | 20% | 1.2570 | 36.43% | 0.1971 | 92.71% | 11.26% | 99.81% | 0.6712 | 0.1126 | 0.4865 | 0.5354 |
| 11 | AdaGrad | 88 | 0.400 | 60% | 1.2100 | 38.65% | 0.3226 | 41.50% | 26.59% | 87.51% | 0.7317 | 0.2659 | 0.5088 | 0.5591 |
| 12 | AdaGrad | 48 | 0.150 | 80% | 1.1590 | 45.03% | 0.3849 | 51.30% | 31.05% | 90.15% | 0.7643 | 0.3105 | 0.5574 | 0.6049 |
| 13 | AdaMax | 24 | 0.400 | 30% | 4.4170 | 45.29% | 0.4335 | 45.38% | 41.55% | 83.31% | 0.5967 | 0.4155 | 0.5144 | 0.5513 |
| 14 | SGD | 88 | 0.425 | 100% | 1.0970 | 50.95% | 0.4904 | 54.48% | 44.71% | 87.54% | 0.8072 | 0.4471 | 0.6118 | 0.6525 |
| 15 | RMSProp | 64 | 0.375 | 80% | 1.3980 | 51.23% | 0.5070 | 52.92% | 48.72% | 85.55% | 0.8003 | 0.4872 | 0.6478 | 0.6724 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
EfficientNetB0 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.2555 | −0.0024 | −0.2248 |
| Accuracy | 0.1105 | 0.1989 | 0.2437 |
| F1 | 0.2052 | 0.1662 | 0.3176 |
| Precision | −0.1674 | 0.0925 | −0.0458 |
| Recall | 0.2264 | 0.1731 | 0.3315 |
| Specificity | −0.2941 | −0.0617 | −0.3210 |
| AUC | 0.3096 | 0.1485 | 0.3918 |
| Sensitivity | 0.2264 | 0.1731 | 0.3315 |
| IoU Coef | 0.2958 | 0.1116 | 0.4449 |
| Dice Coef | 0.2969 | 0.1630 | 0.4439 |
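Each correlation table reports, per metric, the Pearson correlation between that metric and one hyperparameter across the optimization trials. A minimal sketch of that computation (the trial values below are illustrative, not the study's actual trial data):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative trials: model learn ratio (%) vs. accuracy (%)
mlr = [20, 100, 65, 95, 50, 80]
accuracy = [27.61, 27.77, 27.97, 28.90, 31.70, 45.03]
r = pearson(mlr, accuracy)
```

A positive entry (e.g., the 0.4449 between model learn ratio and IoU above) means trials with a larger value of that hyperparameter tended to score higher on that metric; a negative entry means the opposite.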
ResNet50V2 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | RMSProp | 80 | 0.500 | 25% | 0.3382 | 89.52% | 0.8941 | 89.78% | 89.06% | 96.62% | 0.9797 | 0.8906 | 0.8969 | 0.9114 |
| 2 | AdaMax | 88 | 0.475 | 10% | 0.3767 | 89.53% | 0.8959 | 89.89% | 89.30% | 96.65% | 0.9779 | 0.8930 | 0.9077 | 0.9180 |
| 3 | AdaGrad | 16 | 0.100 | 30% | 0.2124 | 91.81% | 0.9188 | 92.36% | 91.43% | 97.48% | 0.9911 | 0.9143 | 0.9055 | 0.9213 |
| 4 | NAdam | 8 | 0.050 | 15% | 0.2144 | 91.84% | 0.9202 | 92.39% | 91.66% | 97.48% | 0.9903 | 0.9166 | 0.9066 | 0.9222 |
| 5 | AdaGrad | 16 | 0.125 | 35% | 0.2326 | 92.09% | 0.9217 | 92.42% | 91.93% | 97.49% | 0.9891 | 0.9193 | 0.9266 | 0.9362 |
| 6 | NAdam | 8 | 0.050 | 15% | 0.2144 | 92.96% | 0.9302 | 93.13% | 92.92% | 97.71% | 0.9906 | 0.9292 | 0.9335 | 0.9423 |
| 7 | NAdam | 8 | 0.050 | 15% | 0.1843 | 93.33% | 0.9327 | 93.56% | 92.99% | 97.87% | 0.9933 | 0.9299 | 0.9304 | 0.9407 |
| 8 | NAdam | 8 | 0.050 | 15% | 0.2119 | 93.56% | 0.9354 | 93.62% | 93.46% | 97.88% | 0.9905 | 0.9346 | 0.9345 | 0.9436 |
| 9 | AdaMax | 72 | 0.450 | 25% | 0.2266 | 94.23% | 0.9427 | 94.38% | 94.17% | 98.13% | 0.9885 | 0.9417 | 0.9513 | 0.9565 |
| 10 | AdaGrad | 16 | 0.100 | 30% | 0.1717 | 94.35% | 0.9434 | 94.74% | 93.96% | 98.26% | 0.9937 | 0.9396 | 0.9304 | 0.9419 |
| 11 | AdaMax | 72 | 0.450 | 25% | 0.1161 | 96.42% | 0.9641 | 96.50% | 96.32% | 98.84% | 0.9964 | 0.9632 | 0.9632 | 0.9684 |
| 12 | AdaGrad | 16 | 0.100 | 30% | 0.1204 | 96.83% | 0.9678 | 96.87% | 96.69% | 98.96% | 0.9956 | 0.9669 | 0.9721 | 0.9750 |
| 13 | AdaGrad | 16 | 0.100 | 30% | 0.0868 | 96.96% | 0.9700 | 97.14% | 96.87% | 99.05% | 0.9978 | 0.9687 | 0.9637 | 0.9700 |
| 14 | AdaGrad | 16 | 0.100 | 30% | 0.0920 | 97.13% | 0.9716 | 97.31% | 97.01% | 99.11% | 0.9977 | 0.9701 | 0.9600 | 0.9671 |
| 15 | AdaGrad | 16 | 0.100 | 30% | 0.0792 | 97.46% | 0.9737 | 97.60% | 97.14% | 99.20% | 0.9984 | 0.9714 | 0.9635 | 0.9703 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
ResNet50V2 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | 0.5448 | 0.5166 | −0.5133 |
| Accuracy | −0.3539 | −0.3319 | 0.4854 |
| F1 | −0.3611 | −0.3395 | 0.4834 |
| Precision | −0.3719 | −0.3503 | 0.4991 |
| Recall | −0.3509 | −0.3291 | 0.4691 |
| Specificity | −0.3721 | −0.3505 | 0.5021 |
| AUC | −0.6208 | −0.5935 | 0.4764 |
| Sensitivity | −0.3509 | −0.3291 | 0.4691 |
| IoU Coef | −0.1844 | −0.1642 | 0.4113 |
| Dice Coef | −0.2259 | −0.2043 | 0.4341 |
ResNet101V2 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | Adam | 32 | 0.025 | 15% | 0.1214 | 96.23% | 0.9619 | 96.25% | 96.14% | 98.75% | 0.9949 | 0.9614 | 0.9671 | 0.9711 |
| 2 | Adam | 32 | 0.050 | 15% | 0.1114 | 96.34% | 0.9626 | 96.42% | 96.10% | 98.81% | 0.9966 | 0.9610 | 0.9514 | 0.9600 |
| 3 | Adam | 96 | 0.175 | 65% | 0.1203 | 96.58% | 0.9659 | 96.66% | 96.53% | 98.89% | 0.9955 | 0.9653 | 0.9653 | 0.9700 |
| 4 | Adam | 32 | 0.025 | 15% | 0.0924 | 96.81% | 0.9681 | 96.86% | 96.77% | 98.95% | 0.9972 | 0.9677 | 0.9658 | 0.9712 |
| 5 | Adam | 40 | 0.050 | 20% | 0.1036 | 96.95% | 0.9692 | 96.96% | 96.88% | 98.99% | 0.9965 | 0.9688 | 0.9701 | 0.9741 |
| 6 | Adam | 56 | 0.075 | 30% | 0.0928 | 97.21% | 0.9725 | 97.33% | 97.17% | 99.11% | 0.9967 | 0.9717 | 0.9658 | 0.9713 |
| 7 | Adam | 96 | 0.175 | 65% | 0.1150 | 97.33% | 0.9729 | 97.32% | 97.26% | 99.11% | 0.9944 | 0.9726 | 0.9763 | 0.9789 |
| 8 | Adam | 40 | 0.050 | 20% | 0.1052 | 97.40% | 0.9743 | 97.48% | 97.38% | 99.16% | 0.9955 | 0.9738 | 0.9769 | 0.9796 |
| 9 | Adam | 40 | 0.050 | 20% | 0.0882 | 97.52% | 0.9743 | 97.57% | 97.30% | 99.19% | 0.9969 | 0.9730 | 0.9669 | 0.9727 |
| 10 | Adam | 56 | 0.075 | 30% | 0.0823 | 97.54% | 0.9752 | 97.53% | 97.51% | 99.18% | 0.9968 | 0.9751 | 0.9765 | 0.9797 |
| 11 | Adam | 96 | 0.125 | 45% | 0.0788 | 97.57% | 0.9756 | 97.74% | 97.39% | 99.25% | 0.9977 | 0.9739 | 0.9676 | 0.9733 |
| 12 | Adam | 56 | 0.075 | 30% | 0.0762 | 97.57% | 0.9758 | 97.63% | 97.53% | 99.21% | 0.9978 | 0.9753 | 0.9741 | 0.9781 |
| 13 | Adam | 32 | 0.025 | 15% | 0.0706 | 97.61% | 0.9765 | 97.69% | 97.61% | 99.23% | 0.9984 | 0.9761 | 0.9704 | 0.9757 |
| 14 | Adam | 56 | 0.075 | 30% | 0.0673 | 97.76% | 0.9774 | 97.76% | 97.72% | 99.25% | 0.9984 | 0.9772 | 0.9750 | 0.9790 |
| 15 | Adam | 56 | 0.075 | 30% | 0.0673 | 98.47% | 0.9845 | 98.45% | 98.45% | 99.48% | 0.9972 | 0.9845 | 0.9862 | 0.9878 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
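The grids visible in these tables (batch sizes in multiples of 8, dropout ratios in steps of 0.025, learn ratios in steps of 5%) suggest the MRFO search operates over a discretized hyperparameter space. A hypothetical sketch of decoding a continuous candidate position, as a population-based optimizer like MRFO would maintain, into one configuration; the `decode` function, the encoding, and the optimizer ordering are assumptions for illustration, not the paper's actual implementation:

```python
OPTIMIZERS = ["SGD", "RMSProp", "Adam", "NAdam", "AdaMax", "AdaGrad", "AdaDelta"]

def decode(position):
    """Map a candidate position in [0, 1]^4 onto one hyperparameter configuration."""
    po, bs, dr, mlr = (min(max(p, 0.0), 1.0) for p in position)
    return {
        "optimizer": OPTIMIZERS[int(po * (len(OPTIMIZERS) - 1))],
        "batch_size": 8 * (1 + round(bs * 11)),   # grid: 8, 16, ..., 96
        "dropout": 0.025 * round(dr * 21),        # grid: 0.000, 0.025, ..., 0.525
        "learn_ratio": 5 * (2 + round(mlr * 18)), # grid: 10%, 15%, ..., 100%
    }

cfg = decode([0.35, 0.5, 0.1, 0.2])
```

Under such an encoding, the fitness of each decoded configuration (e.g., validation loss) would drive the MRFO chain-, cyclone-, and somersault-foraging updates on the continuous positions.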
ResNet101V2 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | 0.0858 | 0.2366 | 0.2198 |
| Accuracy | 0.1830 | 0.0798 | 0.0957 |
| F1 | 0.1872 | 0.0804 | 0.1002 |
| Precision | 0.2013 | 0.0872 | 0.1000 |
| Recall | 0.1745 | 0.0743 | 0.1008 |
| Specificity | 0.2080 | 0.0941 | 0.1061 |
| AUC | −0.2223 | −0.3566 | −0.3593 |
| Sensitivity | 0.1745 | 0.0743 | 0.1008 |
| IoU Coef | 0.2010 | 0.1405 | 0.1986 |
| Dice Coef | 0.1876 | 0.1170 | 0.1720 |
ResNet152V2 learning and optimization top-1 results.
| # | PO | BS | DR | MLR | Loss | Accuracy | F1 | Precision | Recall | Specificity | AUC | Sensitivity | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SGD | 32 | 0.400 | 35% | 0.2944 | 88.74% | 0.8865 | 89.23% | 88.09% | 96.46% | 0.9838 | 0.8809 | 0.8837 | 0.9009 |
| 2 | NAdam | 64 | 0.050 | 45% | 0.2780 | 88.88% | 0.8892 | 89.44% | 88.42% | 96.52% | 0.9850 | 0.8842 | 0.8803 | 0.8994 |
| 3 | Adam | 32 | 0.025 | 20% | 0.2059 | 93.12% | 0.9316 | 93.24% | 93.10% | 97.75% | 0.9909 | 0.9310 | 0.9360 | 0.9444 |
| 4 | Adam | 32 | 0.025 | 20% | 0.1699 | 94.15% | 0.9418 | 94.47% | 93.90% | 98.17% | 0.9939 | 0.9390 | 0.9326 | 0.9433 |
| 5 | Adam | 40 | 0.025 | 20% | 0.1943 | 94.22% | 0.9424 | 94.30% | 94.17% | 98.10% | 0.9916 | 0.9417 | 0.9479 | 0.9540 |
| 6 | Adam | 40 | 0.025 | 20% | 0.1318 | 95.46% | 0.9551 | 95.62% | 95.40% | 98.54% | 0.9964 | 0.9540 | 0.9515 | 0.9588 |
| 7 | NAdam | 96 | 0.075 | 50% | 0.1089 | 95.87% | 0.9587 | 96.00% | 95.73% | 98.67% | 0.9971 | 0.9573 | 0.9493 | 0.9585 |
| 8 | Adam | 32 | 0.025 | 20% | 0.1553 | 96.03% | 0.9611 | 96.20% | 96.02% | 98.74% | 0.9932 | 0.9602 | 0.9656 | 0.9696 |
| 9 | NAdam | 80 | 0.075 | 50% | 0.1087 | 96.30% | 0.9629 | 96.53% | 96.05% | 98.85% | 0.9963 | 0.9605 | 0.9565 | 0.9639 |
| 10 | Adam | 32 | 0.025 | 20% | 0.1158 | 96.49% | 0.9650 | 96.66% | 96.34% | 98.89% | 0.9956 | 0.9634 | 0.9578 | 0.9646 |
| 11 | Adam | 40 | 0.025 | 20% | 0.1053 | 96.66% | 0.9659 | 96.70% | 96.49% | 98.90% | 0.9960 | 0.9649 | 0.9679 | 0.9725 |
| 12 | NAdam | 96 | 0.075 | 50% | 0.0935 | 96.84% | 0.9687 | 96.98% | 96.77% | 99.00% | 0.9972 | 0.9677 | 0.9629 | 0.9692 |
| 13 | Adam | 40 | 0.025 | 20% | 0.0978 | 96.84% | 0.9688 | 96.95% | 96.82% | 98.99% | 0.9970 | 0.9682 | 0.9625 | 0.9689 |
| 14 | NAdam | 72 | 0.050 | 35% | 0.0970 | 96.92% | 0.9688 | 96.96% | 96.80% | 98.99% | 0.9969 | 0.9680 | 0.9649 | 0.9706 |
| 15 | NAdam | 96 | 0.075 | 50% | 0.1008 | 96.92% | 0.9694 | 97.02% | 96.86% | 99.01% | 0.9967 | 0.9686 | 0.9663 | 0.9713 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; AUC, area under curve; IoU, intersection over union.
ResNet152V2 model correlation results.
| Metric | Batch size | Dropout ratio | Model learn ratio |
|---|---|---|---|
| Loss | −0.4200 | 0.5295 | −0.0916 |
| Accuracy | 0.3011 | −0.5775 | −0.0420 |
| F1 | 0.2995 | −0.5862 | −0.0447 |
| Precision | 0.3077 | −0.5748 | −0.0309 |
| Recall | 0.2914 | −0.5960 | −0.0575 |
| Specificity | 0.3100 | −0.5705 | −0.0272 |
| AUC | 0.3840 | −0.5760 | 0.0391 |
| Sensitivity | 0.2914 | −0.5960 | −0.0575 |
| IoU Coef | 0.2161 | −0.5871 | −0.1328 |
| Dice Coef | 0.2380 | −0.5881 | −0.1108 |
The top-1 results of all reported experiments.
| Model | PO | BS | DR | MLR | Loss | Acc. | F1 | Prec. | Recall | Spec. | AUC | Sen. | IoU | Dice |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DenseNet121 | SGD | 88 | 0.025 | 90% | 0.0523 | 98.47% | 0.9849 | 98.50% | 98.47% | 99.50% | 0.9983 | 0.9847 | 0.9860 | 0.9879 |
| DenseNet169 | AdaMax | 40 | 0.075 | 55% | 0.1268 | 97.39% | 0.9738 | 97.44% | 97.33% | 99.15% | 0.9962 | 0.9733 | 0.9750 | 0.9783 |
| DenseNet201 | SGD | 80 | 0.000 | 45% | 0.1153 | 96.69% | 0.9675 | 96.90% | 96.61% | 98.97% | 0.9958 | 0.9661 | 0.9667 | 0.9711 |
| Xception | RMSProp | 80 | 0.525 | 70% | 0.0888 | 97.78% | 0.9783 | 97.90% | 97.76% | 99.30% | 0.9961 | 0.9776 | 0.9750 | 0.9789 |
| MobileNet | NAdam | 56 | 0.050 | 25% | 0.1247 | 96.40% | 0.9636 | 96.39% | 96.33% | 98.80% | 0.9952 | 0.9633 | 0.9664 | 0.9706 |
| MobileNetV2 | RMSProp | 16 | 0.425 | 55% | 2.5140 | 81.54% | 0.8158 | 81.69% | 81.48% | 93.91% | 0.9352 | 0.8148 | 0.8635 | 0.8728 |
| MobileNetV3Small | RMSProp | 8 | 0.075 | 100% | 27.860 | 80.06% | 0.7995 | 80.50% | 79.43% | 93.59% | 0.9270 | 0.7943 | 0.8159 | 0.8365 |
| MobileNetV3Large | AdaMax | 16 | 0.175 | 20% | 0.5192 | 79.57% | 0.7851 | 83.15% | 74.50% | 94.97% | 0.9531 | 0.7450 | 0.7550 | 0.7931 |
| EfficientNetB0 | RMSProp | 64 | 0.375 | 80% | 1.3980 | 51.23% | 0.5070 | 52.92% | 48.72% | 85.55% | 0.8003 | 0.4872 | 0.6478 | 0.6724 |
| ResNet50V2 | AdaGrad | 16 | 0.100 | 30% | 0.0792 | 97.46% | 0.9737 | 97.60% | 97.14% | 99.20% | 0.9984 | 0.9714 | 0.9635 | 0.9703 |
| ResNet101V2 | Adam | 56 | 0.075 | 30% | 0.0673 | 98.47% | 0.9845 | 98.45% | 98.45% | 99.48% | 0.9972 | 0.9845 | 0.9862 | 0.9878 |
| ResNet152V2 | NAdam | 96 | 0.075 | 50% | 0.1008 | 96.92% | 0.9694 | 97.02% | 96.86% | 99.01% | 0.9967 | 0.9686 | 0.9663 | 0.9713 |
Note:
PO, parameters optimizer; BS, batch size; DR, dropout ratio; MLR, model learn ratio; Acc., accuracy; Prec., precision; Spec., specificity; AUC, area under curve; Sen., sensitivity; IoU, intersection over union.
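The metric columns in the tables above derive from the per-class confusion matrix of the 4-class task (COVID-19, pneumonia bacterial, pneumonia viral, normal), with precision, recall/sensitivity, specificity, and F1 macro-averaged over classes. A minimal sketch of that computation (the confusion matrix below is illustrative, not the study's actual predictions):

```python
def macro_metrics(cm):
    """Macro-averaged metrics from cm, where cm[i][j] counts class-i samples predicted as class j."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    prec = rec = spec = f1 = 0.0
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp
        fn = sum(cm[k]) - tp
        tn = total - tp - fp - fn
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0  # recall == sensitivity
        prec += p / n
        rec += r / n
        spec += (tn / (tn + fp) if tn + fp else 0.0) / n
        f1 += (2 * p * r / (p + r) if p + r else 0.0) / n
    acc = sum(cm[k][k] for k in range(n)) / total
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "specificity": spec, "f1": f1}

# Illustrative 4-class confusion matrix (100 samples per class)
cm = [[95, 2, 2, 1], [1, 96, 2, 1], [2, 2, 95, 1], [0, 1, 1, 98]]
m = macro_metrics(cm)
```

Note that specificity runs high for all models in the tables because, with four classes, the true negatives for any one class dominate; the IoU and Dice coefficients are computed on the soft prediction maps and are not reproduced in this sketch.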
Figure 9The top-1 accuracies of all reported experiments.
Comparison of the proposed technique with other state-of-the-art techniques.
| Research | Classes # | Accuracy | Sensitivity | Specificity | Precision | F1-Score | AUC | Recall |
|---|---|---|---|---|---|---|---|---|
| | 2 | 96.92% | 94.20% | 100.0% | 100.0% | 97.01% | 99.22% | N/A |
| | 3 | 92.40% | 80.00% | N/A | 88.90% | N/A | N/A | N/A |
| | 4 | N/A | 94.52% | 99.35% | N/A | N/A | N/A | N/A |
| | 2 | 96.28% | 97.90% | N/A | 94.80% | N/A | N/A | N/A |
| | 4 | 91.20% | 91.76% | 93.48% | 92.04% | 90.04% | N/A | 91.90% |
| | 4 | 96.23% | 100.0% | 100.0% | N/A | 100.0% | N/A | N/A |
| | 3 | 96.78% | 98.66% | 96.46% | N/A | N/A | N/A | N/A |
| | 2 | 98.00% | 96.00% | 100.0% | 100.0% | 98.00% | 100.0% | N/A |
| | 2 | 96.10% | N/A | N/A | N/A | N/A | N/A | N/A |
| | 2 | 89.33% | 100.0% | N/A | N/A | N/A | N/A | N/A |
| | 4 | 89.60% | N/A | 97.90% | 93.17% | 95.61% | N/A | 98.25% |
| | 3 | 99.40% | 99.30% | 99.20% | N/A | 98.90% | 99.90% | N/A |
| | 3 | 98.70% | N/A | 99.30% | N/A | 98.80% | 99.00% | 98.80% |
| | 3 | 95.33% | 95.33% | N/A | N/A | 95.34% | N/A | N/A |
| OTLD-COVID-19 Approach | 4 | 98.47% | 98.47% | 99.50% | 98.50% | 98.49% | 99.83% | 98.47% |
Figure 10A delta comparison between the current study and related works.