Mazin Abed Mohammed, Belal Al-Khateeb, Mohammed Yousif, Salama A. Mostafa, Seifedine Kadry, Karrar Hameed Abdulkareem, Begonya Garcia-Zapirain.
Abstract
Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises two questions: which COVID-19 diagnostic model should be selected, and which performance criteria should the decision-makers of healthcare organizations consider? A selection scheme is therefore necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization (CSO) algorithm. The CSO is employed to find an optimal set of coefficients using a designed fitness function for evaluating the performance of the deep learning models. The CSO is modified to obtain a good distribution of the selected coefficients by considering the best average fitness. Two datasets are utilized: the first includes 746 computed tomography (CT) images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unenhanced CT lung images for 632 positive COVID-19 cases. Fifteen trained and pretrained deep learning models, assessed with nine evaluation metrics, are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 has an accuracy of 91.46% and an F1-score of 90.49%. For the first dataset, ResNet50 is selected as the optimal deep learning model for identifying COVID-19, with an overall closeness fitness value of 5715.988 on the COVID-19 CT lung images. In contrast, VGG16 is selected as the optimal deep learning model for the second dataset, with an overall closeness fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets.
The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.
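The selection step can be illustrated with a small sketch. The form below is an assumption, not the paper's exact implementation: each metric is multiplied by a CSO-found coefficient, benefit metrics are added, and cost metrics (FPR and MSE) are subtracted, which is consistent with the per-metric contributions and final results reported in the tables in this record.

```python
# Hedged sketch: overall fitness as a coefficient-weighted sum of benefit
# metrics minus cost metrics (FPR, MSE). The function name and dict layout
# are illustrative; the coefficient values below come from the averaged
# CSO run reported for the first dataset.
COST_METRICS = {"FPR", "MSE"}

def overall_fitness(metrics, coefficients):
    """metrics and coefficients are dicts keyed by metric name."""
    score = 0.0
    for name, value in metrics.items():
        term = coefficients[name] * value
        score += -term if name in COST_METRICS else term
    return score

# ResNet50 on the first dataset (metric values from the results table).
resnet50 = {"AUC": 90.78, "CAR": 91.46, "F1": 90.49, "Precision": 89.73,
            "Recall": 88.94, "FPR": 90.92, "PPV": 90.22, "NPV": 90.17,
            "MSE": 0.039}
# Average CSO coefficients over 10 runs (first dataset).
coeffs = {"AUC": 14.51, "CAR": 15.35, "F1": 11.24, "Precision": 10.43,
          "Recall": 9.26, "FPR": 7.82, "PPV": 5.96, "NPV": 4.33, "MSE": 21.06}
```

With these averaged coefficients the score comes out close to the 5715.988 closeness value reported for ResNet50; small differences arise because the paper accumulates per-run contributions before averaging.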
Year: 2022 PMID: 35996653 PMCID: PMC9392599 DOI: 10.1155/2022/1307944
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. The selection approach for the optimal deep learning COVID-19 diagnostic model based on the novel CSO algorithm.
Algorithm 1. Pseudocode for the selection approach for the optimal deep learning COVID-19 diagnostic model based on the novel CSO algorithm.
Figure 2. COVID-19 CT lung scan cases.
Deep learning models.
| No. | DL model | Description | Remark |
|---|---|---|---|
| 1. | CNN | It consists of a set of fully connected layers and convolution layers | Requires few input parameters |
| 2. | DarkNet | Classification model for object detection | Used for real-time object detection |
| 3. | DNN | It has many hidden nodes compared with the conventional neural network | Performs deep nonlinear analysis |
| 4. | GoogleNet | It is an improved DL model for image analysis | Used for object detection with a few input parameters |
| 5. | InceptionResNetV2 | It has a flexible CNN architecture | Used for different types of applications |
| 6. | Inceptionv3 | Third generation of Google's Inception CNN | Used for classifying visual objects for computer vision applications |
| 7. | LSTM | A type of recurrent neural network (RNN) | Used for dealing with sequences of data |
| 8. | MobileNetV2 | A lower complexity and model size DL neural network proposed by Google for mobile phone image processing applications | Used for object detection, classification, and semantic segmentation |
| 9. | NASNet-Large | A CNN modeled to deal with a large scale of image datasets. | Used to classify objects |
| 10. | ResNet34 | A CNN architecture but with shortcuts and bottleneck block mechanisms between layers to speed up solving problems. | Used for deep real-time analysis |
| 11. | ResNet50 | A type of CNN that performs deeper analysis to solve complex problems | The deeper analysis might degrade the accuracy of the network |
| 12. | SAE | A multilayer neural network with a stacked autoencoder | Used for datasets with a small dimension of features. |
| 13. | VGG16 | A CNN with multiple 3 × 3 kernel-sized filters in the convolutional layers | Used for recognition tasks of a large-scale number of images dataset |
| 14. | VGG19 | A CNN with multiple 3 × 3 kernel-sized filters in the convolutional layers with additional layers than the VGG16 | Used for recognition tasks of a large-scale number of images dataset |
| 15. | Xception | An improved version of the Inception family of CNN | Used for classifying visual objects for computer vision applications with a slightly higher accuracy |
Parameters of the COVID-19 deep learning models.
| Model no. | Deep learning model | Tuning parameters |
|---|---|---|
| 1 | CNN | Momentum = 0.5 to 0.9; number of epochs = 0.9; batch size = 32. |
| 2 | DarkNet | batch = 64; momentum = 0.9; learning_rate = 0.000008. |
| 3 | DNN | batch_size = |
| 4 | GoogleNet | Each image must be resized from 647 × 511 × 3 to 227 × 227 × 3 pixels; the dimensions used to train GoogleNet are 224 × 224 × 3 pixels. |
| 5 | InceptionResNetV2 | Outputs = Dense (100, activation = 'softmax') (base_model.output) model = Model (base_model.inputs, outputs). |
| 6 | Inceptionv3 | batch_size = |
| 7 | LSTM | Rule search (evaluation measure) = entropy; minimum rule coverage = 2, maximum rule length = 6. |
| 8 | MobileNetV2 | learning_rate = 0.0001; no. of epochs = 10. |
| 9 | NASNet-large | learning_rate = 0.0002; no. of epochs = 20. |
| 10 | ResNet34 | Optimization method: Adam; momentum: 0.90; weight-decay: 0.0006; dropout: 0.6; batch size: 100; learning rate: 0.02; total no. of epochs: 20. |
| 11 | ResNet50 | Optimization method: Adam; momentum: 0.97; weight-decay: 0.0005; dropout: 0.7; batch size: 100; learning rate: 0.03; total no. of epochs: 30. |
| 12 | SAE | batch_size = |
| 13 | VGG16 | Optimization method: SGD; momentum: 0.90; weight-decay: 0.0004; dropout: 0.6; batch size: 164; learning rate: 0.06; total no. of epochs: 60. |
| 14 | VGG19 | Optimization method: SGD; momentum: 0.97; weight-decay: 0.0005; dropout: 0.3; batch size: 128; learning rate: 0.07; total no. of epochs: 40. |
| 15 | Xception | Optimizer method: SGD; momentum: 0.8; learning rate: 0.035; learning rate decay: decay of rate 0.92 every 4 epochs. |
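As a small organizational sketch (the dictionary keys and helper below are illustrative assumptions; the paper does not publish its code), tuning parameters like those in the table can be kept in one registry so each model's training run is configured from a single place:

```python
# Illustrative hyperparameter registry mirroring a few rows of the table.
# Structure and key names are assumptions for the sketch.
TUNING = {
    "ResNet50": {"optimizer": "Adam", "momentum": 0.97, "weight_decay": 0.0005,
                 "dropout": 0.7, "batch_size": 100, "lr": 0.03, "epochs": 30},
    "VGG16":    {"optimizer": "SGD", "momentum": 0.90, "weight_decay": 0.0004,
                 "dropout": 0.6, "batch_size": 164, "lr": 0.06, "epochs": 60},
    "MobileNetV2": {"lr": 0.0001, "epochs": 10},
}

def config_for(model_name):
    """Return a copy of the tuning parameters for one model."""
    return dict(TUNING[model_name])
```

Keeping the values in data rather than scattered through training scripts makes the 15-model comparison reproducible from one table.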
Algorithm 2. Pseudocode of CSO.
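The paper's CSO pseudocode is not reproduced in this record, but the classic crow search update that CSO builds on can be sketched as follows. The parameter names (flight length `fl`, awareness probability `ap`) and the exact update rule are assumptions from the standard crow search algorithm; the paper modifies the algorithm, so treat this only as a baseline sketch:

```python
import random

def crow_search(fitness, dim, bounds, n_crows=20, iters=100, fl=2.0, ap=0.1):
    """Minimize `fitness` with a basic crow-search-style loop (assumed form)."""
    lo, hi = bounds
    crows = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    memory = [c[:] for c in crows]          # best position each crow remembers
    mem_fit = [fitness(m) for m in memory]
    for _ in range(iters):
        for i in range(n_crows):
            j = random.randrange(n_crows)
            if random.random() >= ap:
                # follow crow j toward its memorized food position
                new = [crows[i][d] + random.random() * fl * (memory[j][d] - crows[i][d])
                       for d in range(dim)]
            else:
                # crow j is "aware": relocate to a random position
                new = [random.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= x <= hi for x in new):   # keep only feasible moves
                crows[i] = new
                f = fitness(new)
                if f < mem_fit[i]:
                    memory[i], mem_fit[i] = new, f
    best = min(range(n_crows), key=lambda i: mem_fit[i])
    return memory[best], mem_fit[best]
```

Because memories only ever improve, the returned value is the best fitness seen over all crows and iterations.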
Unimodal benchmark functions (the equation images were not recoverable; test name, dimension D, search range, and optimum are listed).

| Function | Test name | D | Range | Opt |
|---|---|---|---|---|
| F1 | Sphere | 30 | [−100, 100] | 0 |
| F2 | Schwefel 2.22 | 2 | [−100, 100] | 0 |
| F3 | Schwefel 2.21 | 2 | [−100, 100] | 0 |
| F4 | Three-Hump Camel | 2 | [−5, 5] | 0 |
| F5 | Ackley 2 | 2 | [−32, 32] | −200 |
| F6 | Bohachevsky N.1 | 2 | [−100, 100] | 0 |
| F7 | Booth | 2 | [−10, 10] | 0 |
| F8 | Trid | 6 | [−36, 36] | −50 |
| F9 | Zakharov | 2 | [−5.12, 5.12] | 0 |
| F10 | Drop-Wave | 2 | [−4.5, 4.5] | −1 |
| F11 | Schwefel 2.23 | 2 | [−100, 100] | 0 |
| F12 | Schwefel 2.20 | 2 | [−100, 100] | 0 |
| F13 | Powell | 10 | [−4, 5] | 0 |
| F14 | Power Sum | 4 | [0, 4] | 0 |
| F15 | Rosenbrock | 30 | [−2.048, 2.048] | 0 |
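The unimodal functions in the table are standard test problems from the optimization literature; since the equation images did not survive extraction, a few of them can be written out from their textbook definitions, for example:

```python
import math

def sphere(x):
    """F1, Sphere: sum of squares; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def booth(x, y):
    """F7, Booth: global minimum 0 at (1, 3)."""
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

def schwefel_2_22(x):
    """F2, Schwefel 2.22: sum of absolute values plus their product."""
    return sum(abs(v) for v in x) + math.prod(abs(v) for v in x)
```

Each matches the optimum column of the table at its known minimizer.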
Multimodal benchmark functions (the equation images were not recoverable; type, test name, dimension D, search range, and optimum are listed).

| Function | Type | Test name | D | Range | Opt |
|---|---|---|---|---|---|
| F16 | N | Ackley | 2 | [−10, 10] | 0 |
| F17 | N | Quartic | 10 | [−1.28, 1.28] | 0 + rand |
| F18 | N | Six-Hump Camel | 2 | [−5, 5] | −1.0316 |
| F19 |  | Branin | 2 | [−5, 15] | 0.3979 |
| F20 | N | Goldstein Price | 2 | [−2, 2] | 3 |
| F21 | F | Hartmann 3-D | 3 | [0, 1] | −3.8628 |
| F22 | F | Hartmann 6-D | 6 | [0, 1] | −3.3224 |
| F23 |  | Ackley 3 | 2 | [−32, 32] | −195.629 |
| F24 | N | Bohachevsky N.2 | 2 | [−10, 10] | 0 |
| F25 | N | Bird | 2 | [−2π, 2π] | −106.7645 |
| F26 | N | Cross-in-Tray | 2 | [−10, 10] | −2.06261 |
| F27 | F | Easom | 2 | [−100, 100] | −1 |
| F28 | N | Keane | 2 | [0, 10] | −0.6737 |
| F29 | N | Holder | 2 | [−10, 10] | −19.2085 |
| F30 | N | Michalewicz | 2 | [1.57, 2.21] | −1.8013 |
CSO, GWO, and HHO results on the unimodal benchmark functions (AV = average, STD = standard deviation).

| Name | CSO AV | CSO STD | GWO AV | GWO STD | HHO AV | HHO STD |
|---|---|---|---|---|---|---|
| Sphere | 1.10132 | 2.17067 | 8.32056 | 2.01043 | 5.75505 | 1.21415 |
| Schwefel 2.22 | 5.5237 | 1.5496 | 8.74 | 0 | 1.11321 | 1.51435 |
| Schwefel 2.21 | 1.4406 | 6.7475 | 1.26 | 0 | 2.95204 | 6.83343 |
| Camel3 | 0 | 0 | 0 | 0 | 1.1048 | 2.3653 |
| Ackley2 | −200 | 0 | −200 | 0 | −200 | 0 |
| Bohachevsky N.1 | 0 | 0 | 0 | 0 | 0 | 0 |
| Booth | 0 | 0 | 1.54098 | 1.11392 | 2.36488 | 2.5515 |
| Trid | −50 | 0 | −49.99989 | 8.03012 | −1367.28225 | 4.374233259 |
| Zakharov | 7.9142 | 0 | 0 | 0 | 9.5294 | 2.13084 |
| Drop-Wave | −0.98496333 | 0.027373716 | −0.99574666 | 0.016186579 | −1 | 0 |
| Schwefel 2.23 | 0 | 0 | 0 | 0 | 0 | 0 |
| Schwefel 2.20 | 1.0876 | 5.4846 | 3.21 | 0 | 3.48466 | 4.72514 |
| Powell | 0.001705466 | 0.001128045 | 4.94865 | 6.09827 | 4.9489 | 7.662 |
| Power Sum | 0.051633374 | 0.070129514 | 0.106802381 | 0.261322987 | 1.4218 | 2.4194 |
| Rosenbrock | 7.610173333 | 21.01106746 | 26.74104667 | 0.704408086 | 0.008242153 | 0.010289518 |
CSO, GWO, and HHO results on the multimodal benchmark functions (AV = average, STD = standard deviation).

| Name | CSO AV | CSO STD | GWO AV | GWO STD | HHO AV | HHO STD |
|---|---|---|---|---|---|---|
| Ackley | 8.8818 | 4.01173 | 8.8818 | 4.01173 | −8.8818 | 0 |
| Quartic | 0.031253333 | 0.018374115 | 0.000971688 | 0.000902519 | 0.000445553 | 0.000297211 |
| 6-Hump Camel | −1.0316 | 6.77522 | −1.031628448 | 4.31084 | −1.0316 | 0 |
| Branin | 0.3979 | 0 | 0.397887852 | 5.97628 | 0.397946667 | 9.81495 |
| Goldstein | 3 | 0 | 3.000007881 | 9.28286 | 3 | 0 |
| Hart3 | −3.8628 | 3.16177 | −3.8620812 | 0.001938044 | −3.854683333 | 0.008125864 |
| Hart6 | −3.3224 | 1.35504 | −3.264253912 | 0.099876517 | −2.9311 | 0.07435459 |
| Ackley3 | −195.629 | 5.78152 | −195.6290282 | 2.8506 | −186.4112 | 0 |
| Bohachevsky N.2 | 0 | 0 | 0 | 0 | 0 | 0 |
| Bird | −106.7645 | 7.2269 | −106.1160683 | 3.551734418 | −106.7645 | −106.7645 |
| Cross-in-Tray | −2.06261 | 0 | −2.062611869 | 3.30268 | −2.0626 | 0 |
| Easom | −1 | 0 | −1 | 0 | −0.99998 | 8.16497 |
| Keane | −0.6737 | 1.1292 | −0.6737 | 1.1292 | −0.67367 | 0 |
| Holder | −19.2085 | 3.61345 | −19.20849251 | 8.17787 | −19.2085 | 0 |
| Michalewicz | −1.8013 | 6.77522 | −1.8013 | 6.77522 | −1.8013 | 0 |
Figure 3. The average fitness of the algorithms on the test functions.
Figure 4. The standard deviations of the algorithms on the test functions.
Figure 5. First dataset training accuracy for the ResNet50 model.
Figure 6. First dataset loss values for the ResNet50 model.
The first dataset diagnostic performance outcomes of various pretrained models (values in %, except MSE).

| No | Classifier | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ResNet50 | 90.78 | 91.46 | 90.49 | 89.73 | 88.94 | 90.92 | 90.22 | 90.17 | 0.039 |
| 2 | DarkNet | 80.92 | 85.13 | 85.11 | 83.29 | 80.14 | 82.19 | 83.71 | 81.95 | 0.069 |
| 3 | GoogleNet | 86.99 | 90.35 | 89.42 | 90.21 | 90.11 | 88.92 | 90.47 | 90.16 | 0.044 |
| 4 | MobileNetV2 | 85.47 | 88.36 | 87.69 | 83.33 | 86.91 | 82.14 | 85.25 | 86.94 | 0.038 |
| 5 | Xception | 75.14 | 77.13 | 75.02 | 76.96 | 74.82 | 77.37 | 74.38 | 72.96 | 0.094 |
| 6 | VGG19 | 80.32 | 84.38 | 84.29 | 77.43 | 93.56 | 82.14 | 83.64 | 81.79 | 0.081 |
| 7 | VGG16 | 81.36 | 80.14 | 80.25 | 78.97 | 79.46 | 78.19 | 80.03 | 76.91 | 0.078 |
| 8 | InceptionV3 | 63.40 | 65.98 | 64.81 | 66.47 | 64.91 | 62.99 | 65.11 | 63.73 | 0.055 |
| 9 | ResNet34 | 90.54 | 90.71 | 80.48 | 79.28 | 80.10 | 89.79 | 90.21 | 89.94 | 0.042 |
| 10 | CNNs | 87.47 | 88.15 | 83.36 | 87.55 | 87.89 | 86.36 | 87.99 | 86.69 | 0.057 |
| 11 | DNN | 83.39 | 85.36 | 81.30 | 83.57 | 85.66 | 83.96 | 84.14 | 82.64 | 0.063 |
| 12 | SAE | 80.39 | 82.14 | 79.92 | 84.87 | 81.93 | 80.97 | 83.94 | 83.12 | 0.059 |
| 13 | InceptionResNetV2 | 85.67 | 87.95 | 86.11 | 87.64 | 84.24 | 86.14 | 88.19 | 85.37 | 0.098 |
| 14 | LSTM | 88.25 | 90.54 | 88.36 | 89.25 | 88.97 | 86.91 | 90.11 | 87.67 | 0.096 |
| 15 | NASNet-Large | 79.36 | 80.11 | 78.98 | 77.91 | 78.16 | 75.87 | 79.47 | 82.96 | 0.079 |
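The evaluation metrics in these tables follow standard definitions; as a quick sketch from a binary confusion matrix (AUC and the paper's exact MSE computation are not specified in this record, so they are omitted here):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)          # also reported as PPV
    recall = tp / (tp + fn)             # sensitivity
    return {
        "CAR": (tp + tn) / (tp + fp + tn + fn),   # classification accuracy
        "Precision": precision,
        "Recall": recall,
        "F1": 2 * precision * recall / (precision + recall),
        "FPR": fp / (fp + tn),
        "PPV": precision,
        "NPV": tn / (tn + fn),
    }
```

Multiplying by 100 gives the percentage form used in the tables.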
Figure 7. Second dataset training accuracy for the ResNet50 model.
Figure 8. Second dataset loss values for the ResNet50 model.
The second dataset diagnostic performance outcomes of various pretrained models (values in %, except MSE).

| No | Classifier | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ResNet50 | 90.89 | 89.10 | 87.23 | 88.63 | 87.74 | 88.93 | 88.75 | 90.33 | 0.054 |
| 2 | DarkNet | 82.22 | 84.23 | 83.45 | 82.27 | 82.34 | 84.16 | 83.21 | 82.44 | 0.075 |
| 3 | GoogleNet | 84.76 | 86.76 | 84.91 | 85.20 | 85.10 | 85.72 | 83.33 | 84.36 | 0.059 |
| 4 | MobileNetV2 | 83.12 | 85.33 | 84.19 | 84.37 | 83.49 | 84.87 | 85.10 | 83.22 | 0.047 |
| 5 | Xception | 84.83 | 81.35 | 80.12 | 79.99 | 80.43 | 80.84 | 80.32 | 80.98 | 0.061 |
| 6 | VGG19 | 84.22 | 84.97 | 84.77 | 82.46 | 82.86 | 83.33 | 83.47 | 81.23 | 0.046 |
| 7 | VGG16 | 91.34 | 89.96 | 88.75 | 88.15 | 88.95 | 88.97 | 89.14 | 87.99 | 0.038 |
| 8 | InceptionV3 | 72.21 | 75.34 | 73.81 | 73.47 | 73.83 | 74.87 | 72.34 | 74.84 | 0.069 |
| 9 | ResNet34 | 90.42 | 88.21 | 86.48 | 82.97 | 87.22 | 87.53 | 87.39 | 76.27 | 0.048 |
| 10 | CNNs | 85.29 | 83.65 | 83.10 | 82.55 | 81.76 | 82.40 | 82.77 | 82.74 | 0.050 |
| 11 | DNN | 81.96 | 83.50 | 82.37 | 83.12 | 82.12 | 83.12 | 81.94 | 82.84 | 0.083 |
| 12 | SAE | 83.10 | 82.90 | 81.87 | 80.43 | 81.27 | 81.84 | 81.36 | 81.38 | 0.078 |
| 13 | InceptionResNetV2 | 88.38 | 88.35 | 87.43 | 86.76 | 86.42 | 87.53 | 88.05 | 87.46 | 0.069 |
| 14 | LSTM | 82.44 | 83.24 | 82.33 | 82.58 | 83.09 | 82.77 | 82.55 | 79.24 | 0.099 |
| 15 | NASNet-Large | 81.11 | 82.35 | 80.48 | 80.21 | 80.68 | 81.31 | 81.68 | 82.16 | 0.062 |
CSO results for 10 runs; the final row of each block is the average over the 10 runs.

First dataset:

| Fitness | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE | Speed | Angle |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 71.82 | 14.45 | 17.46 | 10.59 | 8.50 | 10.16 | 9.63 | 5.72 | 5.19 | 18.28 | 0.69 | 53 |
| 81.32 | 14.76 | 12.38 | 17.45 | 9.11 | 10.53 | 6.66 | 5.77 | 3.49 | 19.81 | 0.77 | 129 |
| 77.12 | 14.26 | 18.53 | 13.51 | 7.51 | 9.96 | 8.46 | 6.05 | 3.42 | 18.27 | −3.43 | 53 |
| 70.41 | 13.72 | 16.09 | 9.28 | 7.80 | 9.46 | 9.49 | 6.76 | 5.36 | 22.02 | −2.22 | 45 |
| 77.73 | 14.30 | 18.10 | 11.57 | 12.52 | 6.01 | 6.87 | 4.06 | 4.13 | 22.42 | −2.08 | 131 |
| 72.54 | 14.32 | 18.19 | 8.71 | 12.45 | 11.10 | 8.41 | 4.04 | 4.16 | 18.58 | 3.89 | 59 |
| 73.34 | 13.85 | 11.17 | 10.51 | 10.37 | 8.70 | 8.18 | 6.76 | 3.92 | 26.51 | −3.31 | 57 |
| 73.57 | 15.19 | 11.23 | 10.65 | 9.18 | 9.31 | 7.60 | 8.09 | 6.78 | 21.94 | 3.99 | 67 |
| 78.56 | 14.73 | 17.97 | 9.035 | 13.52 | 10.27 | 5.60 | 6.08 | 3.17 | 19.60 | 2.46 | 57 |
| 76.20 | 15.55 | 12.41 | 11.11 | 13.38 | 7.11 | 7.33 | 6.25 | 3.65 | 23.17 | −2.92 | 48 |
| 75.26 | 14.51 | 15.35 | 11.24 | 10.43 | 9.26 | 7.82 | 5.96 | 4.33 | 21.06 | −0.21 | 69.9 |
Second dataset:

| Fitness | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE | Speed | Angle |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 74.26 | 14.18 | 14.50 | 12.00 | 9.93 | 10.21 | 8.70 | 5.11 | 5.50 | 19.85 | 3.71 | 57 |
| 79.03 | 13.89 | 12.48 | 13.01 | 14.99 | 9.42 | 5.79 | 4.54 | 6.00 | 19.86 | 2.08 | 46 |
| 74.31 | 14.15 | 18.68 | 12.20 | 11.80 | 7.76 | 7.64 | 5.38 | 5.44 | 16.93 | −5.67 | 112 |
| 74.45 | 14.02 | 18.21 | 10.04 | 10.04 | 6.37 | 7.35 | 4.50 | 4.77 | 24.68 | 2.42 | 110 |
| 80.88 | 14.04 | 17.33 | 15.03 | 12.88 | 8.79 | 6.03 | 3.88 | 4.60 | 17.41 | −0.02 | 56 |
| 81.85 | 14.19 | 11.93 | 15.54 | 12.34 | 7.44 | 4.91 | 7.71 | 5.72 | 20.20 | −0.19 | 54 |
| 81.90 | 14.26 | 12.50 | 15.55 | 14.38 | 5.48 | 5.69 | 6.01 | 3.97 | 22.16 | 4.91 | 45 |
| 71.74 | 14.70 | 11.93 | 10.40 | 9.13 | 10.66 | 8.85 | 6.53 | 6.15 | 21.62 | −2.16 | 87 |
| 73.61 | 13.37 | 11.87 | 13.22 | 9.56 | 10.16 | 10.21 | 4.09 | 4.20 | 23.31 | −3.03 | 58 |
| 72.98 | 15.46 | 15.35 | 11.85 | 12.50 | 6.44 | 9.48 | 4.44 | 3.66 | 20.79 | 3.92 | 52 |
| 76.50 | 14.23 | 14.48 | 12.88 | 11.76 | 8.27 | 7.47 | 5.22 | 5.00 | 20.68 | 0.60 | 67.7 |
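The averaged row of each block can be reproduced directly from the 10 runs; a quick sketch using the fitness column of the first block:

```python
# Fitness values of the 10 first-dataset CSO runs, from the table above.
fitness_runs = [71.82, 81.32, 77.12, 70.41, 77.73,
                72.54, 73.34, 73.57, 78.56, 76.20]
mean_fitness = sum(fitness_runs) / len(fitness_runs)
```

This reproduces the 75.26 reported in the averaged row of the first block.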
Applying CSO results to evaluate the deep learning algorithms.

First dataset:

| Algorithm | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE | Final result |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet50 | 1317.77 | 1404.60 | 1017.64 | 936.64 | 824.20 | 711.77 | 537.50 | 390.20 | 0.82 | 5715.99 |
| DarkNet | 1174.64 | 1307.39 | 957.15 | 869.42 | 742.66 | 643.42 | 498.72 | 354.63 | 1.45 | 5259.72 |
| GoogleNet | 1262.75 | 1387.56 | 1005.62 | 941.65 | 835.05 | 696.11 | 538.99 | 390.16 | 0.93 | 5664.74 |
| MobileNetV2 | 1240.69 | 1357.00 | 986.16 | 869.84 | 805.39 | 643.03 | 507.89 | 376.23 | 0.80 | 5499.36 |
| Xception | 1090.73 | 1184.53 | 843.67 | 803.34 | 693.35 | 605.69 | 443.13 | 315.73 | 1.98 | 4766.83 |
| VGG19 | 1165.93 | 1295.87 | 947.92 | 808.25 | 867.02 | 643.03 | 498.30 | 353.94 | 1.71 | 5292.50 |
| VGG16 | 1181.02 | 1230.76 | 902.49 | 824.32 | 736.35 | 612.11 | 476.80 | 332.82 | 1.64 | 5070.82 |
| InceptionV3 | 920.32 | 1013.29 | 728.85 | 693.84 | 601.52 | 493.12 | 387.91 | 275.79 | 1.16 | 4127.24 |
| ResNet34 | 1314.28 | 1393.09 | 905.08 | 827.56 | 742.28 | 702.92 | 537.44 | 389.21 | 0.88 | 5405.14 |
| CNNs | 1269.72 | 1353.77 | 937.46 | 913.89 | 814.47 | 676.07 | 524.22 | 375.14 | 1.20 | 5511.41 |
| DNN | 1210.49 | 1310.92 | 914.30 | 872.34 | 793.81 | 657.28 | 501.28 | 357.62 | 1.33 | 5302.16 |
| SAE | 1166.94 | 1261.47 | 898.78 | 885.91 | 759.24 | 633.87 | 500.09 | 359.70 | 1.24 | 5197.02 |
| InceptionResNetV2 | 1243.59 | 1350.70 | 968.39 | 914.83 | 780.65 | 674.35 | 525.41 | 369.43 | 2.06 | 5476.59 |
| LSTM | 1281.04 | 1390.47 | 993.69 | 931.63 | 824.48 | 680.38 | 536.85 | 379.38 | 2.02 | 5655.16 |
| NASNet-Large | 1151.99 | 1230.30 | 888.21 | 813.26 | 724.31 | 593.95 | 473.46 | 359.00 | 1.66 | 5044.91 |
Second dataset:

| Algorithm | AUC | CAR | F1-score | Precision | Recall | FPR | PPV | NPV | MSE | Final result |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet50 | 1293.48 | 1290.40 | 1124.24 | 1041.93 | 725.85 | 664.05 | 463.27 | 451.61 | 1.12 | 5725.61 |
| DarkNet | 1170.09 | 1219.87 | 1075.52 | 967.16 | 681.18 | 628.43 | 434.35 | 412.17 | 1.55 | 5330.35 |
| GoogleNet | 1206.24 | 1256.51 | 1094.34 | 1001.61 | 704.01 | 640.08 | 434.98 | 421.77 | 1.22 | 5478.14 |
| MobileNetV2 | 1182.90 | 1235.80 | 1085.06 | 991.85 | 690.69 | 633.73 | 444.22 | 416.07 | 0.97 | 5411.87 |
| Xception | 1207.23 | 1178.16 | 1032.60 | 940.36 | 665.37 | 603.64 | 419.26 | 404.87 | 1.26 | 5242.96 |
| VGG19 | 1198.55 | 1230.58 | 1092.53 | 969.39 | 685.48 | 622.23 | 435.71 | 406.12 | 0.95 | 5395.18 |
| VGG16 | 1299.88 | 1302.85 | 1143.83 | 1036.29 | 735.86 | 664.35 | 465.30 | 439.91 | 0.79 | 5758.79 |
| InceptionV3 | 1027.64 | 1091.12 | 951.28 | 863.71 | 610.77 | 559.06 | 377.61 | 374.17 | 1.43 | 4735.81 |
| ResNet34 | 1286.79 | 1277.51 | 1114.57 | 975.39 | 721.55 | 653.60 | 456.17 | 381.32 | 0.99 | 5558.71 |
| CNNs | 1213.79 | 1211.47 | 1071.01 | 970.45 | 676.38 | 615.29 | 432.05 | 413.67 | 1.03 | 5372.49 |
| DNN | 1166.39 | 1209.30 | 1061.60 | 977.15 | 679.36 | 620.67 | 427.72 | 414.17 | 1.72 | 5313.30 |
| SAE | 1182.62 | 1200.61 | 1055.16 | 945.53 | 672.32 | 611.11 | 424.69 | 406.87 | 1.61 | 5275.07 |
| InceptionResNetV2 | 1257.76 | 1279.54 | 1126.81 | 1019.94 | 714.93 | 653.60 | 459.61 | 437.26 | 1.43 | 5640.84 |
| LSTM | 1173.23 | 1205.53 | 1061.08 | 970.80 | 687.38 | 618.05 | 430.90 | 396.17 | 2.05 | 5305.00 |
| NASNet-Large | 1154.30 | 1192.64 | 1037.24 | 942.94 | 667.44 | 607.15 | 426.36 | 410.77 | 1.28 | 5223.26 |
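Once the weighted final results are computed, the selection itself is just an argmax over that column; a sketch using the first-dataset values from the table above:

```python
# Final closeness fitness values (first dataset), from the table above.
final_results = {
    "ResNet50": 5715.99, "DarkNet": 5259.72, "GoogleNet": 5664.74,
    "MobileNetV2": 5499.36, "Xception": 4766.83, "VGG19": 5292.50,
    "VGG16": 5070.82, "InceptionV3": 4127.24, "ResNet34": 5405.14,
    "CNNs": 5511.41, "DNN": 5302.16, "SAE": 5197.02,
    "InceptionResNetV2": 5476.59, "LSTM": 5655.16, "NASNet-Large": 5044.91,
}
best_model = max(final_results, key=final_results.get)
worst_model = min(final_results, key=final_results.get)
```

ResNet50 comes out on top and InceptionV3 last, matching the paper's conclusion for the first dataset.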