Adil Khadidos, Alaa O. Khadidos, Srihari Kannan, Yuvaraj Natarajan, Sachi Nandan Mohanty, Georgios Tsaramirsis.
Abstract
In this paper, a data mining model built on a hybrid deep learning framework is designed to diagnose the medical condition of patients infected with coronavirus disease 2019 (COVID-19). The hybrid model combines a convolutional neural network (CNN) and a recurrent neural network (RNN) and is named the DeepSense method. It is structured as a series of layers that extract and classify features of COVID-19 infection from the lungs. Computed tomography (CT) images serve as the input data, and the classifier eases the classification process by learning the multidimensional input through Expert Hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier achieves higher accuracy than conventional deep learning and machine learning classifiers. The proposed method is validated on three different datasets, with 70%, 80%, and 90% of the data used for training, and the comparison specifically demonstrates the quality of the diagnostic method adopted for predicting COVID-19 infection in a patient.
Keywords: COVID-19; CT images; DeepSense; artificial intelligence; convolutional neural network; prediction
Year: 2020 PMID: 33330341 PMCID: PMC7714903 DOI: 10.3389/fpubh.2020.599550
Source DB: PubMed Journal: Front Public Health ISSN: 2296-2565
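The abstract's CNN+RNN hybrid can be sketched end to end. The following is a minimal illustrative forward pass in plain NumPy, not the authors' DeepSense implementation: the layer counts, channel and hidden sizes, the row-as-timestep sequencing, and the function names are all assumptions, and the weights are random.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(img, kernels, pad=1):
    """Naive 3x3 convolution with zero padding. img: (H, W); kernels: (C, 3, 3)."""
    H, W = img.shape
    p = np.pad(img, pad)
    out = np.empty((len(kernels), H, W))
    for c, k in enumerate(kernels):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return relu(out)

def maxpool2(x):
    """2x2 max pooling on a (C, H, W) feature map."""
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

def simple_rnn(seq, Wx, Wh):
    """Plain tanh RNN over a sequence of feature vectors; returns the final hidden state."""
    h = np.zeros(Wh.shape[0])
    for x in seq:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def hybrid_forward(img, n_classes=2):
    """CNN feature extraction -> RNN over feature-map rows -> softmax."""
    feat = maxpool2(conv2d(img, rng.normal(scale=0.1, size=(4, 3, 3))))
    C, H, W = feat.shape
    seq = feat.transpose(1, 0, 2).reshape(H, C * W)   # each spatial row = one timestep
    h = simple_rnn(seq,
                   rng.normal(scale=0.1, size=(16, C * W)),
                   rng.normal(scale=0.1, size=(16, 16)))
    logits = rng.normal(scale=0.1, size=(n_classes, 16)) @ h
    e = np.exp(logits - logits.max())                 # numerically stable softmax
    return e / e.sum()                                # class probabilities

probs = hybrid_forward(rng.normal(size=(32, 32)))     # toy 32x32 stand-in for a CT slice
```

The sketch only shows how a CNN front end can hand a spatial feature map to an RNN as a sequence; a trained model would learn the kernels and recurrent weights instead of drawing them at random.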
Figure 1. Proposed model for classification.
Figure 2. Proposed DeepSense deep neural network (DNN) architecture.
Figure 3. Results of classification accuracy during training with 70% training data.
Figure 4. Results of classification accuracy during training with 80% training data.
Figure 5. Results of classification accuracy during training with 90% training data.
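The figures compare models trained on 70%, 80%, and 90% splits of each dataset. A minimal sketch of producing such splits over a 1,000-item dataset; the function name and seeding policy are illustrative, not taken from the paper:

```python
import random

def train_test_split(items, train_frac, seed=0):
    """Shuffle a dataset and split it into train/test lists by fraction."""
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)
    cut = int(len(items) * train_frac)
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]

data = list(range(1000))  # stand-in for 1,000 CT images
for frac in (0.7, 0.8, 0.9):
    train, test = train_test_split(data, frac)
    # train and evaluate each classifier on this split
```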
Results of statistical parameters for IEEE8023 with 70% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 55.67145 | 55.97152 | 58.06198 | 58.32304 | 59.68335 | 80.475 |
| F-measure | 38.39159 | 40.49205 | 51.72857 | 51.8886 | 54.26013 | 83.65671 |
| G-mean | 72.54022 | 72.77127 | 74.27161 | 74.31162 | 74.72171 | 85.57814 |
| MAPE | 28.32533 | 25.38368 | 23.98336 | 21.40179 | 20.82166 | 16.1186 |
| Sensitivity | 61.74481 | 65.25659 | 73.16136 | 85.54813 | 86.20828 | 96.25452 |
| Specificity | 74.18159 | 74.37163 | 77.88342 | 77.90342 | 79.27473 | 80.11492 |
ANN, artificial neural network; BPNN, back propagation neural network; DNN, deep neural network; FFNN, feedforward neural network; MAPE, mean absolute percentage error; RNN, recurrent neural network.
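All the tabulated metrics except MAPE can be derived from a binary confusion matrix; MAPE is computed over numeric predictions. A small sketch of the conventional definitions, assuming the paper uses the standard formulas (the function names are illustrative):

```python
import math

def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics (as percentages) from confusion counts."""
    sens = tp / (tp + fn)                       # sensitivity (recall)
    spec = tn / (tn + fp)                       # specificity
    prec = tp / (tp + fp)                       # precision
    acc = (tp + tn) / (tp + fp + tn + fn)
    f_measure = 2 * prec * sens / (prec + sens) # harmonic mean of precision and recall
    g_mean = math.sqrt(sens * spec)             # geometric mean of sens. and spec.
    return {"Accuracy": 100 * acc, "F-measure": 100 * f_measure,
            "G-mean": 100 * g_mean, "Sensitivity": 100 * sens,
            "Specificity": 100 * spec}

def mape(y_true, y_pred):
    """Mean absolute percentage error over numeric predictions."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, `classification_metrics(40, 10, 45, 5)` yields an accuracy of 85% and a sensitivity of about 88.9%.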
Results of statistical parameters for IEEE8023 with 80% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 96.46457 | 97.18473 | 97.21474 | 97.29476 | 97.30476 | 97.43479 |
| F-measure | 52.36871 | 69.72859 | 70.04966 | 72.93131 | 76.16303 | 79.36475 |
| G-mean | 81.88631 | 82.7365 | 84.35786 | 85.91821 | 90.96134 | 92.48168 |
| MAPE | 26.90502 | 25.51371 | 22.74209 | 20.07049 | 10.60537 | 90.12115 |
| Sensitivity | 68.69836 | 70.08967 | 72.86129 | 75.54189 | 84.99801 | 88.59981 |
| Specificity | 96.53459 | 97.32476 | 97.52481 | 97.60483 | 97.62483 | 97.68484 |
Results of statistical parameters for IEEE8023 with 90% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 95.91445 | 95.93445 | 95.94446 | 96.03448 | 96.05448 | 96.11449 |
| F-measure | 77.3833 | 77.51333 | 78.03345 | 79.11469 | 79.79484 | 80.08491 |
| G-mean | 79.43476 | 79.67482 | 79.94488 | 80.9451 | 81.26517 | 81.45622 |
| MAPE | 31.14697 | 30.78688 | 30.26677 | 28.65541 | 28.1553 | 27.83522 |
| Sensitivity | 64.45641 | 64.81649 | 65.33661 | 66.94797 | 67.44808 | 67.76815 |
| Specificity | 94.72318 | 94.7832 | 94.8232 | 96.05448 | 96.46457 | 96.81465 |
Results of statistical parameters for COVID-CT with 70% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 56.26158 | 58.87317 | 61.23469 | 62.605 | 65.84672 | 84.68794 |
| F-measure | 66.74693 | 66.79694 | 67.68814 | 68.80839 | 73.84151 | 79.40476 |
| G-mean | 43.50373 | 56.43162 | 59.4633 | 44.705 | 76.09302 | 85.98823 |
| MAPE | 19.37033 | 16.69873 | 16.60871 | 11.75563 | 10.42533 | 9.275074 |
| Sensitivity | 76.23305 | 78.91465 | 79.00467 | 83.84675 | 85.18805 | 86.33831 |
| Specificity | 73.39141 | 76.27306 | 77.21327 | 80.37497 | 82.37642 | 84.46789 |
ANN, artificial neural network; BPNN, back propagation neural network; COVID-19, coronavirus disease 2019; DNN, deep neural network; FFNN, feedforward neural network; MAPE, mean absolute percentage error; RNN, recurrent neural network.
Results of statistical parameters for COVID-CT with 80% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 97.73486 | 97.75486 | 97.77486 | 97.77486 | 97.78487 | 97.78487 |
| F-measure | 89.19995 | 90.63127 | 90.7813 | 91.29141 | 91.50146 | 92.1416 |
| G-mean | 93.27286 | 96.67462 | 97.22474 | 97.52481 | 97.66484 | 97.66484 |
| MAPE | 86.64838 | 27.01504 | 20.25053 | 9.275074 | 54.63022 | 21.0017 |
| Sensitivity | 88.94989 | 95.58337 | 96.68462 | 97.26475 | 97.54481 | 97.55482 |
| Specificity | 96.76464 | 96.78464 | 96.78464 | 96.78464 | 96.78464 | 97.42479 |
Results of statistical parameters for COVID-CT with 90% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 97.37478 | 97.37478 | 97.45479 | 97.45479 | 97.4748 | 97.52481 |
| F-measure | 85.88821 | 86.00823 | 87.94967 | 87.97967 | 89.33998 | 89.34998 |
| G-mean | 94.03303 | 94.03303 | 94.40311 | 94.47313 | 94.8032 | 94.84321 |
| MAPE | 70.79983 | 70.61979 | 62.72503 | 61.39473 | 54.04008 | 53.35993 |
| Sensitivity | 90.53124 | 90.53124 | 91.34143 | 91.47146 | 92.21162 | 92.28164 |
| Specificity | 97.4648 | 97.4748 | 97.56482 | 97.56482 | 97.65484 | 97.65484 |
Results of statistical parameters for CORD-19 with 70% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 59.07321 | 65.85673 | 68.8784 | 74.09157 | 77.93343 | 82.41643 |
| F-measure | 69.68858 | 69.93964 | 70.10968 | 70.28972 | 74.85174 | 80.39498 |
| G-mean | 69.98965 | 70.2197 | 71.94009 | 74.00155 | 76.4631 | 79.21471 |
| MAPE | 68.11823 | 64.47642 | 57.75191 | 39.63186 | 36.77022 | 34.91881 |
| Sensitivity | 77.52334 | 71.14991 | 71.87007 | 73.64147 | 73.84151 | 80.69505 |
| Specificity | 70.39974 | 72.28016 | 75.35185 | 80.58502 | 81.89631 | 82.30641 |
ANN, artificial neural network; BPNN, back propagation neural network; CORD-19, COVID-19 Open Research Dataset Challenge; COVID-19, coronavirus disease 2019; DNN, deep neural network; FFNN, feedforward neural network; MAPE, mean absolute percentage error; RNN, recurrent neural network.
Results of statistical parameters for CORD-19 with 80% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 93.97301 | 94.12305 | 94.20307 | 94.25308 | 94.43312 | 94.44312 |
| F-measure | 58.25303 | 60.04343 | 60.3835 | 60.84361 | 62.37495 | 62.51498 |
| G-mean | 79.1447 | 79.65481 | 80.18493 | 80.41498 | 81.52623 | 81.98633 |
| MAPE | 29.90669 | 29.17552 | 28.28533 | 27.93525 | 26.10384 | 25.28365 |
| Sensitivity | 65.69669 | 66.42685 | 67.31805 | 67.67813 | 69.49854 | 70.32973 |
| Specificity | 95.2533 | 95.41334 | 95.42334 | 95.47335 | 95.51336 | 95.56337 |
Results of statistical parameters for CORD-19 with 90% training data on 1,000 images.
| Metric (%) | Classifier 1 | Classifier 2 | Classifier 3 | Classifier 4 | Classifier 5 | Classifier 6 |
|---|---|---|---|---|---|---|
| Accuracy | 97.44379 | 97.44379 | 97.52381 | 97.52381 | 97.54381 | 97.59382 |
| F-measure | 85.94922 | 86.06925 | 88.01168 | 88.04169 | 89.40299 | 89.41399 |
| G-mean | 94.09904 | 94.09904 | 94.47013 | 94.54014 | 94.87022 | 94.91022 |
| MAPE | 70.84984 | 70.6698 | 62.77004 | 61.43874 | 54.07909 | 53.39794 |
| Sensitivity | 90.59526 | 90.59526 | 91.40644 | 91.53647 | 92.27664 | 92.34765 |
| Specificity | 97.53381 | 97.54381 | 97.63383 | 97.63383 | 97.72385 | 97.72385 |