Fatih Ozyurt, Turker Tuncer, Abdulhamit Subasi.
Abstract
The new coronavirus disease known as COVID-19 is currently a pandemic that has spread throughout the world. Several methods have been presented to detect COVID-19 disease. Computer vision methods have been widely utilized to detect COVID-19 from chest X-ray and computed tomography (CT) images. This work introduces a model for the automatic detection of COVID-19 using CT images. A novel handcrafted feature generation technique and a hybrid feature selector are used together to achieve better performance. The primary goal of the proposed framework is to achieve a higher classification accuracy than convolutional neural networks (CNNs) using handcrafted features of the CT images. The proposed framework has four fundamental phases: preprocessing; fused dynamic-sized exemplar-based pyramid feature generation; ReliefF and iterative neighborhood component analysis based feature selection; and neural network classification. In the preprocessing phase, CT images are converted into 2D matrices and resized to 256 × 256 images. The proposed feature generation network uses dynamic-sized exemplars and pyramid structures together, and two basic feature generation functions are used to extract statistical and textural features. The selected most informative features are forwarded to an artificial neural network (ANN) and a deep neural network (DNN) for classification. The ANN and DNN models achieved 94.10% and 95.84% classification accuracy, respectively. According to the results obtained on CT images, the proposed fused feature generator and iterative hybrid feature selector achieved the best success rate.
Keywords: COVID-19; Deep neural network; Fused residual dynamic exemplar pyramid model; RFINCA feature selector
Year: 2021 PMID: 33799219 PMCID: PMC7997855 DOI: 10.1016/j.compbiomed.2021.104356
Source DB: PubMed Journal: Comput Biol Med ISSN: 0010-4825 Impact factor: 4.589
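The exemplar-and-pyramid feature generation described in the abstract can be sketched as follows. This is a minimal illustration only: the block-averaged pyramid levels, the fixed 4 × 4 exemplar grid, and the mean/standard-deviation statistics are all assumptions for the sketch, not the paper's exact FRDEPFGN definition, which uses dynamic exemplar sizes and additional textural (LBP-like) features.

```python
import numpy as np

def exemplar_pyramid_features(img, levels=(256, 128, 64, 32), grid=4):
    """Illustrative sketch: block-average an input image (assumed 256x256)
    down to each pyramid level, split each level into a grid x grid set of
    non-overlapping exemplars, and extract two simple statistical features
    (mean, std) per exemplar."""
    img = np.asarray(img, dtype=float)
    feats = []
    for size in levels:
        step = img.shape[0] // size  # block-averaging factor for this level
        level = img.reshape(size, step, size, step).mean(axis=(1, 3))
        ex = size // grid            # exemplar side length at this level
        for i in range(grid):
            for j in range(grid):
                block = level[i * ex:(i + 1) * ex, j * ex:(j + 1) * ex]
                feats.extend([block.mean(), block.std()])
    return np.asarray(feats)  # 4 levels x 16 exemplars x 2 stats = 128 features
```

Concatenating features from several exemplar sizes and pyramid levels is what lets a handcrafted generator capture both local texture and global structure, which is the stated goal of the fused model.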
Comparison of the proposed approach with the previous studies.
| Study | Dataset | Proposed method | Accuracy (%) |
|---|---|---|---|
| – | 777 COVID-19 | DRE-Net | 86 |
| – | 313 COVID-19 | UNet + 3D Deep | 90.8 |
| – | 219 COVID-19 | ResNet + Location | 86.7 |
| – | 325 COVID-19 | M-Inception | 89.5 |
| – | 349 COVID-19, 397 healthy CT images | DenseNet | 84.7 |
| – | 275 COVID-19 | FFT-Gabor | 95.37 |
| – | 349 COVID-19, 397 healthy CT images | CGAN | 82.91 |
| – | 496 COVID-19 | CNN | 94.98 |
| – | 100 normal | DL multitask | 93 |
| – | 564 COVID-19 | VGG16-based lesion-attention DNN | 88.6 |
| – | 313 COVID-19 | UNet | 90.1 |
| – | 413 COVID-19 | ResNet-50 + 2D CNN | 93.02 |
| – | 460 COVID-19 | SqueezeNet | 83 |
| – | 1029 COVID-19 | AH-Net + DenseNet121 | 90.8 |
| – | 53 COVID-19 | SVM | 98.71 |
| – | 51 COVID-19 | UNet++ | 98.85 |
| – | 230 COVID-19 | AD3D-MIL | 97.9 |
| – | 521 COVID-19 | DL | 85.40 |
| – | COVID-19 / other pneumonia / healthy (private) | 3D UNet-based network | 94 |
| – | 284 COVID-19 | CCSHNet | 97.04 |
| – | 219 COVID-19, 1345 pneumonia, 1341 normal images | mAlexNet + BiLSTM | 98.70 |
| – | 361 COVID-19 | InstaCovNet-19 | 99.08 |
| Proposed method | 349 COVID-19, 397 healthy CT images | FRDEPFGN and RFINCA | 95.84 |
Fig. 1 Pictorial demonstration of the CT image dataset.
Fig. 2 Graphical representation of the proposed FRDEPFGN and RFINCA based CT image classification framework.
Fig. 3 Pseudo code of the proposed FRDEPFGN and RFINCA based CT image classification method.
Fig. 4 Graphical representation of LBP feature generation.
Fig. 5 Flow diagram of the RFINCA.
Fig. 6 Graphical representation of the RFINCA procedure.
Fig. 7 Training/validation curve for the ANN.
Fig. 8 Training/validation curve for the DNN.
The calculated accuracy rates (%) of the ANN and DNN for each fold.
| Fold | ANN | DNN |
|---|---|---|
| 1 | 82.43 | 83.78 |
| 2 | 88.0 | 90.67 |
| 3 | 92.0 | 92.0 |
| 4 | 96.0 | 100.0 |
| 5 | 96.0 | 98.67 |
| 6 | 97.33 | 98.67 |
| 7 | 94.67 | 97.33 |
| 8 | 97.30 | 98.65 |
| 9 | 98.65 | 98.65 |
| 10 | 98.65 | 100.0 |
| Average | 94.10 | 95.84 |
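The reported averages follow directly from the per-fold accuracies; a quick check in plain Python, with the values transcribed from the table above:

```python
# Per-fold accuracies (%) transcribed from the table above.
ann = [82.43, 88.0, 92.0, 96.0, 96.0, 97.33, 94.67, 97.30, 98.65, 98.65]
dnn = [83.78, 90.67, 92.0, 100.0, 98.67, 98.67, 97.33, 98.65, 98.65, 100.0]

ann_avg = round(sum(ann) / len(ann), 2)  # 94.1, matching the reported 94.10%
dnn_avg = round(sum(dnn) / len(dnn), 2)  # 95.84, matching the reported 95.84%
```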
Confusion matrix of the ANN classifier (the bottom-right cell is the overall accuracy).
| Actual class | Predicted COVID-19 | Predicted Healthy | Recall |
|---|---|---|---|
| COVID-19 | 326 | 23 | 0.9341 |
| Healthy | 21 | 376 | 0.9471 |
| Precision | 0.9395 | 0.9423 | 0.9410 |
Confusion matrix of the DNN classifier (the bottom-right cell is the overall accuracy).
| Actual class | Predicted COVID-19 | Predicted Healthy | Recall |
|---|---|---|---|
| COVID-19 | 330 | 19 | 0.9456 |
| Healthy | 12 | 385 | 0.9698 |
| Precision | 0.9649 | 0.9530 | 0.9584 |
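The per-class recall and precision in both confusion matrices follow the standard definitions for a 2 × 2 matrix; the sketch below reproduces them (values agree with the tables to rounding):

```python
def cm_metrics(tp, fn, fp, tn):
    """Standard metrics from a 2x2 confusion matrix with rows = actual
    (COVID-19, Healthy) and columns = predicted, as in the tables above."""
    return {
        "recall_covid": tp / (tp + fn),
        "recall_healthy": tn / (fp + tn),
        "precision_covid": tp / (tp + fp),
        "precision_healthy": tn / (fn + tn),
        "accuracy": (tp + tn) / (tp + fn + fp + tn),
    }

ann = cm_metrics(tp=326, fn=23, fp=21, tn=376)  # accuracy ≈ 0.9410
dnn = cm_metrics(tp=330, fn=19, fp=12, tn=385)  # accuracy ≈ 0.9584
```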
The calculated F1-score, geometric mean, and accuracy (%) of the ANN and DNN classifiers.
| Classifier | Class | F1-score | Geometric mean | Accuracy |
|---|---|---|---|---|
| ANN | COVID-19 | 93.68 | – | – |
| | Healthy | 94.47 | – | – |
| | Average | 94.08 | 94.06 | 94.06 |
| DNN | COVID-19 | 95.52 | – | – |
| | Healthy | 96.13 | – | – |
| | Average | 95.83 | 95.76 | 95.77 |
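These scores are consistent with the confusion matrices, assuming F1 is the harmonic mean of precision and recall and the geometric mean is taken over the two class recalls (an inference from the reported numbers, not stated explicitly in this extract). Shown here for the ANN:

```python
import math

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# ANN values taken from the confusion matrix of the ANN classifier.
f1_covid   = 100 * f1(326 / 347, 326 / 349)              # ≈ 93.68
f1_healthy = 100 * f1(376 / 399, 376 / 397)              # ≈ 94.47
g_mean     = 100 * math.sqrt((326 / 349) * (376 / 397))  # ≈ 94.06
```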