| Literature DB >> 35634075 |
Firoozeh Abolhasani Zadeh, Mohammadreza Vazifeh Ardalani, Ali Rezaei Salehi, Roza Jalali Farahani, Mandana Hashemi, Adil Hussein Mohammed.
Abstract
The lungs are the principal focus of COVID-19: the disease induces inflammatory changes in the lungs that can lead to respiratory insufficiency. The resulting reduction in the oxygen supply to the body's cells can, in severe cases, progress to multiorgan failure with a high mortality rate. Radiological pulmonary evaluation is a vital part of therapy for the critically ill COVID-19 patient, but interpreting radiological images is a specialized activity that requires a radiologist, which makes artificial intelligence for radiological image analysis an essential topic. Using a deep machine learning technique to identify morphological differences in the lungs of COVID-19-infected patients could yield promising results on digital chest X-ray images: computer vision algorithms may detect minor differences in digital images that are not apparent to the human eye. This paper uses machine learning methods to diagnose COVID-19 on chest X-rays, with very promising findings. The dataset comprises COVID-19-enhanced X-ray images for disease detection using chest X-ray images, gathered from two publicly accessible datasets. Features are extracted with gray-level co-occurrence matrix methods, and patients are classified with k-nearest neighbor, support vector machine, linear discriminant analysis, naïve Bayes, and convolutional neural network methods. According to the findings, the convolutional neural network, whose efficiency is linked to the imaging modality with less human involvement, outperforms the other traditional machine learning approaches.
Year: 2022 PMID: 35634075 PMCID: PMC9131703 DOI: 10.1155/2022/3035426
Source DB: PubMed Journal: Comput Intell Neurosci
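The gray-level co-occurrence matrix (GLCM) features mentioned in the abstract count how often pairs of gray levels occur at a fixed pixel offset, then reduce that matrix to texture statistics such as contrast, energy, and homogeneity. A minimal pure-Python sketch for a single horizontal offset, using a hypothetical 4-level patch (not the paper's data or code):

```python
from collections import Counter

def glcm(image, dr=0, dc=1, levels=4):
    """Gray-level co-occurrence matrix for one offset (dr, dc),
    normalized to a joint probability table."""
    rows, cols = len(image), len(image[0])
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(image[r][c], image[r2][c2])] += 1
    total = sum(counts.values())
    return [[counts[(i, j)] / total for j in range(levels)]
            for i in range(levels)]

def glcm_features(p):
    """Contrast, energy, and homogeneity of a normalized GLCM."""
    n = len(p)
    contrast = sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
    energy = sum(p[i][j] ** 2 for i in range(n) for j in range(n))
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i in range(n) for j in range(n))
    return contrast, energy, homogeneity

# Hypothetical 4x4 patch quantized to 4 gray levels.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
print(glcm_features(glcm(patch)))
```

In practice a library implementation (e.g., scikit-image's `graycomatrix`/`graycoprops`) would be used over several offsets and angles; the paper does not specify its exact GLCM configuration.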
Literature review of deep learning methods for the classification of COVID-19.
| Approaches | Dataset | Volume | TPR | Acc |
|---|---|---|---|---|
| COVID-Net [ | COVIDx test | 13800 | 0.871 | 0.926 |
| ResNet50; InceptionV3; Inception-ResNetV2 [ | GitHub | 100 | — | 98%; 97%; 87% |
| COVNet [ | Proprietary datasets | 4356 | 0.87 | — |
| Deep learning with X-ray [ | Proprietary datasets | 448 | 0.9866 | 0.9678 |
| COVIDX-Net (VGG19 and DenseNet201) [ | Proprietary datasets | 50 | — | 0.9 |
| Barstugan et al. [ | Proprietary datasets | 150 | — | 0.9968 |
| ResNet50 and SVM [ | GitHub, Kaggle, and Open-i | 158 | 97.29% | 0.9538 |
| SVM and random forests [ | Hospital Israelita Albert Einstein in São Paulo | — | 0.0677 | 0.847 |
| MLT and SVM [ | Montgomery County X-ray Set and COVID Chest X-ray dataset master | 40 | 0.9576 | 0.9748 |
| Li et al. [ | Proprietary | — | 0.8 | 0.87 |
| SMOTE [ | Chest X-ray images (Pneumonia) and COVID-19 public dataset from Italy | 5840 | 0.967 | 0.966 |
| Probabilistic model [ | Kaggle benchmark dataset | 51 | — | 0.994 |
| NLR&RDW-SD [ | Jingzhou Central Hospital | — | 0.9 | 0.857 |
| RF-based model [ | Proprietary | — | — | 0.875 |
| SMOTE [ | Chest X-ray images (Pneumonia) and COVID-19 public dataset from Italy | 5840 | 0.932 | 0.931 |
| iSARF [ | 3 University Hospitals (Tongji, Shanghai, Fudan) | — | 0.907 | 0.879 |
| SMOTE [ | Chest X-ray images (Pneumonia) and COVID-19 public dataset from Italy | 5840 | 0.947 | 0.947 |
| Modified U-Net structure [ | SIRM | 110 | — | 0.79 |
| Attention U-Net with an adversarial critic model [ | JSRT, Montgomery, and Shenzhen | 1047 | — | 0.96 |
| Inf-Net and Semi-Inf-Net [ | COVID-19 CT segmentation and COVID-19 CT/X-ray collection | 1600 | 0.725 | — |
Figure 1 The confusion matrix.
Figure 2 Example of X-ray images from patients' lungs. (a) COVID-19 patient. (b) Other patients.
Figure 3 Conceptual diagram of the presented method.
Figure 4 The confusion matrices of the deep learning methods used for COVID-19 diagnosis.
Figure 5 The architecture of the presented CNN method for feature classification.
Figure 6 The training process of the CNN approach.
Figure 7 The confusion matrix of the presented CNN method.
Figure 8 The ROC curves for the presented methods.
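The sensitivity, specificity, precision, and accuracy figures reported for each classifier all derive from binary confusion-matrix counts of the kind shown in Figures 1, 4, and 7. A minimal sketch of that arithmetic; the counts below are hypothetical, chosen only so the derived values match the CNN row under an assumed 500-image test split:

```python
def metrics(tp, fn, fp, tn):
    """Sensitivity (TPR), specificity, precision, and accuracy
    from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, precision, accuracy

# Hypothetical counts (not the paper's actual matrix): 250 COVID-19
# and 250 other images, 2 false negatives, no false positives.
print(metrics(tp=248, fn=2, fp=0, tn=250))  # (0.992, 1.0, 1.0, 0.996)
```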
Comparison of the presented methods.
| Methods | Sensitivity (%) | Specificity (%) | Precision (%) | AUC | Accuracy (%) |
|---|---|---|---|---|---|
| KNN | 91.6 | 98.9 | 98.8 | 99.61 | 95.2 |
| SVM | 78.4 | 76.6 | 79.4 | 88.20 | 79.0 |
| NB | 77.9 | 47.5 | 59.7 | 74.24 | 62.7 |
| LDA | 85.2 | 82.5 | 82.9 | 90.94 | 83.8 |
| CNN | 99.2 | 100 | 100 | 99.97 | 99.6 |
| [ | 96.1 | 99.7 | 96.9 | — | 93.2 |
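Of the classical baselines in the table above, k-nearest neighbor is the simplest to sketch: a test image's feature vector is assigned the majority label among its k closest training vectors. A minimal version over hypothetical 2-D GLCM-style features (illustrative labels and values, not the paper's data):

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points under Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical (feature_vector, label) pairs, e.g. (contrast, energy).
train = [((0.30, 0.20), "covid"), ((0.35, 0.18), "covid"),
         ((0.80, 0.05), "other"), ((0.85, 0.07), "other")]
print(knn_predict(train, (0.33, 0.19)))  # prints "covid"
```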