Saloni Laddha1, Sami Mnasri2,3, Mansoor Alghamdi2, Vijay Kumar1, Manjit Kaur4, Malek Alrashidi2, Abdullah Almuhaimeed5, Ali Alshehri2, Majed Abdullah Alrowaily2, Ibrahim Alkhazi6.
Abstract
In December 2019, the novel coronavirus disease 2019 (COVID-19) emerged. Being highly contagious and with no effective treatment available, the only solution was to detect and isolate infected patients to break the chain of infection. The shortage of test kits and other drawbacks of lab tests motivated researchers to build automated diagnosis systems using chest X-rays and CT scans. The works reviewed in this study couple AI with the radiological image processing of raw chest X-ray and CT images to train various CNN models, using transfer learning and numerous types of binary and multi-class classification. The models are trained and validated on several datasets, whose attributes are also discussed. The results of the various algorithms are then compared using performance metrics such as accuracy, F1 score, and AUC. The major challenges in this research domain are the limited availability of COVID-19 image data and achieving, with deep learning, severity predictions as accurate as well-established COVID-19 detection methods such as PCR tests. These automated detection systems using CXR technology are reliable enough to help radiologists in initial screening and in the immediate diagnosis of infected individuals, and they are preferred for their low cost, availability, and fast results.
Keywords: COVID-19; CT scanning; chest X-rays; deep learning; radiology; transfer learning
Year: 2022 PMID: 36010231 PMCID: PMC9406661 DOI: 10.3390/diagnostics12081880
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
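The reviewed models rely heavily on transfer learning: a pretrained backbone (e.g. VGG-19 or ResNet) is kept frozen and only a new classification head is trained on the limited CXR data. The sketch below illustrates that idea in miniature, with a fixed toy "backbone" and a trainable logistic head; all data, functions, and numbers here are illustrative inventions, not taken from any reviewed study.

```python
import math
import random

random.seed(0)

def backbone(x):
    # Frozen "feature extractor": a fixed nonlinear projection of a 2-D input.
    # In a real pipeline this would be a pretrained CNN with frozen weights.
    return [x[0] + x[1], math.tanh(x[0] - x[1])]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, labels, lr=0.5, epochs=200):
    # Only the small logistic head (w, b) is updated; backbone stays fixed.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = backbone(x)                  # backbone weights never change
            p = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
            g = p - y                        # gradient of log-loss w.r.t. logit
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = backbone(x)
    return 1 if sigmoid(w[0] * f[0] + w[1] * f[1] + b) >= 0.5 else 0

# Synthetic two-class data: class 1 has a larger coordinate sum.
data = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(100)]
labels = [1 if x[0] + x[1] > 1.0 else 0 for x in data]
w, b = train_head(data, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(data, labels)) / len(data)
```

The same division of labor, frozen generic features plus a small task-specific head, is what lets the surveyed works train on only a few hundred COVID-19 images.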
Figure 1. Ground-glass opacities in CXR findings.
Recent studies proposing learning models for classifying images to detect COVID-19.
| Study | Application | Learning Method | Model | Features | Dataset Accessibility | Dataset Type/Structure | Dataset Size | Results (Accuracy, Specificity, Sensitivity) |
|---|---|---|---|---|---|---|---|---|
| (Lawton and Viriri, 2021) | CT lung scans | Different transfer learning architectures: DenseNet-201, ResNet-101, VGG-19, EfficientNet-B4, and MobileNet-V2 | Fully connected artificial neural network | Peripheral and bilateral predominant ground-glass opacities | Publicly available | 2482 | Best performance is on VGG-19: accuracy = 95.75% | |
| (Horry et al., 2021) | X-ray-image-based COVID-19 detection | Convolutional neural network models | VGG, Inception, Xception, and Resnet | Patchy infiltration or | Publicly available | X-ray images | 200 × normal | Both VGG16 and VGG19 classifiers provided good results within the experimental constraints of the small number of X-ray images. |
| (Padma et al., 2020) | Illustrates the severity of coronavirus in the lungs using radiology images | Convolution 2D technique | Binary classes | Open-source datasets of COVID-19 available at GitHub and | Images of chest X-ray and CT scan | 60 images where 30 are | Accuracy of 99.2% on the training set, validation accuracy of 98.3%, loss of 0.3% | |
| (Karhan et al., 2020) | Radiological images of the chest | Convolutional neural networks | ResNet-18 (Residual Network) | Two different classes in the dataset (COVID-19 | Hybrid: Italian Society of Medical Radiology (SIRM) dataset, coronavirus open-source shared dataset, and dataset | COVID-19-positive and -negative CXR images | Accuracy rate of 99.5% | |
| (Hilmizen et al., 2020) | Diagnosing COVID-19 pneumonia from chest CT scan and X-ray images | Multimodal deep learning: concatenation of DenseNet121-MobileNet | The input data fed to the network were normalized, resized to 150 × 150 pixels, and the number of channels was set to 3 (RGB images). | Two open-source datasets, with balanced allocation for each class | Public | 1750 images per dataset in the training set and 750 images per dataset in the validation set | The concatenation of DenseNet121-MobileNet gives accuracy of 99.87%, sensitivity of 99.74%, |
| (Santoso and Purnomo, 2020) | COVID-19 | Deep neural network | Modification of deep neural network based on Xception model | 618 images with | The dataset is divided into training data | Xception accuracy of 90.09% | |
| (Darapaneni et al., 2020) | Analysis of severity of COVID-19 from chest X-ray images | Segmentation mask prediction | Mask RCNN | 4668 trained images and 1500 tested images | Mean average precision of 90 on the training set and 89 on the test set |
| (Kandhari et al., 2021) | Detecting COVID-19 | Deep learning models | ResNet, DenseNet, VGG 16 and VGG 19 | Open source | Dataset of 2727 images | VGG-16 model achieved an impressive classification accuracy of 98.9% and F1 score of |
| (Yamac et al., 2021) | COVID-19 recognition approach directly from X-ray images | Convolution support estimation network (CSEN) | CSEN that can be seen as a bridge between deep learning models and representation-based methods | Different publicly available datasets | A benchmark | Over 98% | ||
| (Calderon-Ramirez et al., 2021) | Scarce labeled data classification | CNN models | Semi-supervised deep learning with MixMatch | Accuracies higher than 90% |
| (Alam et al., 2021) | Earlier detection of COVID-19 through accurate diagnosis | CNN (VGGNet) | Feature fusion using histogram of oriented gradients (HOG) and convolutional neural network (CNN) | Normal (1900) images; confusion matrices of the generalization results are reported | Testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65% |
| (Gilanie et al., 2021) | Coronavirus (COVID-19) detection from chest radiology images | Convolutional neural networks | Three publicly available datasets and a locally developed dataset, obtained from the Department of Radiology (Diagnostics), Bahawal Victoria Hospital, Bahawalpur (BVHB), Pakistan | The proposed method achieved average accuracy of 96.68%, specificity of 95.65%, and sensitivity of 96.24% |
| (Amin et al., 2021) | Classification of COVID-19 X-ray images | Deep learning model | Transfer Learning InceptionV3 | Three different | Public CXR images (pneumonia) from Kaggle + COVID-19 images dataset from GitHub | 299 × 299 pixels | 194 images from the original pneumonia dataset + 163 images from the | 98% accuracy, precision and recall. F1 scores all are equal to 0.97 for pneumonia and normal images and are equal to 1 in COVID-19 images |
| (Panwar et al., 2020) | Fast detection of COVID-19 in X-rays | Deep learning | Transfer learning model with five different layers | VGG as a model for feature extraction | Public | All images converted to a standard size (224 × 224 pixels) | 142 COVID-19 images; | True-positive rate of 97.62%; |
| (Wang et al., 2020) | COVID-19 detection in chest CT images | Multiple-way data augmentation | Offline multiple-way data augmentation | Pre-trained models (PTMs) to learn features, and a novel (L, 2) transfer feature learning algorithm | Private dataset from local hospitals | 284 COVID-19 images, 281 community-acquired pneumonia | On the test set, CCSHNet achieved sensitivities of four classes of 95.61%, 96.25%, 98.30%, and 97.86%. The precision values of four classes were 97.32%, 96.42%, 96.99%, and 97.38%. The F1 scores of four classes were 96.46%, 96.33%, 97.64%, and 97.62%. The MA F1 score was 97.04% |
| (Demir et al., 2021) | Automatic detection of COVID-19 from X-ray images | Deep LSTM | Marker-controlled watershed segmentation (MCWS) | 20 convolutional layers | Public | Three classes: pneumonia, normal, and COVID-19 | 1061 CXR images (361 COVID-19, 200 normal, and 500 pneumonia) | In 80% of tests, 100% performance was achieved for all aspects (accuracy, sensitivity, precision, and F-score) |
| (Sheykhivand et al., 2021) | Automatic detection of COVID-19 from chest images | Deep neural network | Generative adversarial networks | Public | Healthy: 2923 images; COVID-19: 371; bacterial: 2778; viral: 2840 | 90% accuracy for all scenarios except one |
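Most rows in the table above report accuracy, specificity, and sensitivity. All of these screening metrics derive from a binary confusion matrix, as the following minimal sketch shows; the counts used are hypothetical and not taken from any study above.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a binary confusion matrix
    (tp/fp/tn/fn = true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall on COVID-19-positive cases
    specificity = tn / (tn + fp)      # recall on negative cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Hypothetical test split: 100 positives (93 detected) and 100 negatives
# (96 correctly rejected).
m = binary_metrics(tp=93, fp=4, tn=96, fn=7)
```

Sensitivity matters most for a screening tool (a missed COVID-19 case is costlier than a false alarm), which is why the reviewed works report it alongside overall accuracy.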
Figure 2. Machine learning techniques for CXR.
Figure 3. Block diagram of deep CNN architectures using CXR for COVID-19 detection.
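The architectures in the block diagram are all stacks of convolutional layers. The core operation such a layer applies to a CXR pixel grid is a valid-mode 2-D convolution (implemented, as in deep learning frameworks, as cross-correlation); the image and kernel below are purely illustrative.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution (cross-correlation, as CNN layers use it)
    over nested lists; no padding, stride 1."""
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            # Dot product of the kernel with the image patch at (i, j).
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responding to intensity changes, as an early CNN
# layer might learn over a CXR patch.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1]]
response = conv2d(img, edge)   # peaks where intensity jumps from 0 to 1
```

Stacking many such filters, with nonlinearities and pooling between them, is what lets the surveyed architectures (VGG, ResNet, DenseNet, Xception) turn raw pixels into opacity-sensitive features.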
Different COVID-19 CXR datasets used in the investigated works.
| No. | Ref. | Dataset Name | No. of Images | Resolution | Includes CT Images? | Observations |
|---|---|---|---|---|---|---|
| 1. | [ | Cohen Image Collection | 315 | 4248 × 3480 | No | Proposed image data are linked with clinical attributes |
| 2. | [ | COVIDx (COVID-19 CXR Dataset Initiative) | 48 | Various | - | - |
| 3. | [ | ActualMed COVID-19 CXR | 13,975 CXR images | Various | No | Proposed COVID-Net improves the decision making of clinicians |
| 4. | [ | COVID-19 Radiography Database | 2905 | - | No | Specificity: 98.8%; sensitivity: |
| 5. | [ | Japanese Society of Radiological Technology | 105 | 4020 × 4892 | No | ROC analysis gives Az values between 0.57 and 0.99 |
| 6. | [ | CXR-8 | 112,120 | 1024 × 1024 | No | High number of images gives better results in ML algorithms |
| 7. | [ | SIRM COVID-19 Database | 68 | Various | Yes | - |
| 8. | [ | Radiopaedia.org | - | Various | Yes | Open dataset with increasing number of images |
| 9. | [ | ChexPert Dataset | 224,316 | 320 × 320 | No | Expert comparisons and uncertainty labels are considered |
| 10. | [ | Twitter COVID-19 CXR Dataset | 135 | 2012 × 2012 | No | - |
| 11. | [ | Pediatric Pneumonia CXR | 5856 | Various | No | Eight codes were proposed to evaluate this dataset |
| 12. | [ | OCT and Kaggle CXR images | 5863 | Various | Yes | OCT data are divided into training and testing sets with different patients |
| 13. | [ | Open-I Repository | - | Various | Yes | Statistical analysis of data is given |
Performance of reviewed detection models.
| S.No. | Research | Accuracy (%) | F1 Score | AUC |
|---|---|---|---|---|
| 1. | Covid-AID [ | 90.5 | 0.9230 | 0.99 |
| 2. | Deep Covid [ | 83 | 0.83 | 0.90 |
| 3. | Covid Caps [ | 95.7 | - | 0.97 |
| 4. | COVID Net [ | 93.3 | - | - |
| 5. | DeTrac [ | 95.12 | - | - |
| 6. | CheXNet [ | 97.8 | 0.978 | - |
| 7. | COVID-DA [ | - | 0.9298 | 0.985 |
| 8. | CoroNet [ | 93.5 | 0.9351 | - |
| 9. | DenseNet-121 [ | 96.3 | 0.96 | 0.88 |
| 10. | ResNet-50 [ | 97.4 | 0.96 | 0.86 |
| 11. | Inception-V4 [ | 91.68 | 0.76 | 0.87 |
| 12. | Inception-ResNet-V2 | 89.45 | 0.84 | 0.86 |
| 13. | Xception [ | 81 | 0.80 | 0.88 |
| 14. | EfficientNet-B2 | 79 | 0.80 | 0.87 |
| 15. | ResNet-50 [ | 99.5 | - | - |
| 16. | ResNet50 and VGG16 [ | 99.87 | - | 0.83 |
| 17. | Transfer learning InceptionV3 [ | 99.49 | 0.85 | - |
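The AUC column above can be read as the probability that a model scores a randomly chosen COVID-19-positive image above a randomly chosen negative one. A minimal sketch computes it directly from raw classifier scores via the Mann-Whitney U formulation; the scores below are hypothetical, not taken from any reviewed model.

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative
    case (Mann-Whitney U statistic); ties count as one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores on a small validation set.
pos = [0.9, 0.8, 0.7, 0.4]
neg = [0.6, 0.3, 0.2]
score = auc(pos, neg)   # 11 of 12 positive/negative pairs ranked correctly
```

Unlike accuracy, AUC does not depend on a decision threshold, which makes it a fairer way to compare the models tabulated above when their operating points differ.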