| Literature DB >> 33672585 |
Nur-A-Alam 1, Mominul Ahsan 2, Md Abdul Based 3, Julfikar Haider 2, Marcin Kowalski 4.
Abstract
Currently, COVID-19 is considered one of the most dangerous and deadly diseases affecting the human body, caused by the novel coronavirus. The virus, thought to have originated in Wuhan, China, in December 2019, spread rapidly around the world and has been responsible for a large number of deaths. Early detection of COVID-19 through accurate diagnosis, particularly in cases with no obvious symptoms, may decrease the death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. The features extracted from the X-ray images by the histogram-oriented gradient (HOG) method and a convolutional neural network (CNN) were fused to develop the classification model, trained with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and noise reduction in the images. A watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage considered generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion using the deep learning technique achieved satisfactory performance in identifying COVID-19 compared to closely related works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. When compared to other classification techniques, such as ANN, KNN, and SVM, the CNN technique used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods, such as HOG (87.34%) or CNN (93.64%).
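The MADF technique described above is a modification of the classic Perona-Malik anisotropic diffusion scheme, which smooths homogeneous regions while preserving edges. A minimal sketch of that underlying scheme is given below; the function name and parameter values are illustrative and do not reproduce the authors' implementation (border handling here is periodic via `np.roll`, which is adequate for a demonstration):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.1):
    """Classic Perona-Malik anisotropic diffusion (exponential conductance).

    img:   2-D float array (e.g., a grayscale X-ray)
    kappa: edge-stopping threshold; gradients >> kappa are preserved
    gamma: integration step (<= 0.25 for stability with 4 neighbours)
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours
        # (np.roll gives periodic borders; fine for a sketch).
        dN = np.roll(u, 1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u, 1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        # Conductance: near 1 in flat regions, near 0 across strong edges,
        # so noise is diffused away while edges are preserved.
        cN = np.exp(-(dN / kappa) ** 2)
        cS = np.exp(-(dS / kappa) ** 2)
        cW = np.exp(-(dW / kappa) ** 2)
        cE = np.exp(-(dE / kappa) ** 2)
        u += gamma * (cN * dN + cS * dS + cW * dW + cE * dE)
    return u
```

On a noisy but flat patch the filter reduces variance while leaving the mean essentially unchanged, which is the edge-preserving denoising behaviour the paper builds on.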
Keywords: COVID-19; X-ray image; convolutional neural network (CNN); deep learning; histogram-oriented gradient (HOG); watershed segmentation
Year: 2021 PMID: 33672585 PMCID: PMC8078171 DOI: 10.3390/s21041480
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Map of coronavirus-related deaths across the globe reported to WHO on 7 January 2021 (source: World Health Organization, WHO).
Figure 2. Overview of the proposed intelligent system architecture for identifying COVID-19 from chest X-ray images.
Figure 3. Comparison of the COVID-19 and normal X-ray images.
Figure 4. Proposed preprocessing stages: original image, region of interest (ROI) image, and 24-bit (RGB) to grayscale conversion.
Figure 5. Illustration of images after applying different anisotropic diffusion techniques.
Figure 6. Basic flow of the histogram-oriented gradient (HOG) feature extraction algorithm.
Figure 7. Illustration of image feature extraction by the VGG19 pre-trained convolutional neural network (CNN) model.
Figure 8. The proposed fusion steps for the features extracted by HOG and CNN.
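A common way to realise this kind of feature-level fusion is to scale-normalise each descriptor and concatenate them into a single vector, so that neither descriptor dominates by magnitude. The sketch below is a generic illustration of that idea, not the authors' code; the dimensions used in the usage note (3780 for HOG, 4096 for a VGG19 fully connected layer) are merely typical assumed values:

```python
import numpy as np

def fuse_features(hog_feat, cnn_feat, eps=1e-8):
    """Illustrative feature-level fusion: L2-normalise each descriptor
    so neither dominates by scale, then concatenate into one vector."""
    h = np.asarray(hog_feat, dtype=np.float64)
    c = np.asarray(cnn_feat, dtype=np.float64)
    h = h / (np.linalg.norm(h) + eps)   # unit-norm HOG descriptor
    c = c / (np.linalg.norm(c) + eps)   # unit-norm CNN descriptor
    return np.concatenate([h, c])       # fused feature vector
```

For example, fusing a 3780-dimensional HOG vector with a 4096-dimensional CNN vector yields a single 7876-dimensional vector that can then be fed to the classifier.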
Figure 9. COVID-19 segmentation using the watershed technique: (a) anisotropic diffusion applied for filtering; (b) adjustment of the filtered image; (c) watershed RGB image; (d) fracture lung region caused by the coronavirus (COVID-19).
Number of images in the normal and COVID-19 categories used in the training, validation, and testing phases without data augmentation.
| Data Sets | Normal | COVID-19 | Ratio of Normal to COVID-19 Images |
|---|---|---|---|
| Training | 2489 | 1584 | 1.57 |
| Validation | 70 | 70 | 1.00 |
| Testing | 622 | 395 | 1.57 |
Figure 10. Confusion matrix with overall measured performance parameters during training.
Figure 11. Performance comparison of modified anisotropic diffusion filtering (MADF) with other techniques in the literature.
Figure 12. Performance measurement of different feature extraction models.
Figure 13. Comparative results of individual and fusion features.
Figure 14. Comparative performance of different classifiers.
Figure 15. Learning curves: (a) accuracy vs. number of epochs; (b) loss vs. number of epochs.
Figure 16. Confusion matrix from generalization.
Overall classification accuracy measured using 5-fold cross-validation.
| Feature Extraction Methods | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Mean Accuracy |
|---|---|---|---|---|---|---|
| HOG | 0.8732 | 0.8789 | 0.8741 | 0.8675 | 0.8730 | 0.8734 |
| CNN | 0.9378 | 0.9367 | 0.9387 | 0.9367 | 0.9321 | 0.9364 |
| Proposed fusion (HOG+CNN) | 0.9856 | 0.9847 | 0.9813 | 0.9827 | 0.9833 | 0.9836 |
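The 5-fold protocol behind this table partitions the sample indices into five disjoint folds, using each fold once for validation and the remainder for training. A generic index-splitting sketch (an illustration of the standard procedure, not the authors' code):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train, val) index arrays for k-fold cross-validation.
    Each fold serves exactly once as the validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)      # shuffle once up front
    folds = np.array_split(idx, k)        # k near-equal disjoint folds
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val
```

Averaging the per-fold accuracies, as in the last column above, gives a less optimistic estimate than a single train/test split and helps reveal overfitting.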
Comparison among existing methods for COVID-19 detection.
| References | Dataset | Methods | Accuracy |
|---|---|---|---|
| Ahammed et al. | 2971 chest X-ray images (COVID-19 = 285, normal = 1341, pneumonia = 1345) | CNN | 94.03% |
| Chowdhury et al. | 2905 chest X-ray images (COVID-19 = 219, normal = 1341, pneumonia = 1345) | Parallel-dilated CNN | 96.58% |
| Abbas et al. | 196 CXR images (COVID-19 = 105, normal = 80, SARS = 11) | Deep CNN (DeTraC) | 93.1% |
| Azemin et al. | 5982 X-ray images (COVID-19 = 154 and normal = 5828) | ResNet-101 CNN | 71.9% |
| El-Rashidy et al. | 750 chest X-ray images (COVID-19 = 250 and normal = 500) | CNN/ConvNet | 97.95% |
| Khan et al. | 1057 X-ray images (COVID-19 = 195 and normal = 862) | VGG16 + VGG19 | 99.3% |
| Loey et al. | 307 X-ray images (COVID-19 = 69, normal = 79, Pneumonia_bac = 79, Pneumonia_vir = 79) | AlexNet + GoogLeNet + ResNet18 | 100% |
| Minaee et al. | 5184 chest X-ray images (COVID-19 = 184 and normal = 5000) | ResNet18 + ResNet50 + SqueezeNet + DenseNet-121 | 98% |
| Sekeroglu et al. | 6100 X-ray images (COVID-19 = 225, normal = 1583, pneumonia = 4292) | CNN | 98.50% |
| Wang et al. | 18,567 X-ray images (COVID-19 = 140, normal = 8851, pneumonia = 9576) | ResNet-101 + ResNet-152 | 96.1% |
| Panwar et al. | 284 images (COVID-19 = 142 and normal = 142) | Convolutional neural network (nCOVnet) | 88.1% |
| Ozturk et al. | 625 images (COVID-19 = 125 and normal = 500) | Convolutional neural network (DarkNet) | 98.08% |
| Khan et al. | 594 images (COVID-19 = 284 and normal = 310) | Convolutional neural network (CoroNet (Xception)) | 99% |
| Apostolopoulos and Mpesiana | 728 images (COVID-19 = 224 and normal = 504) | Transfer learning with convolutional neural networks (VGG19, MobileNet v2, Inception, Xception, InceptionResNet v2) | 96.78% |
| Mahmud et al. | 610 images (COVID-19 = 305 and normal = 305) | Transfer learning with convolutional neural networks (stacked multi-resolution CovXNet) | 97.4% |
| Benbrahim et al. | 320 images (COVID-19 = 160 and normal = 160) | Transfer learning with convolutional neural networks (Inception v3 and ResNet50) | 99.01% |
| Martínez et al. | 240 images (COVID-19 = 120 and normal = 120) | Convolutional neural network (Neural Architecture Search network (NASNet)) | 97% |
| Toraman et al. | 1281 images (COVID-19 = 231 and normal = 1050) | Convolutional neural network (CapsNet) | 97.24% |
| Duran-Lopez et al. | 6926 images (COVID-19 = 2589 and normal = 4337) | Convolutional neural network | 94.43% |
| Proposed Method | 5090 chest X-ray images (COVID-19 = 1979 and normal = 3111) | Fusion features (CNN + HOG) + VGG19 pre-trained model | 99.49% |