Ali Riahi, Omar Elharrouss, Somaya Al-Maadeed.
Abstract
The coronavirus outbreak continues to spread around the world and no one knows when it will stop. Therefore, from the first day of the identification of the virus in Wuhan, China, scientists have launched numerous research projects to understand the nature of the virus, how to detect it, and how to find the most effective medicine to help and protect patients. Importantly, a rapid diagnostic and detection system is a priority and should be developed to stop COVID-19 from spreading; medical imaging techniques have been used for this purpose. Current research focuses on exploiting different backbones, such as VGG, ResNet, and DenseNet, or on combining them, to detect COVID-19. These backbones alone, however, cannot capture aspects such as the spatial and contextual information in the images, although this information can be useful for more robust detection performance. In this paper, we used a 3D representation of the data as input for the proposed 3DCNN-based deep learning model. The process includes using the Bi-dimensional Empirical Mode Decomposition (BEMD) technique to decompose the original image into intrinsic mode functions (IMFs), and then building a video from these IMF images. The formed video is used as input for the 3DCNN model to classify and detect the COVID-19 virus. The 3DCNN model consists of a 3D VGG-16 backbone followed by a context-aware attention (CAA) module, and then fully connected layers for classification. Each CAA module takes the feature maps of different blocks of the backbone, which allows learning from different feature maps. In our experiments, we used 6484 X-ray images, of which 1802 were COVID-19 positive cases, 1910 normal cases, and 2772 pneumonia cases. The experimental results showed that our proposed technique achieved the desired results on the selected dataset, and that combining the 3DCNN model with the contextual information processing of the CAA modules achieved better performance.
Keywords: 3DCNN; BEMD; COVID-19; Context-aware attention
Year: 2021 PMID: 34998222 PMCID: PMC8717690 DOI: 10.1016/j.compbiomed.2021.105188
Source DB: PubMed Journal: Comput Biol Med ISSN: 0010-4825 Impact factor: 4.589
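The pipeline described in the abstract (BEMD decomposition → IMF "video" → 3DCNN input) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a real BEMD comes from a signal-processing library, so a trivial stand-in decomposition (repeated local averaging) is used here purely so the shape handling is concrete. All function names are hypothetical.

```python
import numpy as np

def toy_bemd(image, n_imfs=4):
    """Stand-in for Bi-dimensional Empirical Mode Decomposition (BEMD).

    A real BEMD sifts the image into intrinsic mode functions (IMFs);
    here we just split the image into detail-plus-residue components via
    repeated local averaging so the output shapes match the real thing.
    """
    imfs = []
    residue = image.astype(np.float64)
    for _ in range(n_imfs - 1):
        # crude "envelope mean": average of the four direct neighbours
        smooth = residue.copy()
        smooth[1:-1, 1:-1] = (residue[:-2, 1:-1] + residue[2:, 1:-1] +
                              residue[1:-1, :-2] + residue[1:-1, 2:]) / 4.0
        imfs.append(residue - smooth)   # detail component ("IMF")
        residue = smooth
    imfs.append(residue)                # final residue
    return np.stack(imfs)               # (n_imfs, H, W)

def imfs_to_video(image, n_imfs=4):
    """Arrange the IMFs of one X-ray as a short 'video' for a 3D CNN.

    Output layout: (frames, H, W, channels) -- one frame per IMF image.
    """
    imfs = toy_bemd(image, n_imfs)
    return imfs[..., np.newaxis]        # (n_imfs, H, W, 1)

xray = np.random.rand(224, 224)
video = imfs_to_video(xray)
print(video.shape)                      # (4, 224, 224, 1)
# The detail components plus the residue reconstruct the original image:
print(np.allclose(video[..., 0].sum(axis=0), xray))  # True
```

A batch of such videos (shape `(N, frames, H, W, 1)`) is what a 3D convolutional backbone such as the paper's 3D VGG-16 would consume in place of single 2D images.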
Summary of techniques and datasets used for COVID-19 detection from X-ray images.
| Method | COVID-19 images | Pneumonia images | Normal images | Techniques used |
|---|---|---|---|---|
| Islam et al. [ | 613 | 1525 | 1525 | CNN-LSTM |
| Horry et al. [ | 140 | 322 | 60 361 | VGG19 |
| Khan et al. [ | 284 | 327 | 310 | CoroNet (CNN) |
| Horry et al. [ | 100 | 100 | 200 | VGG16, VGG19, ResNet50, InceptionV3, Xception |
| Apostolopoulos et al. [ | 224 | 714 | 504 | VGG19, MobileNetV2, Inception, Xception, Inception-ResNetV2 |
| Minaee et al. [ | 71 | – | 5000 | ResNet18, ResNet50, SqueezeNet, DenseNet-121 |
| Moutounet-Cartan et al. [ | 125 | 50 | 152 | VGG16, VGG19, Inception-ResNetV2, InceptionV3, Xception |
| Hemdan et al. [ | 25 | – | 25 | VGG19, DenseNet121, InceptionV3, ResNetV2, InceptionResNet-V2, Xception, MobileNetV2 |
| Maguolo et al. [ | 144 | 339 | – | AlexNet |
| Chowdhury et al. [ | 423 | 1485 | 1579 | SqueezeNet, MobileNetV2, ResNet18, ResNet101, VGG19, DenseNet201 |
| Rahimzadeh et al. [ | 180 | 6054 | 8851 | Concatenated CNN |
| Loey et al. [ | 69 | 79 | 79 | GAN, AlexNet, GoogLeNet, ResNet18 |
| Rahimzadeh et al. [ | 180 | 4575 | 4575 | Xception, ResNet50V2, Concatenated CNN |
| Ucar et al. [ | 45 | 1591 | 1203 | Bayes SqueezeNet |
| Bukhari et al. [ | 89 | 96 | 93 | ResNet50 |
| Ozturk et al. [ | 127 | 500 | 500 | DarkNet |
| Punn et al. [ | 108 | 515 | 453 | ResNet, Inception-v3, Inception-ResNet-v2, DenseNet169, NASNetLarge |
| Narin et al. [ | 50 | – | 50 | ResNet50, InceptionV3, InceptionResNetV2 |
| Ozcan et al. [ | 131 | 148 | 200 | GoogLeNet, ResNet18, ResNet50 |
| Li et al. [ | 239 | 1000 | 1000 | DCSL |
| Mukherjee et al. [ | 130 | – | 130 | Shallow CNN |
| Luz et al. [ | 152 | 5421 | 7966 | MobileNet, ResNet50, VGG16, VGG19 |
| Farooq et al. [ | 68 | 931 | 1203 | ResNet50 |
| Khobahi et al. [ | 89 | 8521 | 7966 | TFEN, CIN |
Fig. 1. Flowchart of the proposed system.
Fig. 2. Some results of the BEMD algorithm.
Fig. 3. Context-aware attention (CAA) module.
Dataset used in our experiment.
| Data/Cases | COVID-19 | Normal | Pneumonia | Overall |
|---|---|---|---|---|
| Training | 1442 | 1528 | 2218 | 5188 |
| Testing | 180 | 191 | 277 | 648 |
| Validation | 180 | 191 | 277 | 648 |
| Overall | 1802 | 1910 | 2772 | 6484 |
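The split in the table corresponds to roughly an 80%/10%/10% partition per class, with the testing and validation sets of equal size. A quick check, with the values copied from the table:

```python
# Per-class image counts from the dataset table
totals   = {"COVID-19": 1802, "Normal": 1910, "Pneumonia": 2772}
train    = {"COVID-19": 1442, "Normal": 1528, "Pneumonia": 2218}
test_val = {"COVID-19": 180,  "Normal": 191,  "Pneumonia": 277}  # testing == validation size

for cls, n in totals.items():
    # training + testing + validation recovers each class total
    assert train[cls] + 2 * test_val[cls] == n
    print(cls, round(train[cls] / n, 2))  # training fraction, ~0.8 per class
```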
Fig. 4. Accuracy graph.
Fig. 5. Loss graph.
Fig. 6. Confusion matrix.
Fig. 7. Comparison of the sensitivity, specificity, and F1-score of the proposed method with state-of-the-art methods.
Performance of the BEMD-3DCNN network compared to the existing methods.
| Method | Data | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| CNN-LSTM [ | 4575 | 99.4% | 99.2% | 99.3% | 98.9% |
| CNN [ | 4575 | 99.7% | 99.7% | 99.7% | 99.55% |
COVID-19 detection techniques (comparison).
| Authors | Images | Classes | Partitioning | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|---|
| Islam et al. [ | 4575 | 3 | 80%–20% | 99.4 | 99.3 | 99.2 |
| Chowdhury et al. [ | 3487 | 3 | 80%–20% | 97.9 | 97.9 | 98.8 |
| Rahimzadeh et al. [ | 180 | 3 | five-fold cross-val | 99.5 | 80.5 | 99.5 |
| Ucar et al. [ | 2839 | 3 | 80%–10%–10% | 98.2 | 98.2 | 99.1 |
| An et al. [ | 278 | 3 | 80%–20% | 98.1 | 98.2 | 98.1 |
| Ozturk et al. [ | 1127 | 3 | five-fold cross-val | 98.0 | 95.1 | 95.3 |
| Punn et al. [ | 1076 | 3 | 80%–10%–10% | 98.0 | 91.0 | 91.0 |
| Narin et al. [ | 100 | 2 | five-fold cross-val | 98.0 | 96.0 | 100 |
| Ozcan et al. [ | 721 | 4 | 50%–30%–20% | 97.6 | 97.2 | 97.9 |
| Bukhari et al. [ | 2239 | 3 | five-fold cross-val | 97.0 | 97.0 | 97.0 |
| Mukherjee et al. [ | 260 | 2 | five-fold cross-val | 96.9 | 94.0 | 100 |
| Shankar et al. [ | 247 | 2 | five-fold cross-val | 94.8 | 98.3 | 98.8 |
| Yamaç et al. [ | 6200 | 4 | five-fold cross-val | – | 98.0 | 95.0 |
| Zhou et al. [ | 672 | 2 | 70%–30% | 93.6 | 88.0 | – |
| Tang et al. [ | 15 477 | 3 | 90%–10% | 95.0 | 96.0 | – |
| Narin et al. [ | 7406 | 2 | 80%–20% | 99.7 | 98.8 | 99.8 |
| Ahsan et al. [ | 5090 | 3 | 80%–20% | 99.4 | 93.6 | 95.7 |
| Kaoutar Ben et al. [ | 1332 | 3 | 65%–6%–29% | 98.1 | 96.2 | 98.7 |