Deepak Kumar Jain, Tarishi Singh, Praneet Saurabh, Dhananjay Bisen, Neeraj Sahu, Jayant Mishra, Habibur Rahman.
Abstract
The COVID-19 pandemic has caused a worldwide catastrophe and widespread devastation from which almost no country was spared. The pandemic mounted pressure on existing healthcare systems and caused panic and desperation. The gold testing standard for COVID-19 detection, reverse transcription-polymerase chain reaction (RT-PCR), has shown its limitations with 70% accuracy, contributing to incorrect diagnoses that compounded complications and increased fatalities. New variants pose further unseen challenges for diagnosis and subsequent treatment. The COVID-19 virus heavily impacts the lungs and fills the air sacs with fluid, causing pneumonia. Chest X-ray inspection is therefore a viable option: if the inspection detects COVID-19-induced pneumonia, COVID-19 exposure is confirmed. Artificial intelligence and machine learning techniques can examine chest X-rays to detect patterns that confirm the presence of COVID-19-induced pneumonia. This research used CNN and deep learning techniques to detect COVID-19-induced pneumonia from chest X-rays. Transfer learning with fine-tuning enables the proposed work to classify COVID-19-induced pneumonia, regular pneumonia, and normal conditions. Xception, Visual Geometry Group 16 (VGG16), and Visual Geometry Group 19 (VGG19) are used to realize transfer learning. The experimental results were promising in terms of precision, recall, F1 score, specificity, false omission rate, false negative rate, false positive rate, and false discovery rate, with a COVID-19-induced pneumonia detection accuracy of 98%. Experimental results also revealed that the proposed work not only correctly identified COVID-19 exposure but also distinguished COVID-19-induced pneumonia from regular pneumonia, as the latter is a very common disease while COVID-19 is far more lethal.
These results mitigate the concern of overlap in the diagnosis of COVID-19-induced pneumonia and regular pneumonia. With further integration, the model could be employed as a potential standard for differentiating various lung-related infections, including COVID-19.
Year: 2022 PMID: 35936981 PMCID: PMC9351538 DOI: 10.1155/2022/7474304
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. COVID-19 heatmap via NY Times, dated 30 November 2021 [3].
Figure 2. COVID-19 death heatmap via NY Times, dated 30 November 2021 [3].
Figure 3. Machine learning types.
Figure 4. Depth-wise convolution.
Figure 5. Point-wise convolution.
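Figures 4 and 5 depict the depth-wise and point-wise convolutions from which Xception's separable convolutions are built. As a minimal NumPy sketch (the shapes and valid padding are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def depthwise_conv(x, k):
    """Depth-wise convolution: each channel gets its own kh x kw filter (valid padding)."""
    H, W, C = x.shape
    kh, kw, _ = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1, C))
    for c in range(C):
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j, c] = np.sum(x[i:i + kh, j:j + kw, c] * k[:, :, c])
    return out

def pointwise_conv(x, w):
    """Point-wise (1x1) convolution: mixes channels at each spatial position; w is (C_in, C_out)."""
    return x @ w

# Tiny example: 4x4 image with 3 channels, all ones.
x = np.ones((4, 4, 3))
k = np.ones((3, 3, 3))
dw = depthwise_conv(x, k)                 # shape (2, 2, 3), every value 9.0
pw = pointwise_conv(dw, np.ones((3, 2)))  # shape (2, 2, 2), every value 27.0
```

A depthwise separable convolution chains these two steps, which is why it needs far fewer multiplications than a full convolution over all channels at once.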
Current state of the art.
| Author | Year | Data sets/samples | Sample collection location/source | Network type/technique used | Objective of study | Type of study/outcome of study |
|---|---|---|---|---|---|---|
| Mahmud et al. [ ] | 2020 | Total of 5,856 images: 1,583 normal, 1,493 non-COVID viral, and 2,780 bacterial pneumonia X-rays | Guangzhou Medical Center, China, and Sylhet Medical College, Bangladesh | CNN | To utilize COVID-19 chest X-rays for efficiently extracting diversified features from varying dilation rates | Detection accuracy: 97.4% for COVID-19/normal, 96.9% for COVID-19/viral pneumonia, 94.7% for COVID-19/bacterial pneumonia |
| Gu et al. [ ] | 2018 | JSRT (241 images) and Montgomery County (MC, 138 images) | Guangzhou Women and Children's Medical Center, China | FCN and DCNN | Deep learning in chest radiography for diagnosis of bacterial and viral childhood pneumonia | Experiments revealed that DCNN with transfer learning extracted features with greater accuracy (0.8048 ± 0.0202) and sensitivity (0.7755 ± 0.0296) |
| Li et al. [ ] | 2020 | 4,536 volumetric (3D) chest CT examinations from 3,506 individuals at six medical centers (August 2016 to February 2020) | Six different hospitals in China | COVID-19 detection neural network (COVNet) | Distinguishing COVID-19 from community-acquired pneumonia on chest CT using AI | Areas under the receiver operating characteristic curve were 0.96 for COVID-19 and 0.95 for community-acquired pneumonia, respectively |
| Rajpurkar et al. [ ] | 2018 | 420 images from 14 different pathologies | Bethesda, Maryland, United States | CheXNeXt algorithm | Chest radiograph diagnosis using a deep learning method | The 420 radiographs were labeled by radiologists in an average of 240 minutes and by the algorithm in 1.5 minutes |
| Chowdhury et al. [ ] | 2020 | 423 COVID-19 images, plus 1,485 viral pneumonia and 1,579 normal chest X-rays | Italian Society of Medical and Interventional Radiology, Italy | CNN | Screening of COVID-19 and pneumonia detection using AI | The networks were trained to distinguish between two types of pneumonia; for the two methods, classification accuracy, precision, sensitivity, and specificity were 99.7%, 99.7%, 99.7%, and 99.55% and 97.9%, 97.95%, 97.9%, and 98.8%, respectively |
| Liang and Zheng [ ] | 2020 | Total of 5,856 chest X-ray images (training: 5,232; testing: 624) | Guangzhou Women and Children's Medical Center, China | CNN | Pediatric pneumonia diagnosis using transfer learning with a deep residual network | On a children's pneumonia classification test, the method's recall is 96.7% and its F1 score is 92.7% |
| Ho and Gwak [ ] | 2019 | Total of 112,120 X-ray images (70% training, 10% validation, 20% testing) | ILSVRC2014 dataset | CNN/DenseNet-121 model | CNN-based classification of thoracic disease in chest radiography | Compared to existing reference baselines, the technique efficiently exploited interdependencies among target annotations to produce state-of-the-art classification results for 14 diseases |
| Roy et al. [ ] | 2020 | 58,924 frames from 277 lung ultrasound recordings of 35 individuals | Italian COVID-19 Lung Ultrasound Database (ICLUS-DB), Italy | CNN | Diagnosis of lung diseases during the COVID-19 pandemic using deep learning | A novel deep network based on spatial transformer networks that predicts illness severity with weakly supervised artefact localization |
Figure 6. Proposed work architecture.
Figure 7. Proposed workflow.
Figure 8. Chest X-ray data [30]: (a) COVID-19 positive, (b) normal healthy lung, and (c) regular pneumonia.
Distribution of the dataset.
| Classes | 80%–20% training | 80%–20% validation | 70%–30% training | 70%–30% validation | 60%–40% training | 60%–40% validation |
|---|---|---|---|---|---|---|
| COVID-19 | 1,024 | 256 | 896 | 384 | 768 | 512 |
| Normal | 1,184 | 296 | 1,036 | 444 | 888 | 592 |
| Pneumonia | 1,040 | 260 | 910 | 390 | 780 | 520 |
| Total | 3,248 | 812 | 2,842 | 1,218 | 2,436 | 1,624 |
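The per-class counts above follow from fixed class totals (1,280 COVID-19, 1,480 normal, and 1,300 pneumonia images, inferred here from the 80%–20% row sums) multiplied by each training fraction. A quick sketch to verify the tabulated splits:

```python
# Class totals inferred from the 80-20 split (training + validation).
totals = {"COVID-19": 1280, "Normal": 1480, "Pneumonia": 1300}

def split_counts(n, train_frac):
    """Return (training, validation) counts for a class of size n."""
    train = round(n * train_frac)
    return train, n - train

# e.g. COVID-19 at 70%-30%:
train, val = split_counts(totals["COVID-19"], 0.7)  # (896, 384)
```

Each tabulated training/validation pair for COVID-19, normal, and pneumonia is reproduced by this calculation at fractions 0.8, 0.7, and 0.6.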
Image preprocessing details.
| S. no. | Operation | Description |
|---|---|---|
| 1. | Rotation | 10° clockwise and anticlockwise |
| 2. | Width shifting | 0.1 fraction of the total width |
| 3. | Height shifting | 0.1 fraction of the total height |
| 4. | Zooming | Up to 0.2 fraction smaller or larger than the original image |
| 5. | Rescaling | Image channel values multiplied by 1/255 to normalize the input |
| 6. | Shearing | Shear angle of 0.2 degrees, clockwise and anticlockwise |
| 7. | Brightness | Shift value in the range 0.25–1.0 |
Training properties.
| S. no. | Training parameter | CNN | Xception | VGG19 | VGG16 |
|---|---|---|---|---|---|
| 1. | Learning rate | 0.001 | 0.001 | 0.001 | 0.001 |
| 2. | Batch size | 64 | 64 | 64 | 64 |
| 3. | Brightness range | [0.25, 1] | [0.25, 1] | [0.25, 1] | [0.25, 1] |
| 4. | Height and width shift range | 0.1 | 0.1 | 0.1 | 0.1 |
| 5. | Rotation range | 10 | 10 | 10 | 10 |
| 6. | Shear range | 0.2 | 0.2 | 0.2 | 0.2 |
| 7. | Zoom range | [0.8, 1.2] | [0.8, 1.2] | [0.8, 1.2] | [0.8, 1.2] |
| 8. | Input shape | [224, 224, 3] | [224, 224, 3] | [224, 224, 3] | [224, 224, 3] |
| 9. | Loss function | Categorical cross-entropy | Categorical cross-entropy | Categorical cross-entropy | Categorical cross-entropy |
| 10. | Rescale | 1/255 | 1/255 | 1/255 | 1/255 |
| 11. | Epochs | 15 | 15 | 15 | 15 |
| 12. | Training set | 80%, 70%, and 60% | 80%, 70%, and 60% | 80%, 70%, and 60% | 80%, 70%, and 60% |
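The augmentation settings in the tables above map naturally onto the keyword arguments of an image-augmentation pipeline. The parameter names below follow Keras's `ImageDataGenerator` API, which the tables' vocabulary suggests but the paper does not explicitly name, and the 1/255 rescale assumes the standard [0, 1] channel normalization:

```python
# Augmentation settings from the tables above, expressed as
# Keras-style ImageDataGenerator keyword arguments (assumed API mapping).
augmentation = {
    "rotation_range": 10,              # degrees, either direction
    "width_shift_range": 0.1,          # fraction of total width
    "height_shift_range": 0.1,         # fraction of total height
    "shear_range": 0.2,                # shear angle
    "zoom_range": [0.8, 1.2],          # up to 20% in or out
    "brightness_range": [0.25, 1.0],
    "rescale": 1 / 255,                # normalize channel values to [0, 1]
}

# With TensorFlow installed, this would be consumed as:
# from tensorflow.keras.preprocessing.image import ImageDataGenerator
# datagen = ImageDataGenerator(**augmentation)
```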
Figure 9. (a) Heatmap of COVID-19-induced pneumonia and (b) heatmap of regular pneumonia.
Figure 10. CNN layer-wise architecture.
Training and validation accuracy of proposed networks.
| Network | Training–test ratio (%) | Training accuracy | Validation accuracy |
|---|---|---|---|
| CNN | 80–20 | 0.89 | 0.93 |
| CNN | 70–30 | 0.90 | 0.94 |
| CNN | 60–40 | 0.89 | 0.90 |
| Xception | 80–20 | 0.86 | 0.94 |
| Xception | 70–30 | 0.86 | 0.93 |
| Xception | 60–40 | 0.82 | 0.86 |
| VGG16 | 80–20 | 0.78 | 0.94 |
| VGG16 | 70–30 | 0.78 | 0.93 |
| VGG16 | 60–40 | 0.76 | 0.88 |
| VGG19 | 80–20 | 0.91 | 0.93 |
| VGG19 | 70–30 | 0.92 | 0.88 |
| VGG19 | 60–40 | 0.91 | 0.89 |
Figure 11. (a) Training accuracy, (b) validation accuracy, (c) training loss, and (d) validation loss of the proposed networks on various training and testing ratios.
Figure 12. Model-wise comparison of accuracies.
Figure 13. Xception layer-wise processing through layers 0, 3, 6, and 9.
Figure 14. Confusion matrices for (a) CNN, (b) Xception, (c) VGG16, and (d) VGG19.
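Per-class FN/FP/TN/TP counts such as those tabulated below are read off a multiclass confusion matrix in one-vs-rest fashion: the diagonal entry is TP, the rest of the row is FN, the rest of the column is FP, and everything else is TN. A minimal sketch using an illustrative 3×3 matrix (not the paper's actual counts):

```python
def one_vs_rest_counts(cm, cls):
    """FN, FP, TN, TP for class index `cls` of a square confusion matrix cm[true][pred]."""
    total = sum(sum(row) for row in cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls]) - tp                 # true `cls`, predicted as something else
    fp = sum(row[cls] for row in cm) - tp  # predicted `cls`, actually something else
    tn = total - tp - fn - fp
    return fn, fp, tn, tp

# Illustrative 3-class matrix (rows: true COVID-19, normal, pneumonia).
cm = [[5, 1, 0],
      [2, 7, 1],
      [0, 0, 9]]
fn, fp, tn, tp = one_vs_rest_counts(cm, 0)  # (1, 2, 17, 5)
```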
Total number of FN, FP, TN, and TP for each of the three classes.
| Model name | Training–test ratio | COVID-19 FN | COVID-19 FP | COVID-19 TN | COVID-19 TP | Normal FN | Normal FP | Normal TN | Normal TP | Pneumonia FN | Pneumonia FP | Pneumonia TN | Pneumonia TP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CNN | 80–20% | 13 | 2 | 554 | 243 | 1 | 30 | 486 | 295 | 22 | 4 | 548 | 238 |
| CNN | 70–30% | 29 | 2 | 832 | 355 | 37 | 14 | 760 | 407 | 8 | 58 | 770 | 382 |
| CNN | 60–40% | 30 | 13 | 1099 | 482 | 15 | 58 | 974 | 577 | 45 | 19 | 1085 | 475 |
| VGG16 | 80–20% | 13 | 1 | 555 | 243 | 0 | 47 | 469 | 296 | 40 | 5 | 547 | 220 |
| VGG16 | 70–30% | 23 | 6 | 828 | 361 | 4 | 60 | 714 | 440 | 49 | 10 | 818 | 341 |
| VGG16 | 60–40% | 46 | 10 | 1102 | 466 | 0 | 145 | 887 | 592 | 113 | 4 | 1100 | 407 |
| VGG19 | 80–20% | 12 | 1 | 555 | 244 | 0 | 49 | 467 | 296 | 40 | 2 | 550 | 220 |
| VGG19 | 70–30% | 24 | 5 | 829 | 360 | 9 | 70 | 704 | 435 | 62 | 20 | 808 | 328 |
| VGG19 | 60–40% | 38 | 5 | 1107 | 474 | 9 | 86 | 946 | 583 | 74 | 30 | 1074 | 446 |
| Xception | 80–20% | 12 | 2 | 554 | 244 | 1 | 25 | 491 | 295 | 23 | 9 | 543 | 237 |
| Xception | 70–30% | 27 | 3 | 831 | 357 | 7 | 51 | 723 | 437 | 49 | 29 | 799 | 341 |
| Xception | 60–40% | 26 | 8 | 1104 | 486 | 0 | 89 | 943 | 592 | 83 | 12 | 1092 | 437 |
Precision, recall, F1 score, and support of all three classes.
| Class | Metric | CNN 80–20% | CNN 70–30% | CNN 60–40% | VGG16 80–20% | VGG16 70–30% | VGG16 60–40% | VGG19 80–20% | VGG19 70–30% | VGG19 60–40% | Xception 80–20% | Xception 70–30% | Xception 60–40% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| COVID-19 | Accuracy | 0.98 | 0.97 | 0.97 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| COVID-19 | F1 score | 0.97 | 0.96 | 0.96 | 0.97 | 0.96 | 0.94 | 0.97 | 0.96 | 0.96 | 0.97 | 0.96 | 0.97 |
| COVID-19 | FDR | 0.01 | 0.01 | 0.03 | 0.00 | 0.02 | 0.02 | 0.00 | 0.01 | 0.01 | 0.01 | 0.01 | 0.02 |
| COVID-19 | FNR | 0.87 | 0.94 | 0.70 | 0.93 | 0.79 | 0.82 | 0.92 | 0.83 | 0.88 | 0.86 | 0.90 | 0.76 |
| COVID-19 | FOR | 0.02 | 0.03 | 0.03 | 0.02 | 0.03 | 0.04 | 0.02 | 0.03 | 0.03 | 0.02 | 0.03 | 0.02 |
| COVID-19 | FPR | 0.00 | 0.00 | 0.01 | 0.00 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 | 0.00 | 0.00 | 0.01 |
| COVID-19 | Precision | 0.99 | 0.99 | 0.97 | 1.00 | 0.98 | 0.98 | 1.00 | 0.99 | 0.99 | 0.99 | 0.99 | 0.98 |
| COVID-19 | Recall | 0.95 | 0.92 | 0.94 | 0.95 | 0.94 | 0.91 | 0.95 | 0.94 | 0.93 | 0.95 | 0.93 | 0.95 |
| COVID-19 | Sensitivity | 0.95 | 0.92 | 0.94 | 0.95 | 0.94 | 0.91 | 0.95 | 0.94 | 0.93 | 0.95 | 0.93 | 0.95 |
| COVID-19 | Specificity | 0.44 | 0.43 | 0.43 | 0.44 | 0.43 | 0.42 | 0.44 | 0.43 | 0.43 | 0.44 | 0.43 | 0.44 |
| Normal | Accuracy | 0.96 | 0.96 | 0.96 | 0.94 | 0.95 | 0.91 | 0.94 | 0.94 | 0.94 | 0.97 | 0.95 | 0.95 |
| Normal | F1 score | 0.95 | 0.94 | 0.94 | 0.93 | 0.93 | 0.89 | 0.92 | 0.92 | 0.92 | 0.96 | 0.94 | 0.93 |
| Normal | FDR | 0.09 | 0.03 | 0.09 | 0.14 | 0.12 | 0.20 | 0.14 | 0.14 | 0.13 | 0.08 | 0.10 | 0.13 |
| Normal | FNR | 0.03 | 0.73 | 0.21 | 0.00 | 0.06 | 0.00 | 0.00 | 0.11 | 0.09 | 0.04 | 0.12 | 0.00 |
| Normal | FOR | 0.00 | 0.05 | 0.02 | 0.00 | 0.01 | 0.00 | 0.00 | 0.01 | 0.01 | 0.00 | 0.01 | 0.00 |
| Normal | FPR | 0.06 | 0.02 | 0.06 | 0.09 | 0.08 | 0.14 | 0.09 | 0.09 | 0.08 | 0.05 | 0.07 | 0.09 |
| Normal | Precision | 0.91 | 0.97 | 0.91 | 0.86 | 0.88 | 0.80 | 0.86 | 0.86 | 0.87 | 0.92 | 0.90 | 0.87 |
| Normal | Recall | 1.00 | 0.92 | 0.97 | 1.00 | 0.99 | 1.00 | 1.00 | 0.98 | 0.98 | 1.00 | 0.98 | 1.00 |
| Normal | Sensitivity | 1.00 | 0.92 | 0.97 | 1.00 | 0.99 | 1.00 | 1.00 | 0.98 | 0.98 | 1.00 | 0.98 | 1.00 |
| Normal | Specificity | 0.57 | 0.53 | 0.56 | 0.57 | 0.57 | 0.57 | 0.57 | 0.56 | 0.56 | 0.57 | 0.56 | 0.57 |
| Pneumonia | Accuracy | 0.97 | 0.95 | 0.96 | 0.94 | 0.95 | 0.93 | 0.95 | 0.93 | 0.94 | 0.96 | 0.94 | 0.94 |
| Pneumonia | F1 score | 0.95 | 0.92 | 0.94 | 0.91 | 0.92 | 0.87 | 0.91 | 0.89 | 0.90 | 0.94 | 0.90 | 0.90 |
| Pneumonia | FDR | 0.02 | 0.13 | 0.04 | 0.02 | 0.03 | 0.01 | 0.01 | 0.06 | 0.06 | 0.04 | 0.08 | 0.03 |
| Pneumonia | FNR | 0.85 | 0.12 | 0.70 | 0.89 | 0.83 | 0.97 | 0.95 | 0.76 | 0.71 | 0.72 | 0.63 | 0.87 |
| Pneumonia | FOR | 0.04 | 0.01 | 0.04 | 0.07 | 0.06 | 0.09 | 0.07 | 0.07 | 0.06 | 0.04 | 0.06 | 0.07 |
| Pneumonia | FPR | 0.01 | 0.07 | 0.02 | 0.01 | 0.01 | 0.00 | 0.00 | 0.02 | 0.03 | 0.02 | 0.04 | 0.01 |
| Pneumonia | Precision | 0.98 | 0.87 | 0.96 | 0.98 | 0.97 | 0.99 | 0.99 | 0.94 | 0.94 | 0.96 | 0.92 | 0.97 |
| Pneumonia | Recall | 0.92 | 0.98 | 0.91 | 0.85 | 0.87 | 0.78 | 0.85 | 0.84 | 0.86 | 0.91 | 0.87 | 0.84 |
| Pneumonia | Sensitivity | 0.92 | 0.98 | 0.91 | 0.85 | 0.87 | 0.78 | 0.85 | 0.84 | 0.86 | 0.91 | 0.87 | 0.84 |
| Pneumonia | Specificity | 0.43 | 0.46 | 0.43 | 0.40 | 0.41 | 0.37 | 0.40 | 0.40 | 0.40 | 0.43 | 0.41 | 0.40 |
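The per-class figures above follow from the FN/FP/TN/TP counts via the standard definitions. Below is a worked example using the CNN 80–20% COVID-19 entry (FN = 13, FP = 2, TN = 554, TP = 243); note that under these standard definitions FNR equals 1 − recall and specificity equals TN/(TN + FP), which does not reproduce every tabulated value (the FNR and specificity columns in particular), so those columns appear to be computed differently in the source:

```python
fn, fp, tn, tp = 13, 2, 554, 243  # CNN, 80-20% split, COVID-19 class

precision   = tp / (tp + fp)                                # positive predictive value
recall      = tp / (tp + fn)                                # a.k.a. sensitivity
f1          = 2 * precision * recall / (precision + recall)
fdr         = fp / (fp + tp)                                # false discovery rate
fpr         = fp / (fp + tn)                                # false positive rate
for_rate    = fn / (fn + tn)                                # false omission rate
fnr         = fn / (fn + tp)                                # false negative rate = 1 - recall
specificity = tn / (tn + fp)                                # true negative rate

print(round(precision, 2), round(recall, 2), round(f1, 2))  # 0.99 0.95 0.97
```

Rounded to two decimals, precision, recall, F1, FDR, FPR, and FOR all match the corresponding table entries for this cell.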
Correctly, incorrectly, and not detected among the three classes.
| Network | Training–test ratio | COVID-19 correctly detected | COVID-19 incorrectly detected | COVID-19 not detected | Regular pneumonia correctly detected | Regular pneumonia incorrectly detected | Regular pneumonia not detected | Normal correctly detected | Normal incorrectly detected | Normal not detected |
|---|---|---|---|---|---|---|---|---|---|---|
| CNN | 80–20% | 243 | 1 | 13 | 239 | 5 | 20 | 294 | 29 | 29 |
| CNN | 70–30% | 335 | 3 | 29 | 378 | 54 | 12 | 409 | 19 | 35 |
| CNN | 60–40% | 482 | 14 | 30 | 480 | 16 | 40 | 576 | 56 | 16 |
| Xception | 80–20% | 244 | 1 | 12 | 226 | 13 | 33 | 291 | 35 | 5 |
| Xception | 70–30% | 357 | 3 | 27 | 343 | 33 | 47 | 433 | 50 | 11 |
| Xception | 60–40% | 486 | 13 | 26 | 435 | 15 | 85 | 588 | 87 | 4 |
| VGG19 | 80–20% | 244 | 0 | 12 | 216 | 3 | 43 | 295 | 53 | 1 |
| VGG19 | 70–30% | 360 | 1 | 24 | 331 | 17 | 59 | 438 | 71 | 6 |
| VGG19 | 60–40% | 476 | 9 | 38 | 443 | 31 | 77 | 580 | 87 | 12 |
| VGG16 | 80–20% | 243 | 1 | 13 | 212 | 5 | 4 | 296 | 54 | 0 |
| VGG16 | 70–30% | 361 | 4 | 23 | 343 | 9 | 47 | 440 | 61 | 4 |
| VGG16 | 60–40% | 466 | 11 | 46 | 386 | 4 | 134 | 592 | 165 | 0 |