| Literature DB >> 34764563 |
Satyavratan Govindarajan, Ramakrishnan Swaminathan.
Abstract
In this study, an attempt has been made to differentiate Novel Coronavirus-2019 (COVID-19) conditions from healthy subjects in chest radiographs using a simplified end-to-end Convolutional Neural Network (CNN) model and occlusion sensitivity maps. Early detection and faster automated screening of COVID-19 patients are essential. For this, the images are considered from publicly available datasets. Significant biomarkers representing critical image features are extracted from the CNN by experimentally investigating cross-validation methods and hyperparameter settings. The performance of the network is evaluated using standard metrics. Perturbation-based occlusion sensitivity maps are employed on the features obtained from the classification model to visualise the localization of abnormal areas. Results demonstrate that the simplified CNN model with optimised parameters is able to extract significant features, with a sensitivity of 97.35% and F-measure of 96.71% for detecting COVID-19 images. The algorithm achieves an Area Under the Curve-Receiver Operating Characteristic (AUC-ROC) score of 99.4% with a Matthews correlation coefficient of 0.93. A high diagnostic odds ratio is also obtained. Occlusion sensitivity maps provide precise localization of abnormal regions when identifying COVID-19 conditions. As early detection through chest radiographic images is useful for automated screening of the disease, this method appears to be clinically relevant in providing a visual diagnostic solution using a simplified and efficient model. © Springer Science+Business Media, LLC, part of Springer Nature 2020, corrected publication 2021.
Keywords: COVID-19; Chest radiograph; Convolutional neural network; Occlusion sensitivity; Visualisation
Year: 2020 PMID: 34764563 PMCID: PMC7647189 DOI: 10.1007/s10489-020-01941-8
Source DB: PubMed Journal: Appl Intell (Dordr) ISSN: 0924-669X Impact factor: 5.086
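The abstract describes perturbation-based occlusion sensitivity mapping: a patch is slid over the input image and the drop in the classifier's score is recorded at each position. A minimal sketch of that idea follows; this is not the authors' implementation, and the patch size, stride, and fill value are assumptions made here for illustration.

```python
import numpy as np

def occlusion_sensitivity(image, score_fn, patch=16, stride=8, fill=0.0):
    """Perturbation-based occlusion map: slide an occluding patch over the
    image and record how much the class score drops at each location."""
    h, w = image.shape
    base = score_fn(image)
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # A large score drop means the occluded region mattered.
            heat[i, j] = base - score_fn(occluded)
    return heat
```

Regions whose occlusion causes the largest score drop are the ones the model relies on, which is how such maps localise abnormal lung areas in a radiograph.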
Dataset demographics
| Category | COVID-19 image collection (as of April 28, 2020) | Shenzhen set |
|---|---|---|
| Number of images present | 354 (CXR + CT) | 662 |
| Total Number of CXR images | 310 | 662 |
| Number of normal CXR images | – | 326 |
| Number of abnormal CXR images | 310 (No Finding - 3) | 336 |
| Findings | Viral and bacterial pneumonias such as ARDS, Chlamydophila, COVID-19, E. coli, Klebsiella, Legionella, Pneumocystis, Streptococcus, SARS | TB abnormalities |
| COVID-19 CXR images | 247 | – |
| CXR Image view | Frontal view (PA, AP, Lateral) | Frontal View (PA, AP) |
| Image type | RGB/Gray scale | RGB/Gray scale |
| Image format | PNG/JPEG/JPG | PNG |
| Dot Per Inch (DPI) | Variable | 72 |
| Bit depth | Variable | 8 |
| Image resolution | Variable | 3 K × 3 K approx. |
Fig. 1Representative CXR original images of (a and b) healthy subjects and (c and d) COVID-19 patients
Fig. 2Pipeline of the proposed methodology (BN – Batch normalisation, ReLu – Rectified linear unit)
Comparison of cross-validation methods on classification accuracy (in %) for different numbers of CNN layers
| Layers | K = 5 | K = 10 |
|---|---|---|
| 3 | 95.3 | 94.3 |
| 5 | 96.6 | 95.3 |
| 7 | 96.6 | 95.3 |
Fig. 3Effect of filter sizes on classification accuracy for different number of layers with (a) Learning rate = 0.001 and (b) Learning rate = 0.01
Custom CNN model selection based on maximum classification performance
| Parameters | Convolutional layers | Cross validation | Filter size | Learning rate | Mini-batch samples | Max. epochs |
|---|---|---|---|---|---|---|
| Optimal values | 5 | 5-fold | 3 × 3 | 0.001 | 64 | 30 |
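The selected model fixes five convolutional layers with 3 × 3 filters. As a side note on standard convolution arithmetic (stride and padding are assumptions here, since the record does not state them), the spatial size after each layer follows out = (in − kernel + 2·padding) / stride + 1:

```python
def feature_map_size(size, layers=5, kernel=3, stride=1, padding=0):
    """Spatial extent after a stack of conv layers, each applying
    out = (in - kernel + 2*padding) // stride + 1."""
    for _ in range(layers):
        size = (size - kernel + 2 * padding) // stride + 1
    return size
```

For example, a 224-pixel input shrinks to 214 after five unpadded 3 × 3 convolutions, while padding of 1 ("same" convolution) preserves the size.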
Performance measures obtained for healthy and COVID-19 images using CNN
| Performance measures | Healthy | COVID-19 |
|---|---|---|
| Sensitivity (%) | 96.00 | 97.35 |
| Specificity (%) | 97.35 | 96.00 |
| Precision (%) | 97.30 | 96.08 |
| F-measure (%) | 96.64 | 96.71 |
| Diagnostic odds ratio (overall) | 882 | |
| Matthews correlation coefficient (overall) | 0.93 | |
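All of the measures in this table derive from the binary confusion matrix. A quick sketch of those definitions (the TP/FP/TN/FN counts in the usage example are illustrative, not the paper's actual matrix):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics, matching those reported above."""
    sens = tp / (tp + fn)                      # sensitivity / recall
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    f1 = 2 * prec * sens / (prec + sens)       # F-measure
    dor = (tp * tn) / (fp * fn)                # diagnostic odds ratio
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"sensitivity": sens, "specificity": spec, "precision": prec,
            "f_measure": f1, "dor": dor, "mcc": mcc}
```

For example, `binary_metrics(90, 5, 95, 10)` gives sensitivity 0.90, specificity 0.95 and a diagnostic odds ratio of 171; note that swapping which class is "positive" exchanges sensitivity and specificity, as seen in the two columns above.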
Fig. 4Model performance shown as (a) Confusion matrix and (b) ROC analysis
Fig. 5Correctly classified images: Original images of healthy (a), COVID-19 subjects (d) and overlay of occlusion sensitivity maps (b and e). Extracted lung field maps (c and f)
Fig. 6Misclassified images: Original images of healthy (a), COVID-19 subjects (d) and their corresponding occlusion sensitivity maps (b and e). Extracted lung field maps (c and f)
Discussion and comparison with existing studies
| Author | Healthy images | COVID-19 images | Architecture used | Visualization method | Validation method | Sensitivity (%) | AUC-ROC (%) | Remarks/Limitations |
|---|---|---|---|---|---|---|---|---|
| Ozturk et al. [ | 1000 | 250 | DarkNet-17 | Grad-CAM | 5-fold CV | 90.65 | – | Imprecise localization of areas on the chest region |
| Brunese et al. [ | 3520 | 250 | VGG-16 | Grad-CAM | CV | 87 | – | Proposed to investigate whether formal verification techniques can help obtain better results |
| Mahmud et al. [ | 305 | 305 | Stacked multi-resolution CovXNet | Grad-CAM | 5-fold CV | 97.8 | 96.9 | Scattering of gradient-based localizations outside the region of interest |
| Rajaraman et al. [ | 1583 | 314 | Wide residual network and pretrained models | Grad-CAM | Random split | – | – | Very small collection of COVID-19 data for selecting augmented training images; imbalanced dataset; imprecise localization of COVID-19 areas on the chest region |
| Das et al. [ | D1: 1583, D2: 80 | 162, 162 | Truncated Inception Net | Activation map | 10-fold CV | 95 | 99 | Maximum values reported for an imbalanced dataset; poor localization of COVID-19 areas |