Kadir Aktas, Vuk Ignjatovic, Dragan Ilic, Marina Marjanovic, Gholamreza Anbarjafari.
Abstract
One of the main challenges in the current pandemic is the detection of coronavirus. Conventional techniques (RT-PCR) have limitations such as long response times and limited accessibility. X-ray machines, on the other hand, are widely available and already digitized in health systems, so they can be used faster and more broadly. In this research, we therefore evaluate how well deep convolutional neural networks (CNNs) classify normal versus pathological chest X-rays. Compared to previous research, we trained our network on the largest number of images, 103,468 in total, spanning five classes: COPD signs, Covid, Normal, Others, and Pneumonia. We achieved a COVID accuracy of 97% and an overall accuracy of 81%. Additionally, we achieved a classification accuracy of 84% for categorization into normal (78%) and abnormal (88%).
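The setup described in the abstract is a standard transfer-learning pipeline: a pretrained CNN backbone with a new classification head for the five X-ray classes. The sketch below is an assumption, not the authors' exact configuration — the input size, head layout, and optimizer are not given in this record:

```python
# Hypothetical sketch: InceptionV3 backbone with a new 5-class head.
# Input size, head layout, and optimizer are assumptions, not from the paper.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # COPD signs, Covid, Normal, Others, Pneumonia

# Pretrained backbone without its ImageNet classification head
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained layers initially

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Any of the other backbones in the results table (VGG16, ResNet50, NasNetMobile) could be swapped in for `InceptionV3` with the same head.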
Keywords: Automatic diagnosis of COVID-19; Optimization of convolutional neural network; Swarm intelligence; X-ray
Year: 2022 PMID: 35873389 PMCID: PMC9296894 DOI: 10.1007/s11760-022-02309-w
Source DB: PubMed Journal: Signal Image Video Process ISSN: 1863-1703 Impact factor: 1.583
Number of images for each anomaly in the dataset
| Class | Number of samples |
|---|---|
| Normal | 62,115 |
| Pulmonary fibrosis | 760 |
| Heart insufficiency | 1722 |
| COPD signs | 23,280 |
| Pneumonia | 7747 |
| Tuberculosis sequelae | 399 |
| Emphysema | 734 |
| Pulmonary artery hypertension | 8 |
| Tuberculosis | 152 |
| Atypical pneumonia | 234 |
| Bone metastasis | 150 |
| Lung metastasis | 326 |
| Pulmonary oedema | 458 |
| Asbestosis signs | 69 |
| Pulmonary hypertension | 148 |
| Post radiotherapy changes | 138 |
| Respiratory distress | 35 |
| Lymphangitis carcinomatosa | 21 |
| Lepidic adenocarcinoma | 11 |
| Covid | 3616 |
| Viral pneumonia | 1345 |
Dataset split into 5 labels
| Class | Number of samples |
|---|---|
| Normal | 62,108 |
| COPD signs | 23,277 |
| Pneumonia | 9092 |
| Covid | 3616 |
| Others | 5239 |
Dataset split into 2 labels
| Class | Number of samples |
|---|---|
| Normal | 62,108 |
| Abnormal | 40,132 |
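The grouping behind the 5-label and 2-label tables can be sketched as a simple relabelling step. The merge rules below are assumptions inferred from the counts (e.g. Pneumonia 7747 + Viral pneumonia 1345 = 9092); the published totals also reflect some image filtering, so they need not equal raw column sums:

```python
# Hypothetical sketch of the label-grouping step: every anomaly that is
# not Normal, COPD signs, Pneumonia, or Covid collapses into "Others",
# and viral pneumonia is merged into Pneumonia.
FIVE_LABELS = {"Normal", "COPD signs", "Pneumonia", "Covid"}

def to_five(label: str) -> str:
    """Map a fine-grained anomaly label to one of the 5 training labels."""
    if label == "Viral pneumonia":
        return "Pneumonia"  # merged: 7747 + 1345 = 9092 in the 5-label table
    return label if label in FIVE_LABELS else "Others"

def to_two(label: str) -> str:
    """Map a 5-class label to the binary Normal/Abnormal task."""
    return "Normal" if label == "Normal" else "Abnormal"

print(to_five("Tuberculosis"))   # -> Others
print(to_two(to_five("Covid")))  # -> Abnormal
```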
Samples from the dataset
Fig. 1: InceptionV3 architecture [24]
Accuracy of each backbone on the 2-class and 5-class tasks
| Backbone | 2-class (%) | 5-class (%) |
|---|---|---|
| VGG16 | 83.84 | 79.14 |
| InceptionV3 | 84.57 | 81.03 |
| ResNet50 | 83.08 | 74.61 |
| NasNetMobile | 73.41 | 68.13 |
Class accuracies when InceptionV3 is used as backbone
| Task | Class | Accuracy |
|---|---|---|
| 2-class | Normal | 78.33% |
| 2-class | Abnormal | 88.60% |
| 5-class | COPD signs | 62.93% |
| 5-class | Covid | 97.03% |
| 5-class | Normal | 93.20% |
| 5-class | Pneumonia | 64.46% |
| 5-class | Others | 34.25% |
Comparison with the state-of-the-art methods
| Ref | Number of images | Database used | Backbone used | Covid-19 acc. |
|---|---|---|---|---|
| Proposed | 103,468 | PADCHEST, BIMCV-COVID19+, COVID-19 Radiography, Chest X-ray | InceptionV3 | 0.97 |
| [ ] | 2800 | Chest X-ray | OptCoNet | 0.98 |
| [ ] | 1125 | Chest X-ray | DarkCovidNet | 0.98 |
| [ ] | 196 | Chest X-ray, JSRT | VGG16 | 0.93 |
| [ ] | 1428 | Chest X-ray, Covid-19 X-ray, Pneumonia X-ray | MobileNetV2 | 0.98 |
| [ ] | 502 | Chest X-ray, CoronaHack, NLM, JSRT | DenseNet103 | 0.92 |
| [ ] | 2905 | COVID-19 Radiography | A novel CNN | 0.89 |
| [ ] | 6432 | Chest X-ray | ResNet-50 | 1.00 |
| [ ] | 2905 | COVID-19 Radiography | ResNet-50 | 1.00 |
Classification report for the 2 classes: Normal (0) and Abnormal (1)
| | Precision | Recall | f1-score | Support |
|---|---|---|---|---|
| Normal | 0.82 | 0.78 | 0.80 | 10,029 |
| Abnormal | 0.86 | 0.89 | 0.87 | 15,531 |
| Accuracy | | | 0.85 | 25,560 |
| Macro avg | 0.84 | 0.83 | 0.84 | 25,560 |
| Weighted avg | 0.84 | 0.85 | 0.85 | 25,560 |
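A useful sanity check on this table: overall accuracy is exactly the support-weighted mean of per-class recall, since each class contributes recall × support correct predictions. Recomputing it from the rounded table values:

```python
# Identity check: accuracy == support-weighted mean of per-class recall.
# Values are the (rounded) per-class figures from the 2-class report.
recalls  = {"Normal": 0.78, "Abnormal": 0.89}
supports = {"Normal": 10029, "Abnormal": 15531}

total = sum(supports.values())
acc = sum(recalls[c] * supports[c] for c in recalls) / total
print(f"{acc:.2f}")  # prints 0.85, matching the reported accuracy
```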
Classification report for the 5 classes
| | Precision | Recall | f1-score | Support |
|---|---|---|---|---|
| COPD Signs | 0.68 | 0.63 | 0.66 | 5830 |
| Covid | 0.94 | 0.97 | 0.96 | 843 |
| Normal | 0.84 | 0.93 | 0.89 | 15,629 |
| Others | 0.58 | 0.34 | 0.43 | 1308 |
| Pneumonia | 0.92 | 0.64 | 0.76 | 2223 |
| Accuracy | | | 0.81 | 25,833 |
| Macro avg | 0.78 | 0.70 | 0.74 | 25,833 |
| Weighted avg | 0.80 | 0.81 | 0.80 | 25,833 |
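The macro and weighted average rows follow directly from the per-class rows: macro averages are unweighted means over classes, while weighted averages weight each class by its support. Because the table values are rounded to two decimals, recomputed averages can differ from the printed ones by about 0.01:

```python
# Recomputing the macro and weighted averages of the 5-class report
# from its per-class rows (rounded to two decimals in the source).
per_class = {
    # class: (precision, recall, f1, support)
    "COPD Signs": (0.68, 0.63, 0.66, 5830),
    "Covid":      (0.94, 0.97, 0.96, 843),
    "Normal":     (0.84, 0.93, 0.89, 15629),
    "Others":     (0.58, 0.34, 0.43, 1308),
    "Pneumonia":  (0.92, 0.64, 0.76, 2223),
}

total = sum(v[3] for v in per_class.values())

def macro(idx):
    """Unweighted mean of one metric over all classes."""
    return sum(v[idx] for v in per_class.values()) / len(per_class)

def weighted(idx):
    """Support-weighted mean of one metric over all classes."""
    return sum(v[idx] * v[3] for v in per_class.values()) / total

print(f"{macro(1):.2f}")     # prints 0.70 (macro recall, matches the table)
print(f"{weighted(0):.2f}")  # prints 0.80 (weighted precision, matches the table)
```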