| Literature DB >> 35186717 |
Khan Muhammad1, Hayat Ullah2, Zulfiqar Ahmad Khan2, Abdul Khader Jilani Saudagar3, Abdullah AlTameem3, Mohammed AlKhathami3, Muhammad Badruddin Khan3, Mozaherul Hoque Abul Hasanat3, Khalid Mahmood Malik4, Mohammad Hijji5, Muhammad Sajjad6,7.
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has caused a major outbreak around the world with severe impact on health, human lives, and economy globally. One of the crucial steps in fighting COVID-19 is the ability to detect infected patients at early stages and put them under special care. Detecting COVID-19 from radiography images using computational medical imaging method is one of the fastest ways to diagnose the patients. However, early detection with significant results is a major challenge, given the limited available medical imaging data and conflicting performance metrics. Therefore, this work aims to develop a novel deep learning-based computationally efficient medical imaging framework for effective modeling and early diagnosis of COVID-19 from chest x-ray and computed tomography images. The proposed work presents "WEENet" by exploiting efficient convolutional neural network to extract high-level features, followed by classification mechanisms for COVID-19 diagnosis in medical image data. The performance of our method is evaluated on three benchmark medical chest x-ray and computed tomography image datasets using eight evaluation metrics including a novel strategy of cross-corpse evaluation as well as robustness evaluation, and the results are surpassing state-of-the-art methods. The outcome of this work can assist the epidemiologists and healthcare authorities in analyzing the infected medical chest x-ray and computed tomography images, management of the COVID-19 pandemic, bridging the early diagnosis, and treatment gap for Internet of Medical Things environments.Entities:
Keywords: COVID-19 diagnosis; Internet of Medical Things; cancer categorization; deep learning; machine learning; medical imaging; x-ray imaging
Year: 2022 PMID: 35186717 PMCID: PMC8847175 DOI: 10.3389/fonc.2021.811355
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
Figure 1Overview of the proposed WEENet-assisted framework for COVID-19 diagnosis using chest x-ray images with the support of 5G technology and efficient management for IoMT environments.
The operational details of our proposed data augmentation strategy.
| No. | Technique | Parameter range |
|---|---|---|
| 1 | Rotation | −25~25 |
| 2 | Zoom | 0.10 |
| 3 | Width shift | 0.01 |
| 4 | Height shift | 0.01 |
| 5 | Shear | 0.1 |
| 6 | Fill mode | Nearest |
| 7 | Flip | Right and left |
| 8 | Brightness | 0.50, 1.50 |
Figure 2General overview of autoencoder architecture.
Figure 3Training and validation performance of our proposed WEENet over medical CXI image dataset.
Figure 5Training and validation performance of our proposed WEENet over medical CRD image dataset.
Number of samples per class in original and augmented datasets.
| Dataset | Original dataset | Augmented dataset | ||
|---|---|---|---|---|
| COVID-19 | Normal | COVID-19 | Normal | |
| CXI ( | 200 | 5,000 | 3,000 | 3,000 |
| XDC ( | 94 | 94 | 940 | 940 |
| CRD ( | 3,616 | 10,192 | 10,848 | 10,848 |
Detailed information of the collected SOTA including techniques and their other important remarks.
| Ref. | Year | Dataset | COVID | Non-COVID | Technique | Inclusion | Performance (%) |
|---|---|---|---|---|---|---|---|
| ( | 2020 | CXI | 200 | 5,000 | ResNet18, ResNet50, SqueezeNet, and DenseNet121 | ✔ | Sensitivity = 98 ( ± 3) |
| Specificity = 90 | |||||||
| ( | 2021 | SARS-COV-2 CT scan dataset | 1,252 | 1,230 | ADECO-CNN | ✘ | Accuracy = 98.99 |
| ( | 2021 | COVID-CS | 68,626 | 75,541 | Novel joint classification and segmentation along fine-grained pixel-level labels of opacifications | ✘ | Sensitivity = 95 |
| Specificity = 93 | |||||||
| ( | 2021 | CT scan dataset | 349 | 397 | Four models for feature extraction and machine learning classifiers for classification | ✘ | Accuracy = 87.9 |
| ( | 2021 | XDC | 94 | 94 | Centralized-VGG16 + data augmentation. | ✔ | Sensitivity = 95.1 |
| Specificity = 93.0 | |||||||
| Centralized-ResNet50 + data augmentation. |
| ||||||
| Specificity = 96.2 | |||||||
| ( | 2021 | CRD | 3,616 | 10,192 | U-Net | ✔ | Accuracy = 98.21 |
| Modified U-Net | Accuracy = 98.63 | ||||||
| Ours | 2021 | CXI | 200 | 5,000 | WEENet | ✔ | Sensitivity = 85.0 |
|
| |||||||
|
| |||||||
| XDC | 94 | 94 | ✔ |
| |||
| Specificity = 95.7 | |||||||
| Accuracy = 97.8 | |||||||
| CRD | 3,616 | 10,192 | ✔ | Sensitivity = 98.6 | |||
|
| |||||||
|
|
Performance comparison of several deep learning-based models over benchmark datasets.
| Dataset name | Model | Original dataset | Cross-corpse evaluation | |||||||
|---|---|---|---|---|---|---|---|---|---|---|
| CXI ( | TP↑ | FP↓ | FN↓ | TN↑ | Sensitivity↑ | Specificity↑ | Accuracy↑ | ROC↑ | Accuracy↑ | |
| MobileNet ( | 12 | 24 | 485 | 515 | 0.024 | 0.955 | 0.508 | 0.424 | 0.435 | |
| NASNet-Mobile ( |
|
| 254 | 746 | 0.092 |
| 0.745 | 0.734 | 0.714 | |
| VGG16 ( | 17 | 19 | 380 | 620 | 0.042 | 0.970 | 0.614 | 0.546 | 0.597 | |
| ResNet101 ( | 21 | 15 | 402 | 598 | 0.049 | 0.975 | 0.597 | 0.591 | 0.557 | |
| ResNet50 ( | 19 | 17 | 341 | 659 | 0.052 | 0.974 | 0.654 | 0.593 | 0.604 | |
| VGG19 ( | 16 | 20 | 258 | 742 | 0.058 | 0.973 | 0.731 | 0.593 | 0.710 | |
| EfficientNet ( | 24 | 12 |
|
|
| 0.985 |
|
|
| |
|
|
|
|
|
|
|
|
|
|
| |
| XDC ( | MobileNet ( | 11 | 9 | 8 | 12 | 0.578 | 0.571 | 0.585 | 0.575 | 0.553 |
| NASNet-Mobile ( | 14 | 6 |
|
| 0.777 | 0.727 | 0.750 | 0.750 | 0.782 | |
| VGG16 ( | 13 | 7 | 9 | 11 | 0.590 | 0.611 | 0.600 | 0.600 | 0.574 | |
| ResNet101 ( | 16 | 4 | 6 | 14 | 0.723 | 0.777 | 0.750 | 0.750 | 0.745 | |
| ResNet50 ( | 15 | 5 | 7 | 13 | 0.681 | 0.722 | 0.700 | 0.700 | 0.617 | |
| VGG19 ( | 12 | 8 | 6 | 14 | 0.666 | 0.636 | 0.650 | 0.650 | 0.592 | |
| EfficientNet ( |
|
|
|
|
|
|
|
|
| |
|
|
|
|
|
|
|
|
|
|
| |
| CRD ( | MobileNet ( | 231 | 132 | 466 | 554 | 0.331 | 0.807 | 0.567 | 0.590 | 0.517 |
| NASNet-Mobile ( |
|
| 274 | 746 | 0.530 | 0.930 | 0.762 | 0.791 | 0.698 | |
| VGG16 ( | 244 | 119 | 436 | 584 | 0.358 | 0.830 | 0.598 | 0.622 | 0.578 | |
| ResNet101 ( | 251 | 112 | 464 | 556 | 0.451 | 0.833 | 0.583 | 0.618 | 0.514 | |
| ResNet50 ( | 223 | 140 | 499 | 521 | 0.308 | 0.788 | 0.538 | 0.563 | 0.482 | |
| VGG19 ( | 187 | 176 | 238 | 782 | 0.440 | 0.816 | 0.700 | 0.641 | 0.697 | |
| EfficientNet ( |
|
|
|
|
|
|
|
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Performance comparison of several deep learning-based models over augmented datasets.
| Dataset name | Model | Original dataset | Cross-corpse evaluation | |||||||
|---|---|---|---|---|---|---|---|---|---|---|
| CXI ( | TP↑ | FP↓ | FN↓ | TN↑ | Sensitivity↑ | Specificity↑ | Accuracy↑ | ROC↑ | Accuracy↑ | |
| MobileNet ( | 219 | 149 | 134 | 266 | 0.620 | 0.641 | 0.631 | 0.630 | 0.583 | |
| NASNet-Mobile ( | 307 | 61 | 64 | 336 | 0.827 | 0.846 | 0.837 | 0.837 | 0.784 | |
| VGG16 ( | 280 | 88 | 126 |
| 0.689 | 0.815 | 0.757 | 0.758 | 0.693 | |
| ResNet101 ( | 249 | 119 | 101 | 299 | 0.711 | 0.715 | 0.713 | 0.712 | 0.647 | |
| ResNet50 ( | 307 | 61 | 98 | 302 | 0.758 | 0.832 | 0.793 | 0.795 | 0.691 | |
| VGG19 ( | 291 | 77 | 79 | 321 | 0.786 | 0.806 | 0.796 | 0.797 | 0.738 | |
| EfficientNet ( |
|
|
| 363 |
|
|
|
|
| |
|
|
|
|
|
|
|
|
|
|
| |
| XDC ( | MobileNet ( | 61 | 33 | 39 | 55 | 0.610 | 0.625 | 0.617 | 0.617 | 0.587 |
| NASNet-Mobile ( | 83 | 11 | 19 | 75 | 0.813 | 0.873 | 0.840 | 0.840 | 0.816 | |
| VGG16 ( | 71 | 23 | 30 | 64 | 0.703 | 0.735 | 0.718 | 0.718 | 0.687 | |
| ResNet101 ( | 77 | 17 | 20 | 74 | 0.793 | 0.813 | 0.803 | 0.803 | 0.774 | |
| ResNet50 ( | 72 | 22 | 19 | 75 | 0.791 | 0.773 | 0.781 | 0.782 | 0.698 | |
| VGG19 ( | 68 | 26 | 20 | 74 | 0.772 | 0.740 | 0.755 | 0.755 | 0.714 | |
| EfficientNet ( |
|
|
|
|
|
|
|
|
| |
|
|
|
|
|
|
|
|
|
|
| |
| CRD ( | MobileNet ( | 759 | 261 | 402 | 618 | 0.653 | 0.703 | 0.675 | 0.675 | 0.597 |
| NASNet-Mobile ( | 822 | 198 | 174 | 846 | 0.825 | 0.810 | 0.817 | 0.818 | 0.798 | |
| VGG16 ( | 784 | 236 | 433 | 587 | 0.644 | 0.713 | 0.672 | 0.672 | 0.586 | |
| ResNet101 ( | 833 | 187 | 116 | 904 | 0.877 | 0.828 | 0.851 | 0.851 |
| |
| ResNet50 ( | 751 | 269 | 478 | 542 | 0.611 | 0.668 | 0.633 | 0.634 | 0.595 | |
| VGG19 ( | 798 | 222 | 198 | 822 | 0.801 | 0.787 | 0.794 | 0.794 | 0.742 | |
| EfficientNet ( |
|
|
|
|
|
|
|
| 0.847 | |
|
|
|
|
|
|
|
|
|
|
| |
Figure 6Robustness analysis of our proposed method against other SOTA CNNs on randomly selected test images from each dataset and the corresponding predictions made by each method.
Performance comparison of the proposed WEENet with other baseline models.
| Dataset | Model | Original dataset | ||
|---|---|---|---|---|
| CXI ( | Sensitivity↑ | Specificity↑ | Accuracy↑ | |
| SqueezeNet ( |
| 0.920 | – | |
| WEENet | 0.850 |
|
| |
| XDC ( | ResNet50 ( | 0.981 |
| 0.970 |
| WEENet |
| 0.957 |
| |
| CRD ( | ChestNet ( | 0.962 | 0.972 | 0.962 |
| WEENet |
|
|
| |
Figure 7Visual results of WEENet over each dataset (A) CXI, (B) XDC, and (C) CRD datasets.
Figure 8Feasibility assessment of our proposed WEENet for 5G-enabled IoMT environment.
Figure 9The graphical overview of the reusability process of our proposed WEENet for lung cancer detection task.