| Literature DB >> 34194535 |
Woan Ching Serena Low1, Joon Huang Chuah1, Clarence Augustine T H Tee1, Shazia Anis2, Muhammad Ali Shoaib1, Amir Faisal3, Azira Khalil4, Khin Wee Lai2.
Abstract
Pneumonia is a life-threatening bacterial or viral lung infection. The latest viral infection endangering the lives of many people worldwide is the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which causes COVID-19. This paper is aimed at detecting and differentiating viral pneumonia and COVID-19 disease using digital X-ray images. Current practice involves tedious conventional processes that rely solely on the radiologist's or medical consultant's technical expertise, which is limited, time-consuming, and inefficient, and such processes are prone to human error and misdiagnosis. Advances in deep learning and related technology allow medical scientists and researchers to explore various neural networks and algorithms to develop applications, tools, and instruments that can further support medical radiologists. This paper presents an overview of deep learning techniques applied to chest radiography for COVID-19 and pneumonia cases.
Year: 2021 PMID: 34194535 PMCID: PMC8184329 DOI: 10.1155/2021/5528144
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Significant contributions of neural networks to deep learning [23, 24].
| Milestone/contribution | Year |
|---|---|
| McCulloch-Pitts neuron | 1943 |
| Perceptron | 1958 |
| Backpropagation | 1974 |
| Neocognitron | 1980 |
| Boltzmann machine | 1985 |
| Restricted Boltzmann machine | 1986 |
| Recurrent neural networks | 1986 |
| Autoencoders | 1987 |
| LeNet | 1990 |
| LSTM | 1997 |
| Deep belief networks | 2006 |
| Deep Boltzmann machine | 2009 |
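As an illustrative aside (not part of the surveyed paper), the earliest trainable model in the table above, the perceptron (1958), can be sketched in a few lines of Python; it learns any linearly separable function, such as logical AND:

```python
# Illustrative sketch of Rosenblatt's perceptron (1958), one milestone from
# the table above. Pure Python, two inputs, trained with the classic rule
# w <- w + lr * (target - prediction) * x.
def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND is linearly separable, so the perceptron convergence theorem
# guarantees the rule above finds a separating line.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

The later milestones in the table (backpropagation, LeNet, LSTM, deep belief networks) generalise this single thresholded unit to many trainable layers.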
Figure 1: Mind map of machine learning (ML) algorithms, created by Robert Herman on MindMeister [25].
Comparison of clinical and radiological features of COVID-19, SARS, and MERS [32].
| Feature | COVID-19 | SARS | MERS |
|---|---|---|---|
| Clinical sign or symptom | |||
| Fever or chills | Yes | Yes | Yes |
| Dyspnea | Yes | Yes | Yes |
| Malaise | Yes | Yes | Yes |
| Myalgia | Yes | Yes | Yes |
| Headache | Yes | Yes | Yes |
| Cough | Dry | Dry | Dry or productive |
| Diarrhoea | Uncommon | Yes | Yes |
| Nausea or vomiting | Uncommon | Yes | Yes |
| Sore throat | Uncommon | Yes | Yes |
| Arthralgia | Yes | Uncommon | |
| Imaging finding | |||
| Acute phase | |||
| Initial imaging | |||
| Normal | 15–20% of patients | 15–20% of patients | 17% of patients |
| Abnormalities | |||
| Common | Peripheral multifocal airspace opacities (GGO, consolidation, or both) on chest radiography and CT. | Peripheral multifocal airspace opacities (GGO, consolidation, or both) on chest radiography and CT. | Peripheral multifocal airspace opacities (GGO, consolidation, or both) on chest radiography and CT. |
| Rare | Pneumothorax | Pneumothorax | Pneumothorax |
| Not seen | Cavitation or lymphadenopathy | Cavitation or lymphadenopathy | Cavitation or lymphadenopathy |
| Appearance | Bilateral, multifocal, basal airspace; normal chest radiography findings (15%) | Bilateral, multifocal basal airspace on chest radiography or CT (80%); isolated unilateral (20%) | Unilateral, focal (50%); multifocal (40%); diffuse (10%) |
| Follow-up imaging appearance | Persistent or progressive airspace opacities | Unilateral, focal (25%); progressive (most common, can be unilateral and multifocal or bilateral with multifocal consolidation) | Extension into upper lobes or perihilar areas, pleural effusion (33%), interlobular septal thickening (26%) |
| Indications of poor prognosis | Consolidation (vs. GGO) | Bilateral (like ARDS), four or more lung zones, progressive involvement after 12 days | Greater involvement of the lungs, pleural effusion, pneumothorax |
| Chronic phase | Unknown, but pleural effusion and interlobar septal thickening have not yet been reported | ||
| Transient reticular opacities | Yes | Yes | |
| Air trapping | Common (usually persistent) | ||
| Fibrosis | More than one-third of patients | Rare | One-third of patients |
Note: SARS: severe acute respiratory syndrome; MERS: Middle East respiratory syndrome; COVID-19: coronavirus disease 2019; GGO: ground-glass opacity; ARDS: acute respiratory distress syndrome [32].
Summary of deep learning methods and CNN architectures for COVID-19 using radiology images (CT: computed tomography; CXR: chest X-ray).
| No. | Papers | Data | Types of images | AI methods to establish the algorithm | CNN architecture | Results for detecting COVID-19 |
|---|---|---|---|---|---|---|
| 1 | [ | Total images: 4,356, COVID-19 images: 1,296, pneumonia images: 1,735, nonpneumonia images: 1,325 | CT | 3D deep learning | ResNet-50 and COVNet | Area under the curve (AUC): 0.96 |
| 2 | [ | Total images: 618, COVID-19 images: 219, influenza-A (H1N1, H3N2, H5N1, H7N9, and others) images: 224, normal healthy lung images: 175 | CT | 3D CNN model for segmentation | Location-attention network and ResNet-18 | Accuracy: 86.7%, average time: 30 s |
| 3 | [ | Posterior-anterior (PA) images: 5,941, normal images: 1,583, bacterial pneumonia images: 2,786, non-COVID-19 viral pneumonia images: 1,804, COVID-19 images: 68 | CXR | Dropweights-based Bayesian CNN | Bayesian ResNet50V2 | Accuracy: 89.92% |
| 4 | [ | COVID-19 images: 453, training images: 217 | CT | Inception migration-learning model | | Internal validation: accuracy: 82.9%, specificity: 80.5%, sensitivity: 84%; external testing dataset: accuracy: 73.1%, specificity: 67%, sensitivity: 74% |
| 5 | [ | Total images: 1,065, COVID-19 images: 325, viral pneumonia images: 740 | CT | Modified inception transfer-learning model | | Accuracy: 79.30%, specificity: 0.83, sensitivity: 0.67 |
| 6 | [ | Total patients: 133, severe/critical patients: 54, nonsevere/critical patients: 79 | CT | Multilayer perceptron and long short-term memory (LSTM) | | Area under the curve (AUC): 0.954 |
| 7 | [ | Total images: 4,266, COVID-19 images: 2,529, CAP images: 1,338, influenza A/B images: 135, normal images: 258, total patients: 3,177, COVID-19 patients: 1,502, influenza A/B patients: 83, CAP patients: 1,334, healthy subjects: 258 | CT | 2D deep learning CNN | ResNet-152 | Accuracy: 94.98%, AUC: 97.71%, sensitivity: 90.19%, specificity: 95.76%, average reading time: 2.73 s |
| 8 | [ | Total: 1,136 cases from 5 hospitals, COVID-19 images: 723, non-COVID-19 images: 413 | CT | 3D deep learning method | UNet++ & ResNet-50 | Specificity: 0.922, sensitivity: 0.974 |
| 9 | [ | COVID-19 patients: 50, healthy subjects: 50 | CXR | 5 pretrained CNNs | ResNet-50, ResNet-101, ResNet-152, InceptionV3, and Inception-ResNetV2 | ResNet-50 accuracy: 98.0% |
| 10 | [ | Total images: 13,975, total patients: 13,870 | CXR | Deep learning CNN | COVID-Net | Accuracy: 92.4% |
| 11 | [ | Total patients: 157 | CT | CNN | ResNet-50 | Area under the curve (AUC): 0.996 |
| 12 | [ | Normal images: 1,341, viral pneumonia images: 1,345, COVID-19 images: 190 | CXR | CNN | AlexNet, ResNet-18, DenseNet-201, SqueezeNet | Accuracy: 98.3% |
| 13 | [ | Total COVID-19 images: 531, CXR images: 170, CT images: 361 | CT and CXR | CNN with transfer learning | Pretrained AlexNet | Accuracy: CXR images: 98.3%, CT image: 94.1% |
| 14 | [ | Total images: 5,232, normal images: 1,346, bacterial pneumonia images: 2,538, viral pneumonia images: 1,345 | CXR | Deep learning framework using transfer learning | Pretrained on ImageNet, trained using AlexNet, ResNet18, inception V3, DenseNet121, GoogLeNet, and ensemble model | Ensemble model: accuracy: 96.4%, recall: 99.62% (unseen data) |
| 15 | [ | Total images: 5,247, bacterial pneumonia images: 2,561, viral pneumonia images: 1,345, normal images: 1,341 | CXR | Pretrained deep CNN used for transfer learning | AlexNet, ResNet18, DenseNet201, and SqueezeNet | DenseNet201 accuracy: normal and pneumonia: 98%, normal, bacterial, and viral pneumonia: 93.3%, bacterial and viral pneumonia: 95% |
| 16 | [ | Total images: 306, COVID-19 images: 69, normal images: 79, bacterial pneumonia images: 79, viral pneumonia images: 79. The dataset increases to 8,100 images after using the GAN network. | CXR | Deep transfer learning: a GAN network generates additional images to help detect the virus; three deep transfer models. | AlexNet, GoogLeNet, ResNet18 with performance measures in different scenarios and classes | GoogLeNet accuracy: 80.56% |
| 17 | [ | Dataset collected from medRxiv and bioRxiv; COVID-19 images: 349, total patients: 216 | CT | Multitask and self-supervised learning | DenseNet-169, ResNet-50 | F1 score: 0.90, AUC: 0.98, accuracy: 0.89 |
| 18 | [ | Total images: 2,200, COVID-19 images: 800, viral pneumonia images: 600 | CT | Machine learning technique using Microsoft Azure | ResNet | High accuracy: 91%, overall accuracy: 87.6% |
| 19 | [ | Total images: 15,495, normal images: 12,544, COVID-19 images: 2,951 | CXR | CNN model | UNet, UNet++, DLA, DenseNet-121, CheXNet, Inception-v3, ResNet-50 | F1 score: 85.81%, sensitivity: 98.37%, specificity: 99.16% |
| 20 | [ | Diverse datasets from a different source | CT | Deep fully convolutional networks (FCN) | UNet, ResDense FCN | DSC: 0.780, sensitivity: 0.822, specificity: 0.951 |
| 21 | [ | Total images: 954, COVID-19 images: 308, normal images: 323, pneumonia images: 323 | CXR | Deep learning modules using stacked architecture concept | DenseNet; GoogleNet | Sensitivity: 0.91, specificity: 0.95, F1 score: 0.91, AUC: 0.97 |
| 22 | [ | Total images: 7,406, COVID-19 images: 341, normal images: 2,800, viral pneumonia images: 1,493, bacterial pneumonia images: 2,772 | CXR | 2D five pretrained CNN based models | ResNet50, ResNet101, ResNet152, InceptionV3, and inception-ResNetV2 | COVID-19 and normal: accuracy: 96.1%, COVID-19 and pneumonia accuracy: 99.5%, COVID-19 and bacterial accuracy: 99.7% |
| 23 | [ | Total images (COVID-19, pneumonia, and normal): 1,266, COVID-19 images: 924 | CT | 3D pretrained deep learning system with validation | DNN | Sensitivity (train): 78.93%, specificity (train): 89.93%, sensitivity (val): 80.39%, specificity (val): 81.16% |
| 24 | [ | Total images (COVID-19, bacterial, and normal): 275, COVID-19 images: 88 | CT | 2D pretrained ResNet 50 using the feature pyramid network (FPN) | DRE-net | Sensitivity: 93%, specificity: 96%, accuracy: 99% |
| 25 | [ | Total images: 624, COVID-19 images: 50 | CXR | 2D GAN + TL | AlexNet, GoogLeNet, ResNet18, SqueezeNet | Accuracy: 99% |
| 26 | [ | Total images (COVID-19, bacterial, and normal): 1,427, COVID-19 images: 224, bacterial and viral pneumonia images: 714 | CXR | 2D transfer learning (TL) | VGG19, MobileNet, Inception, Xception, Inception ResNet v2. | Sensitivity: 98.66%, specificity: 96.46%, accuracy: 94.72% |
| 27 | [ | Total images (COVID-19, pneumonia, normal): 6,008, COVID-19 images: 184 | CXR | 2D transfer learning (TL) | Three ResNet models | Accuracy: 93.9% |
| 28 | [ | Total images (COVID-19, pneumonia, and normal): 8,850, COVID-19 images: 498 | CXR | 2D convolutional autoencoder (CAE) | AE: COVIDomaly | Accuracy: 76.52% |
| 29 | [ | Total images (COVID-19, pneumonia, and normal): 2,905, COVID-19 images: 219 | CXR | 2D | CNN + k-NN + SVM | Accuracy: 98.70% |
| 30 | [ | Total images (COVID-19, pneumonia, and normal): 2,905, COVID-19 images: 219 | CXR | 2D using Bayesian hyperparameter optimisation algorithm | ANN + AlexNet | Sensitivity: 89.39%, specificity: 99.75%, accuracy: 98.97%, F-score: 96.72% |
| 31 | [ | Total images (COVID-19, pneumonia, and normal): 502, COVID-19 images: 180 | CXR | 2D patch-based convolutional neural network | ResNet-18 | Sensitivity: 76.90%, specificity: 100.00% |
| 32 | [ | Total images (COVID-19, pneumonia, and normal): 2,905, COVID-19 images: 219 | CXR | 2D | Ensemble: Resnet50 and VGG16 | Sensitivity: 91.24%, specificity: 99.82% |
| 33 | [ | Total images (COVID-19 and normal): 2,492, COVID-19 images: 1,262 | CT | 2D | TL and DenseNet201 | Accuracy: 99.82% |
| 34 | [ | COVID-19, pneumonia, and normal images from Cohen et al. [ | CXR | 2D | Xception | Sensitivity: 97.09%, specificity: 97.29%, accuracy: 97.40% |
| 35 | [ | Total images (COVID-19 and normal): 380, COVID-19: 180 | CXR | 2D | 5 pretrained models + SVM | Accuracy: 94.7% |
| 36 | [ | Total images (COVID-19, pneumonia, normal, and non-COVID-19): 2,905, COVID-19 images: 219 | CXR | 2D pretrained models such as ResNet101, Xception, InceptionV3, MobileNet, and NASNet | InstaCovNet-19 | Accuracy (3-class): 99.08%, accuracy (binary): 99.53% |
| 37 | [ | Datasets contain COVID-19, pneumonia and normal images. | CXR | 2D | 5 pretrained CNNs | Accuracy: 95.00% |
| 38 | [ | Datasets contain bacterial pneumonia, non-COVID viral pneumonia, and COVID-19 images. | CXR | 2D | COVID-CAPS | Sensitivity: 90%, specificity: 95.8%, accuracy: 95.7% |
| 39 | [ | Total images (COVID-19 and normal): 5,000, COVID-19 images: 184 | CXR | 2D | TL + pretrained models | Sensitivity: 100%, specificity: 98.38% |
| 40 | [ | Total images (COVID-19 and normal): 526, COVID-19 images: 238 | CXR + CT | 2D | TL + AlexNet model | Sensitivity: 72%, specificity: 100%, accuracy: 94.1% |
| 41 | [ | Total images (COVID-19 and normal): 320, COVID-19 images: 160 | CXR + CT | 2D Apache Spark framework | TL + InceptionV3 & ResNet50 | Sensitivity: 72%, specificity: 100%, accuracy: 99.01% |
| 42 | [ | Total images (COVID-19, pneumonia, and normal): 4,575, COVID-19 images: 1,525 | CXR | 2D CNN used for deep feature extraction, and LSTM is used for detection using the extracted feature | LSTM+CNN | Sensitivity: 99.2%, specificity: 99.9%, accuracy: 99.4% |
| 43 | [ | Dataset 1 images (COVID-19, pneumonia, and normal): 4,448, COVID-19 images: 2,479, dataset 2 images (COVID-19, pneumonia, and normal): 101, COVID-19 images: 52 | CXR | 2D | 3D inception V1 | Dataset 1: accuracy: 99.4%; dataset 2: sensitivity: 98.08%, specificity: 91.30%, accuracy: 93.3% |
| 44 | [ | Total images (COVID-19, pneumonia, and normal): 1,343, COVID-19 images: 446 | CXR | 2D | Conditional GAN: LightCovidNet | Accuracy: 97.28% |
| 45 | [ | Total images (COVID-19 and normal): 8,504, COVID-19 images: 445 | CXR | 2D | TL VGG-16 model | Sensitivity: 98.0%, specificity: 100.00%, accuracy: 94.5% |
| 46 | [ | Total images (COVID-19 and normal): 746, COVID-19 images: 349 | CT | 2D | TL + ensemble of 15 pretrained models: EfficientNet (B0–B5), NASNetLarge, NASNetMobile, InceptionV3, ResNet-50, SeResNet-50, Xception, DenseNet121, ResNeXt50, and Inception-ResNet-v2 | Accuracy: 85.0% |
| 47 | [ | Total images (COVID-19 and normal): 2,482, COVID-19 images: 1,252 | CT | 2D | AE + random forest | Specificity: 98.77%, accuracy: 97.87% |
| 48 | [ | Total images (COVID-19 and normal): 50, COVID-19 images: 25 | CXR | 3D | COVIDX-Net | Sensitivity: 100.00%, specificity: 80.00% |
| 49 | [ | Total images (COVID-19 and normal): 800, COVID-19 images: 400 | CXR | 2D modern and traditional machine learning methods: ANN, SVM with linear and RBF kernels | Deep learning models: MobileNet V2, ResNet50, GoogleNet, DarkNet, and Xception | ResNet50 accuracy: 98.8% |
| 50 | [ | Total images (COVID-19 and normal): 800, COVID-19 images: 400 | CXR | 2D CLAHE and a Butterworth bandpass filter were applied to enhance contrast and eliminate noise. | Hybrid multimodal deep learning system: COVID-DeepNet | Sensitivity: 99.9%, specificity: 100.0%, accuracy: 99.3% |
| 51 | [ | Datasets from Cohen et al. [ | CXR | 2D benchmarking of diagnostic models: a decision matrix embedding 10 evaluation criteria and 12 diagnostic models, also known as multicriteria decision making (MCDM) | TOPSIS is applied for benchmarking and ranking, while entropy is used to calculate the criteria weights. SVM is selected as the best diagnosis model | Coefficient value: 0.9899 |
| 52 | [ | Total images (COVID-19 and normal): 800, COVID-19 images: 400 | CXR | 2D hybrid deep learning framework: pretrained deep learning models incorporating ResNet34 and a high-resolution network model | COVID-CheXNet system | Sensitivity: 99.98%, specificity: 100.0%, accuracy: 99.99% |
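The result columns in the table above report sensitivity, specificity, accuracy, and F1 score. As a reference sketch (illustrative, not taken from any surveyed paper), these metrics derive from the binary confusion matrix as follows:

```python
# Compute the screening metrics reported in the table above from binary
# labels (1 = COVID-19 positive, 0 = negative). Illustrative sketch only.
def metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    sensitivity = tp / (tp + fn)              # recall on COVID-19 cases
    specificity = tn / (tn + fp)              # recall on non-COVID cases
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}
```

AUC, also reported by several entries, additionally requires the models' continuous scores rather than hard 0/1 predictions, so it is omitted from this sketch.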
Available data sources of COVID-19 radiology images, covering both chest X-ray and CT.
| No. | Sources | Data type | No. of images | File format | Image modality | Links |
|---|---|---|---|---|---|---|
| 1 | J. P. Cohen's GitHub | Viral pneumonia (SARS, varicella, influenza) and COVID-19, bacterial pneumonia (Streptococcus spp., Klebsiella spp., Escherichia coli, Mycoplasma spp., Legionella spp., unknown, Chlamydophila spp.) and COVID-19, fungal (Pneumocystis spp., lipoid) and COVID-19 | Raw images: 910, annotated images: 210 | jpg and png | CXR |
| 2 | European Society of Radiology | Total cases or images unknown | N/A | CXR and CT |
| 4 | Kaggle | Posterior-anterior (PA), anterior-posterior (AP) lateral for X-rays and axial or coronal for CT scans | Normal images: 1,576, pneumonia ARDS images: 2, viral pneumonia images: 1,493, COVID-19 images: 58, SARS images: 4, bacterial pneumonia images: 2,772, bacterial Streptococcus images: 5 | png, jpg, jpeg, and others | CXR and CT |
| 5 | UCSD-AI4H | Total: 349 images from 216 patients | COVID-19 images: 349, non-COVID-19 images: 397 | jpg and png | CT |
| 6 | MedSeg | Images were segmented by a radiologist using 3 labels: ground-glass (mask value = 1), consolidation (=2), and pleural effusion (=3). | Image volumes—9 volumes, a total of >800 slices, COVID-19 masks 350 annotated slices. Lung masks > 700 annotated slices | jpg | CT |
| 7 | COVID-19 Radiography Database | COVID-19 images: 219, normal images: 1,341, viral pneumonia images: 1,345 | png | CXR |
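Most sources above distribute images in one folder per class. A hedged sketch of indexing such a layout and splitting it for training — the folder names and `*.png` pattern are assumptions modelled on the COVID-19 Radiography Database, not taken from the source listing:

```python
# Hedged sketch: index a class-per-folder radiograph collection and make a
# reproducible train/test split. Folder names are illustrative assumptions.
import random
from pathlib import Path

def index_dataset(root, classes=("COVID-19", "NORMAL", "Viral Pneumonia")):
    """Return shuffled (path, integer-label) pairs from root/<class>/*.png."""
    pairs = [(p, label)
             for label, name in enumerate(classes)
             for p in sorted(Path(root, name).glob("*.png"))]
    random.Random(0).shuffle(pairs)  # fixed seed => reproducible split
    return pairs

def train_test_split(pairs, train_frac=0.8):
    """Simple holdout split; stratified splitting would be a refinement."""
    cut = int(len(pairs) * train_frac)
    return pairs[:cut], pairs[cut:]
```

The resulting (path, label) pairs can then be fed to any of the CNN training pipelines summarised in the tables above.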
Challenges in radiology imaging and the AI applications that address them.
| No | Applications | Type of data | Challenges | AI methods | Sources |
|---|---|---|---|---|---|
| 1 | COVID-19 early detection using radiology images, typically CXR and CT | CXR images | Limited availability of annotated medical images makes medical image classification one of the biggest challenges in medical diagnosis. | DeTraC deep convolutional neural network | [ |
| 2 | CXR images | Finding optimal parameters for the SVM classifier can be seen as a challenge, and finding optimal values for the Relief algorithm is another limitation of the study. | COVIDetectioNet | [ |
| 3 | CT images | Redundant data such as interferential vessels can be misdiagnosed as pathology. Radiologists have difficulty differentiating COVID-19 from other atypical and viral pneumonia diseases, which appear similar in CT imagery and present similar symptoms. | AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, Xception | [ |
| 4 | CXR images | Due to the sudden emergence and infectious nature of COVID-19, systematic collection of an extensive dataset for CNN training is difficult. Biomarkers found in CXR images can be misleading. | Patch-based convolutional network | [ |
| 5 | CXR images | The research deals with images taken directly from patients with severe COVID-19 or some form of pneumonia, whereas in the real world most people are unaffected by pneumonia. The limited amount of available data restricts the feasibility of the results. | Multiclass and hierarchical classification, using texture descriptors and a pretrained CNN model | [ |
| 6 | CXR images | Insufficient pulmonary disease data limits the verification techniques that can be conducted. | Localisation of CXR areas symptomatic of COVID-19 presence | [ |
| 7 | CT images | Shortage of labelled radiology image data | Segmentation deep network (Inf-Net) | [ |
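Several challenges in the table above trace back to scarce annotated data. Alongside the GAN-based synthesis used by some surveyed works, classical augmentation is a common mitigation; below is a minimal, library-free sketch on an image represented as a 2D list of pixel rows. It is illustrative only — flips in particular must be applied with care to chest images, where anatomy is not left-right symmetric:

```python
# Minimal augmentation sketch to stretch a small labelled dataset.
# Images are 2D lists of pixel values; no external libraries assumed.
def hflip(img):
    """Mirror left-right."""
    return [list(reversed(row)) for row in img]

def vflip(img):
    """Mirror top-bottom."""
    return [row[:] for row in reversed(img)]

def rotate90(img):
    """Rotate 90 degrees clockwise: the bottom-up first column becomes row one."""
    return [list(col) for col in zip(*img[::-1])]

def augment(img):
    """Return the original plus three transformed copies."""
    return [img, hflip(img), vflip(img), rotate90(img)]
```

In practice the surveyed pipelines apply such transforms (plus small rotations, crops, and intensity shifts) on the fly during training rather than materialising the enlarged dataset.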