| Literature DB >> 35286874 |
Haseeb Hassan1, Zhaoyu Ren2, Chengmin Zhou2, Muazzam A Khan3, Yi Pan4, Jian Zhao5, Bingding Huang6.
Abstract
Artificial intelligence (AI) and computer vision (CV) methods have become reliable at extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore surveys and organizes numerous deep learning-based COVID-19 computed tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared to previous review articles on the topic, this study categorizes the collected literature quite differently (i.e., in a multi-level arrangement). For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively in real-time clinical practice by reusing sophisticated pre-trained features rather than over-parameterizing standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and improving model efficacy. Deep learning (artificial intelligence) based models are mainly utilized for disease management and control. This review therefore helps readers comprehend the relevant perspectives of deep learning approaches for ongoing COVID-19 CT diagnosis research.
Keywords: COVID-19 CT deep learning; COVID-19 CT detection; COVID-19 CT diagnosis; Supervised learning; Weakly supervised learning
Year: 2022 PMID: 35286874 PMCID: PMC8897838 DOI: 10.1016/j.cmpb.2022.106731
Source DB: PubMed Journal: Comput Methods Programs Biomed ISSN: 0169-2607 Impact factor: 7.027
Fig. 1 Overview and arrangement of the collected literature based on supervised and weakly supervised learning.
COVID-19 diagnosis techniques based on CNN backbone networks.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Serte et al | COVID-19 and normal CT volumes classification | 80 normal CT scans and 19 COVID-19 CT scans | ResNet-50 | AUC: 96% |
| | | | Multi-Objective Differential Evolution (MODE)-based CNN | |
| Xu et al | Classification of COVID-19, IAVP, and healthy cases | 618 CT samples | Multiple CNN models | ACC: 86.7% |
| | | | | ACC: 91.79%, SEN: 93.05%, SPEC: 89.95%, AUC: 96.35%, PRC: 93.10%, F1-score: 93.07% |
| Rahimzadeh et al | Identification and classification of COVID-19 and normal patients | 63,849 chest CT images | ResNet50V2 | Accuracy: 98.49% |
| Jiantao et al | Distinguishing COVID-19 from CAP | 497 CT examinations | 3D CNN | AUC: 0.70 with 99% CI (0.56–0.85) |
| Abdullah et al | Classifying COVID and Non-COVID | 349 CT images | Sequential CNN | ACC: 92.48 |
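The accuracy, sensitivity, and specificity figures reported throughout these tables are standard confusion-matrix quantities. A minimal NumPy sketch (the function name is illustrative, not from any cited study):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute ACC, SEN (recall on positives), and SPEC from binary labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = int(((y_true == 1) & (y_pred == 1)).sum())
    tn = int(((y_true == 0) & (y_pred == 0)).sum())
    fp = int(((y_true == 0) & (y_pred == 1)).sum())
    fn = int(((y_true == 1) & (y_pred == 0)).sum())
    acc = (tp + tn) / y_true.size
    sen = tp / (tp + fn) if (tp + fn) else 0.0   # sensitivity / recall
    spec = tn / (tn + fp) if (tn + fp) else 0.0  # specificity
    return {"ACC": acc, "SEN": sen, "SPEC": spec}
```

AUC, also reported in several rows, additionally requires ranking the model's continuous scores rather than thresholded predictions.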
COVID-19 diagnosis techniques based on U-Net and its variants.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Gozes et al | Classification of COVID and non-COVID | Multiple international chest CT datasets (US-China) | RADLogics | AUC: 0.99(95%CI: 0.989–1.00), SEN: 98.2%, SPEC: 92.2% |
| Amine et al | COVID classification and COVID lesion segmentation | 1044 chest CT images, 744 images, and 100 CT scans | U-Net | |
| Pu et al | COVID identification and progression | 120 and 72 CT scans were used | U-Net | SEN: 95% (CI 94–97%) |
| Kuchana et al | Lung spaces and COVID anomalies segmentation | 20 chest CT scans and 929 slices | U-Net hyperparameters modification | F1-Score: 97.31% |
| Jun et al | COVID-19 identification | 46,096 Chest CT Images | U-Net++ and ResNet-50 | |
| Ni et al | COVID detection, voxel and pulmonary lobe segmentation | 19,291 CT scans | 3D U-Net | |
| Shuo et al | Segmentation and classification | CT data collection from five hospitals | ||
| | Automated segmentation of COVID-19 infected regions | Chest CT dataset from Ma et al | 3D U-Net | DSC for lungs: 0.956, DSC for infection: 0.761 |
COVID-19 CT diagnosis based on dense connections, multi-scale, attention mechanism, and inception.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Yang et al | COVID and non-COVID classification | 295 patient's chest CT Slices | DenseNet | |
| Liu et al | Classification of COVID and non-COVID-19 | CT data from 920 COVID patients and 1370 from non-COVID pneumonia patients | modified DenseNet-264 | AUC: 0.98, ACC: 94.3% |
| Yan et al | Detection of COVID-19 and differentiating it from other CP | 416 abnormal scans, 412 non-COVID pneumonia scans, and 412 pneumonia scans | MSCNN | SEN: 89.1%, SPEC: 85.7% |
| Qingsen et al | Segmentation of chest CT images with COVID-19 infections | 21,658 annotated chest CT images | Feature Variation (FV) and Progressive Atrous Spatial Pyramid Pooling (PASPP) | |
| Mohamad et al | COVID-19 detection | Total 2482 CT scan images | CNN, multi-scale features, and atrous convolution | ACC: 96.16%. |
| Ouyang et al | Diagnosing COVID-19 from CAP | 2186 CT scans for training | 3D CNN and dual-sampling attention network | AUC: 0.944, ACC: 87.5%, SEN: 86.9%, SPEC: 90.1%, F1-score: 82.0% |
| Bin et al | Binary classification of COVID positive and negative | Chest CT dataset from | Attention Mechanism | AUC: 94.0%, SEN: 88.8%, PRC:87.9%, F1-score:88.6% |
| Wang et al | Identification of COVID-19 and ILD | 936 normal CT scans. 2406 ILD CT scans | 3D ResNets with prior attention | ACC:93.3%, SEN: 87.6%, SPEC: 95.5% |
| Zhao et al | Segmentation of COVID-19 lung opacification | 19 lung CT scans | Spatial-wise attention module | DSC: 89.48%, SEN: 88.74% |
| Alom et al | Identification of COVID-19 patients | 420 CT samples collected from different sources | Modified Nabla-net | |
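Several of the frameworks above reweight feature maps with attention. A minimal NumPy sketch of generic scaled dot-product attention (a textbook formulation, not the specific channel- or spatial-attention modules used in the cited studies):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_attention(q, k, v):
    """q, k, v: (n, d) feature arrays; returns attended values and weights."""
    d = q.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d))  # (n, n); each row sums to 1
    return weights @ v, weights
```

Spatial attention modules such as the one in Zhao et al apply the same idea per location of a feature map, so that informative lung regions receive higher weights.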
COVID-19 CT diagnosis based on improved or novel loss functions.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Li et al | Improving the discrimination and detection performance of COVID-19 images | COVID-CT-Dataset | Stacked autoencoders | AA: 94.7%, PRC: 96.54%, RC: 94.1%, F1-score: 94.8% |
| Tongxue et al | Segmenting COVID-19 CT datasets | 473 CT slices | U-Net and Attention Mechanism | SEN: 86.7%, SPEC: 99.3% |
| Saeedizadeh et al | COVID-19 infected region segmentation and detection | Medseg Dataset | U-Net and 2D total variation | mIoU: 99%, Dice score: 86% |
| Wang et al | COVID lesion segmentation | CT scans of 558 COVID-19 patients | Dice loss, Mean Absolute Error (MAE) loss, and self-ensembling CNNs | Dice (%): 80.29±11.14, RVE (%): 17.72±23.40, HD95 (mm): 18.72±27.26 |
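The Dice loss used by Wang et al, like the DSC values reported elsewhere in these tables, derives from the Dice similarity coefficient between a predicted and a reference mask. A minimal NumPy sketch, assuming binary (or soft) masks of equal shape:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice coefficient; eps avoids division by zero for empty masks."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```

A perfect overlap yields a loss near 0, fully disjoint masks a loss near 1, which is why Dice-based losses cope better with the small infected regions typical of COVID-19 CT than plain pixel-wise accuracy.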
Selective information from collected domain adaptation-based transfer learning methods.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Zhang et al | COVID-19 screening | A self-created COVID-DA dataset collected from different online sources | Transferring domain knowledge from | F1-score: 92.98%, RC: 88.33%, PRC: 98.15%, AUC: 0.985 |
| Chen et al | COVID-19 infection segmentation | Medseg dataset | U-Net and conditional GAN | |
| Jin et al | COVID-19 infection segmentation | A dataset collected from three sources | Segmentation, adversarial learning, and class activation map (CAM) | COVID-19-T1 dataset |
| Li et al | COVID-19 infection detection | 300 slices from Zhongnan Hospital, Wuhan University, and the Medseg dataset | Vanilla ResNet50, Network-in-Network (NIN) | SEN: 94.2%, SPEC: 99.5%, ACC: 96.85% |
Selective information from few-shot learning-based transfer learning models.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Yifan et al | COVID-19 diagnosis | 6000 source domains | Siamese network structure | ACC: 0.8040±0.0356 |
| Voulodimos et al | COVID-19 infected area segmentation | Radiopaedia | U-Net and online few-shot learning | Few-shot U-Net (AUC): 0.968 |
| Abdel et al | COVID-19 infection segmentation | Medseg Dataset | Encoder (using Res2Net module) | |
| Chen et al | COVID-19 diagnosis | COVID-19 (+ve) scans from 216 patients | Pre-trained encoder | |
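Siamese networks like the one in Yifan et al compare pairs of embeddings; training typically uses a contrastive loss that pulls same-class pairs together and pushes different-class pairs beyond a margin. A minimal NumPy sketch (a generic formulation, not the cited authors' exact objective):

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """same=1 for a matching pair, 0 otherwise; embeddings are 1-D arrays."""
    d = np.linalg.norm(np.asarray(emb_a, float) - np.asarray(emb_b, float))
    if same:
        return d ** 2                    # pull matching pairs together
    return max(margin - d, 0.0) ** 2     # push mismatches past the margin
```

Because the loss is defined over pairs rather than individual labeled scans, even a few dozen COVID-positive cases yield many training pairs, which is what makes this family of methods attractive under data scarcity.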
COVID-19 diagnosis techniques based on Recurrent Neural Network (RNN).
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| DeepSense model | Extracting and classifying COVID-19 lung lesions | IEEE8023 | CNN and RNN | |
| Hasan et al | Classification of COVID-19, pneumonia, and healthy lungs | 321 chest CT scans in total: 118 from COVID-19 patients, 96 from pneumonia patients, and 107 from healthy people | LSTM neural network | ACC: 99.68% |
COVID-19 diagnosis using conventional transfer learning with pre-trained models.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Yu et al | COVID-19 detection | 148 chest CT images | GoogLeNet | ACC: 0.87, SPEC: 0.84, SEN: 0.90 |
| Wang et al | Classifying COVID-19 from other types of pneumonia | CT images from a total of 5372 patients, with additional information | DenseNet121 | |
| Aayush et al | Classification of COVID (+ve) and COVID (-ve) | 2492 CT scans, 68% for training, 17% for validation, and 15% for testing | Pre-trained DenseNet201 | |
| Pathak et al | Classification of COVID +ve and COVID -ve | 413 COVID +ve images | Pre-trained ResNet-32 | |
| Ali et al | COVID-19 diagnosis | 1020 CT slices from 108 patients. | Ten pre-trained convolutional neural networks | |
| Xuehai et al | COVID-19 diagnosis and construction of publicly available dataset | 349 positive CT scans | Train DenseNet-169 | |
| Kassania et al | To differentiate between COVID-19 and healthy participants | COVID-19 image data collection | Pre-trained CNN models with machine learning classifiers | |
| Singh et al | Classification of COVID (+ve), pneumonia, and tuberculosis | 2373 COVID, 2890 pneumonia infected, 3193 tuberculosis, and 3038 healthy images | VGG16, DenseNet201, and ResNet152V2 | |
| Fu et al | Detection and differentiating COVID-19 and other | Private CT dataset | Pre-trained ResNet-50 | |
| Pham et al | COVID-19 classification | COVID-CT-Dataset | 16 pre-trained CNNs | DenseNet-201: ACC (%): 96.20 ± 4.95, SEN (%): 95.78 ± 5.27, SPEC (%): 96.67 ± 4.59, F1-score: 0.96 ± 0.05, AUC: 0.98 ± 0.03 |
| Khan et al | COVID-19 classification | Radiopaedia COVID-19 dataset | Pretrained DenseNet-201 and contrast enhancement | Average classification ACC: 94.76% |
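Conventional transfer learning as surveyed above reuses a pre-trained backbone as a frozen feature extractor and trains only a small classifier head on the new task. A minimal NumPy sketch with a logistic-regression head over precomputed features (the feature array stands in for frozen backbone outputs; names are illustrative):

```python
import numpy as np

def train_linear_head(features, labels, lr=0.5, epochs=500):
    """Train a logistic-regression head; the backbone features stay frozen."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.01, size=features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid outputs
        grad = p - labels                              # dL/dz for log-loss
        w -= lr * features.T @ grad / labels.size
        b -= lr * grad.mean()
    return w, b

def predict(features, w, b):
    """Threshold the linear score at 0 (i.e., probability 0.5)."""
    return (features @ w + b > 0).astype(int)
```

Training only the head keeps the parameter count tiny, which is why this style of reuse suits the small COVID-19 CT datasets listed in the table.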
Selective information from COVID-19 CT data augmentation methods with pre-trained models.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Silva et al | COVID-19 detection | Datasets | EfficientNet | ACC: 87.6, F1-score: 86.19, AUC: 90.5 |
| Horry et al | COVID-19 detection | Multi-modal datasets, e.g., an X-ray dataset | Pre-trained CNN model with data augmentation | The VGG19 model performed best, achieving a precision of up to 84% for CT |
| Li et al | COVID-19 identification | 349 CT images | Pre-trained DenseNet-121 | ACC: 0.87, F1-score: 0.86 |
| Ko et al | COVID-19 diagnosis | 3993 chest CT images comprising COVID-19, other pneumonia, and non-pneumonia disease cases from various sources | Pre-trained CNN models including VGG16 | |
| Ahuja et al | COVID-19 detection with binary classification e.g., COVID and non-COVID | 349 positive CT images and 397 CT images of non-COVID patients | ResNet18, ResNet50, ResNet-101, and Squeeze-Net | |
| Zheng et al | COVID-19 detection | 499 CT volumes for training | Pre-trained U-Net, with random affine transformation and color jittering data augmentation techniques | ACC: 0.901, PPV: 0.840, and NPV:0.982 |
| Hu et al | Classification of COVID-19 positive and negative | 521 COVID-19 and 397 healthy subjects | ShuffleNet V2 | |
| Hassan et al | COVID-19 Prediction | CT Dataset | DenseNet-121 | |
| Shalbaf et al | COVID-19 detection | COVID-CT-dataset | EfficientNets(B0-B5), NasNetLarge, NasNetMobile, InceptionV3, ResNet-50, SeResnet 50, Xception, DenseNet121, ResNext50 and Inception_resnet_v2 | PRC: 0.857, RC: 0.854, and ACC: 0.85 |
| Singh et al | COVID-19 detection | Covid-19 image data collection | VGG16 architecture, principal component analysis (PCA), deep convolutional neural network (DCNN), extreme learning machine (ELM), online sequential ELM, bagging ensemble with support vector machine (SVM), and data augmentation techniques. | |
| Hu et al | COVID-19 infection detection | Total 450 patient scans, 150 chest CT exams of COVID-19, CAP and NP patients with additional information, and Lung Segmentation dataset | CNN and U-Net | |
| Bai et al | Classification of COVID and non-COVID | | | |
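Data augmentation in the methods above typically applies random geometric and intensity transforms to CT slices (e.g., the random affine transformation and color jittering used by Zheng et al). A minimal NumPy sketch with flips, 90-degree rotations, and additive intensity jitter, which is simpler than full affine warping:

```python
import numpy as np

def augment_slice(img, rng):
    """Randomly flip, rotate by a multiple of 90°, and jitter intensities."""
    out = np.asarray(img, dtype=float)
    if rng.random() < 0.5:
        out = np.fliplr(out)                         # random horizontal flip
    out = np.rot90(out, k=int(rng.integers(0, 4)))   # random 90° rotation
    out = out + rng.normal(0.0, 0.01, size=out.shape)  # intensity jitter
    return out
```

Each epoch then sees slightly different versions of every slice, which reduces overfitting on the small labeled CT sets listed in this table.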
Selective information from GAN-based data augmentation COVID-19 diagnosis methods.
| Source | Operations | No of CT Scans/Images/Slices/Patients | Adopted Framework | Results |
|---|---|---|---|---|
| Loey et al | COVID-19 detection | Utilized CGAN network and data augmentation to construct 4425 images for the training set and 418 for the validation set from the dataset | Pre-trained CNNs (AlexNet, VGGNet16, VGGNet19, GoogleNet, and ResNet50) | ACC: 82.91%, SEN: 7.66%, SPE: 87.62% |
| Song et al | COVID-19 diagnosis and classification | Chest CT data from 227 patients (106 COVID-19 positive patients and 121 non-COVID-19 patients) | BigBiGAN | |
| Sedik et al | COVID-19 detection | 500 images and training set from | CNN, ConvLSTM, and data augmentation (image transformations) along with GANs | |
| Mobiny et al | COVID-19 classification | COVID-CT-Dataset | Capsule Networks (CapsNets) | |
| Goel et al | COVID-19 screening and classification | 1252 COVID-19 images | Generative adversarial network (GAN) and ResNet-50 | |
| Ghassemi et al | COVID-19 diagnosis | 1766 abnormal slices | several pre-trained CNN networks | ACC: 99.60% |