Abstract
ABSTRACT: Coronavirus disease (COVID-19) has spread worldwide. X-ray and computed tomography (CT) are two technologies widely used for image acquisition, segmentation, diagnosis, and evaluation. Artificial intelligence can accurately segment infected regions in X-ray and CT images, assist doctors in improving diagnostic efficiency, and facilitate subsequent assessment of the severity of a patient's infection. Medical assistant platforms based on machine learning can help radiologists make clinical decisions and assist in screening, diagnosis, and treatment. By providing scientific methods for image recognition, segmentation, and evaluation, we summarize the latest developments in the application of artificial intelligence to COVID-19 lung imaging and provide guidance and inspiration to researchers and doctors fighting the COVID-19 virus.
Year: 2021 PMID: 34516488 PMCID: PMC8428739 DOI: 10.1097/MD.0000000000026855
Source DB: PubMed Journal: Medicine (Baltimore) ISSN: 0025-7974 Impact factor: 1.817
Figure 1. Vehicle-mounted CT machine and remote scan operation by doctors. (A) Vehicle-mounted mobile CT. (B) Doctors issue reports based on the CT.
Characteristics of 4 common respiratory viral infections.
| | Distribution | Consolidation | GGO | Nodule | Bronchial wall thickening | Pleural effusion |
|---|---|---|---|---|---|---|
| COVID-19 | Peripheral, multifocal | +++ | + | Rare | Rare | Rare |
| RSV | Airway | + | + | Centrilobular +++ | Rare | Rare |
| COP | Subpleural, peribronchial | +++ | Rare | Rare | Rare | Rare |
| AIP | Diffuse or upper lung | Rare | +++ | Rare | +++ | Rare |
| DIP | Lower lung, peripheral, subpleural | Rare | + | Rare | Rare | Rare |
| Adenovirus | Multifocal | +++ | +++ | Centrilobular +++ | Rare | Rare |
| H1N1 | Lower lung | +++ | +++ | +++ | Rare | + |
Figure 2. Typical cross-sectional CT image.
Figure 3. General processing flow of a lung medical imaging CAD system.
Application of image segmentation in COVID-19.
| Literature | Modality | Method | Target ROI | Application | Highlights |
|---|---|---|---|---|---|
| Zheng et al | CT | U-Net | Lung | Diagnosis | Weakly supervised method using pseudo labels |
| Cao et al | CT | U-Net | Lung, lesion | Quantification | Quantitative CT for an objective assessment of pulmonary involvement and therapy response in COVID-19 |
| Huang et al | CT | U-Net | Lung, lung lobes, lesion | Quantification | Patients were divided into mild, moderate, severe, and critical types |
| Qi et al | CT | U-Net | Lung lobes, lesion | Quantification | CT radiomics models based on logistic regression (LR) and random forest (RF) |
| Gozes et al | CT | U-Net / commercial software | Lung, lesion | Diagnosis | Combination of 2D and 3D methods |
| Li et al | CT | U-Net | Lesion | Diagnosis | Differentiates COVID-19 and CAP in chest CT images |
| Chen et al | CT | U-Net++ | Lesion | Diagnosis | Greatly improves the efficiency of radiologists in clinical practice |
| Jin et al | CT | U-Net++ | Lung, lesion | Quantification | Joint segmentation and classification |
| Shan et al | CT | VB-Net | Lung, lung lobes, lung segments, lesion | Quantification | Human-in-the-loop |
| Tang et al | CT | Commercial software | Lung, lesion, trachea, bronchus | Quantification | The images highlight the distribution of pulmonary lesions |
| Shen et al | CT | Commercial software | Lesion | Quantification | Quantitative CT analysis for stratifying COVID-19 |
| Tan et al | CT | Inf-Net | Lung, lesion | Quantification | Only a small number of labeled images are needed; unlabeled data is mainly used |
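Several of the quantification studies above evaluate how well a predicted lesion mask overlaps a radiologist-annotated one, typically with the Dice coefficient. A minimal sketch in Python (the function name and toy masks are illustrative, not drawn from any cited paper):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary masks of equal shape:
    2*|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    intersection = 0
    size_a = 0
    size_b = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for va, vb in zip(row_a, row_b):
            intersection += va and vb  # 1 only where both masks are 1
            size_a += va
            size_b += vb
    if size_a + size_b == 0:
        return 1.0
    return 2.0 * intersection / (size_a + size_b)

# Toy example: a predicted lesion mask vs. a reference annotation.
pred = [[0, 1, 1],
        [0, 1, 0]]
truth = [[0, 1, 0],
         [0, 1, 1]]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap; values near 0 indicate the segmentation missed the annotated region.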
Figure 4. U-Net structure.
Figure 5. Inf-Net structure.
Figure 6. Exemplary axial CT slices of COVID-19-infected regions; the red and green masks represent consolidation and GGO, respectively.
Medical-imaging research on AI-assisted diagnosis of COVID-19.
| Study | Modality | Subjects | Task | Method | Result |
|---|---|---|---|---|---|
| Ghoshal et al | X-ray | 70 COVID-19; others (number of subjects not available) | Classification: COVID-19/Others | Bayesian DNN | Accuracy: 92.9% |
| Narin et al | X-ray | 50 COVID-19; 50 normal | Classification: COVID-19/Normal | ResNet50, InceptionV3, Inception-ResNetV2 | ResNet50 accuracy: 98%; InceptionV3 accuracy: 97%; Inception-ResNetV2 accuracy: 87% |
| Zhang et al | X-ray | 70 COVID-19; 1008 others; 931 Bac. Pneu.; 660 Vir. Pneu.; 1203 normal | Classification: COVID-19/Others | ResNet | Sensitivity: 96.0%; specificity: 70.7%; accuracy: 95.2% |
| Hemdan et al | X-ray | 50 COVID-19; 25 normal | Classification: COVID-19/Normal | VGG19, Google MobileNet | Accuracy: 91%; sensitivity: 89% |
| Ozturk et al | X-ray | 224 COVID-19; 700 Bac. Pneu.; 504 normal | Classification: COVID-19/Bac. Pneu./Normal | DarkNet (YOLO) | Accuracy: 98.08% |
| Chen et al | CT | 51 COVID-19; 55 others | Classification: COVID-19/Others | UNet++ | Accuracy: 95.2%; sensitivity: 100% |
| Zheng et al | CT | 313 COVID-19; 229 others | Classification: COVID-19/Others | U-Net + DeCoVNet | Sensitivity: 90.7%; accuracy: 91.1%; specificity: 95.9% |
| Das et al | CT | 496 COVID-19; 1385 others | Classification: COVID-19/Others | Inception v3 architecture | Sensitivity: 94.1%; accuracy: 95.5% |
| Jin et al | CT | 723 COVID-19; 413 others | Classification: COVID-19/Others | Segmentation + classification | Sensitivity: 97.4%; accuracy: 92.2% |
| Wang et al | CT | 44 COVID-19; 55 Vir. Pneu. | Classification: COVID-19/Vir. Pneu. | Transfer learning | Accuracy: 82.9% |
| Ying et al | CT | 88 COVID-19; 100 Bac. Pneu.; 86 normal | Classification: COVID-19/Bac. Pneu./Normal | ResNet-50 | Accuracy: 86.0% |
| Ozcan et al | CT | 219 COVID-19; 224 influenza A; 175 normal | Classification: COVID-19/Influenza A/Normal | GoogleNet, ResNet18, ResNet50 | Accuracy: 86.7% |
| Li et al | CT | 468 COVID-19; 1551 CAP; 1445 non-pneumonia | Classification: COVID-19/CAP/Non-pneumonia | ResNet-50 | Sensitivity: 90.0%; accuracy: 96.0% |
| Shi et al | CT | 1658 COVID-19; 1027 CAP | Classification: COVID-19/CAP | CovMUNET | Accuracy: 87.9%; specificity: 90.7%; sensitivity: 83.3% |
| Chen et al | CT | 98 COVID-19 | Classification: COVID-19/Others | Clinical features model, radiological semantic features model | Accuracy: 94%; specificity: 79% |
| Wang, Lin, Wong | CT | 13,645 COVID-19 | Classification: COVID-19/Others | COVID-Net: a deep CNN | Accuracy: 92.4% |
| Xu et al | CT | 110 COVID-19 | Classification: COVID-19/Others | 3-dimensional deep learning model | Accuracy: 86.7% |
| Wang et al | CT | 1065 CT images (325 COVID-19, 740 viral pneumonia) | Classification: COVID-19/Others | Modified Inception transfer-learning model | Accuracy: 79.3%; specificity: 83%; sensitivity: 67% |
| Li et al | CT | 3322 COVID-19 | Classification: COVID-19/Others | COV-Net | Accuracy: 95% |
| Butt et al | CT | 528 COVID-19 | Classification: COVID-19/Vir. Pneu./Normal | Location-attention oriented model | Accuracy: 99.6%; specificity: 92.2%; sensitivity: 98.2% |
| Zhang et al | CT and X-ray | 42 COVID-19 patients; 44 healthy | Classification: COVID-19/Others | End-to-end multiple-input deep convolutional attention network (MIDCAN) | Accuracy: 98.02%; specificity: 97.95%; sensitivity: 98.10% |
| Wang et al | CT | 284 COVID-19; 281 community-acquired pneumonia; 293 secondary pulmonary tuberculosis; 306 healthy | Classification: COVID-19/CAP/Secondary pulmonary tuberculosis/Healthy | CCSHNet | Precision: 97.32%; specificity: 95.61%; F1 score: 96.46% |
| Yu et al | CT | 148 COVID-19; 148 healthy | Classification: COVID-19/Others | ResGNet | Accuracy: 96.62%; specificity: 95.91%; sensitivity: 97.33% |
Figure 7. Process flow chart of a deep learning model for classification.
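The classification flow in Figure 7 ends with the network's raw class scores being converted to probabilities, usually via softmax, and the most probable class reported. A minimal sketch (the logits and class list below are made-up placeholders, not the output of any cited model):

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities (numerically stable:
    subtracting the max before exponentiating avoids overflow)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a classifier might emit for one CT slice.
classes = ["COVID-19", "Bac. Pneu.", "Normal"]
logits = [2.1, 0.3, -0.5]
probs = softmax(logits)
prediction = classes[probs.index(max(probs))]
print(prediction)  # → COVID-19 (the largest logit wins under softmax)
```

Softmax is monotonic, so the predicted label is simply the class with the largest raw score; the probabilities matter when a confidence threshold is applied before flagging a scan for radiologist review.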
Comparison results of COVID-19 detection and recognition models.
| Method | Specificity | Sensitivity | Accuracy |
|---|---|---|---|
| Modified Inception transfer-learning model | 0.83 | 0.67 | 0.793 |
| COV-Net | – | – | 0.95 |
| STAR methods | 0.9113 | 0.9493 | 0.9249 |
| CovMUNET | 0.907 | 0.833 | 0.879 |
| DeCoVNet | 0.959 | 0.911 | 0.911 |
| COVID-Net | – | – | 0.924 |
| MIDCAN | 0.9795 | 0.9810 | 0.9802 |
| ResGNet | 0.9591 | 0.9733 | 0.9662 |
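The specificity, sensitivity, and accuracy values compared above all derive from the binary confusion matrix. A minimal sketch of the arithmetic (the counts are illustrative, not taken from any cited study):

```python
def metrics(tp, fn, fp, tn):
    """Binary classification metrics from confusion-matrix counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/total."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

# Illustrative counts: 90 true positives, 10 false negatives,
# 5 false positives, 95 true negatives.
sens, spec, acc = metrics(tp=90, fn=10, fp=5, tn=95)
print(sens, spec, acc)  # 0.9 0.95 0.925
```

Because the studies above use very different class balances, accuracy alone can mislead; sensitivity and specificity together give a fuller picture of screening performance.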