Yao Song, Jun Liu, Xinghua Liu, Jinshan Tang.
Abstract
BACKGROUND: Automated segmentation of COVID-19 infection lesions and the assessment of the severity of the infections are critical in COVID-19 diagnosis and treatment. Based on a large amount of annotated data, deep learning approaches have been widely used in COVID-19 medical image analysis. However, the number of medical image samples is generally huge, and it is challenging to obtain enough annotated medical images for training a deep CNN model.Entities:
Keywords: COVID-19; lesion segmentation; self-supervised learning
Year: 2022 PMID: 35892518 PMCID: PMC9332359 DOI: 10.3390/diagnostics12081805
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Example of blood vessel orientation and rotation in CT images of COVID-19 patients.
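Figure 1 motivates an orientation-based pretext task for self-supervised pre-training. A minimal sketch of rotation-prediction self-supervision (assuming simple 90-degree rotations via `np.rot90`; the paper's exact pretext task may differ):

```python
import numpy as np

def make_rotation_sample(slice_2d: np.ndarray, rng: np.random.Generator):
    """Rotate a 2D CT slice by a random multiple of 90 degrees.

    Returns the rotated slice and the rotation class (0..3), which a
    small classifier can be trained to predict without any lesion labels.
    """
    k = int(rng.integers(0, 4))          # 0, 90, 180, or 270 degrees
    return np.rot90(slice_2d, k), k
```

Because the rotation label is generated from the image itself, an encoder can be pre-trained on unannotated CT slices before fine-tuning on the (much smaller) labeled set.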
Figure 2. The framework of the proposed method.
Encoder-Decoder Network Architecture.
| Module Name | Num | Layers | Input | Parameters |
|---|---|---|---|---|
| Encoder block 1 | 2 | {conv, batchnorm, ReLU} | 2D CT slices | 34 K |
| pooling layer 1 | 1 | max-pooling | Encoder block 1 | - |
| Encoder block 2 | 2 | {conv, batchnorm, ReLU} | pooling layer 1 | 68 K |
| pooling layer 2 | 1 | max-pooling | Encoder block 2 | - |
| Encoder block 3 | 2 | {conv, batchnorm, ReLU} | pooling layer 2 | 68 K |
| pooling layer 3 | 1 | max-pooling | Encoder block 3 | - |
| Encoder block 4 | 2 | {conv, batchnorm, ReLU} | pooling layer 3 | 68 K |
| pooling layer 4 | 1 | max-pooling | Encoder block 4 | - |
| Encoder block 5 | 2 | {conv, batchnorm, ReLU} | pooling layer 4 | 2465 K |
| Decoder block 4 | 1 | {up-sample, conv, batchnorm, ReLU, concat} | Encoder block 5 | 358 K |
| | 2 | {conv, batchnorm, ReLU} | Decoder block 4 | |
| Decoder block 3 | 1 | {up-sample, conv, batchnorm, ReLU, concat} | Decoder block 4 | 137 K |
| | 2 | {conv, batchnorm, ReLU} | Decoder block 3 | |
| Decoder block 2 | 1 | {up-sample, conv, batchnorm, ReLU, concat} | Decoder block 3 | 137 K |
| | 2 | {conv, batchnorm, ReLU} | Decoder block 2 | |
| Decoder block 1 | 1 | {up-sample, conv, batchnorm, ReLU, concat} | Decoder block 2 | 137 K |
| | 1 | {conv, batchnorm, ReLU} | Decoder block 1 | |
| 1 × 1 Conv block | 1 | 1 × 1 conv | Decoder-block 1 | 0.25 K |
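The Parameters column can be checked by counting weights layer by layer. A minimal sketch for one conv + batchnorm layer (the channel sizes in the comment are illustrative, not taken from the paper):

```python
def conv_bn_params(kernel: int, c_in: int, c_out: int) -> int:
    """Parameters of one 2D conv (with bias) followed by batchnorm.

    conv: kernel*kernel*c_in weights per output channel, plus one bias;
    batchnorm: a per-channel scale and shift (2 * c_out).
    """
    conv = (kernel * kernel * c_in + 1) * c_out
    bn = 2 * c_out
    return conv + bn

# e.g. a single 3x3 conv layer mapping 64 -> 64 channels:
# conv_bn_params(3, 64, 64) -> 36928 + 128 = 37056 (~37 K)
```

Summing such counts over the two conv layers of each encoder/decoder block reproduces per-block totals of the same order as those listed above.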
Figure 3. Three examples of CT images of different severity.
Number of annotated samples of each severity level in the datasets.
| Datasets | 3D-COVID | COVID19-Seg | CC-COVID |
|---|---|---|---|
| Slight | 200 | 60 | 47 |
| Medium | 250 | 30 | 29 |
| Severe | 150 | 10 | 24 |
| Total | 600 | 100 | 100 |
Comparison of Different Methods in COVID-19 Severity Classification Experiment.
| Category | Method | Accuracy | Precision | Recall | F1 Score |
|---|---|---|---|---|---|
| Supervised | VGG19 | 73.21 | 68.60 | 63.26 | 65.82 |
| | ResNet50 | 75.89 | 73.19 | 69.58 | 71.34 |
| | DenseNet121 | 76.64 | 78.51 | 72.95 | 75.62 |
| Self-supervised | Rotation | 82.33 | 78.39 | 65.00 | 71.06 |
| | Wu et al. | 84.69 | 88.63 | 74.34 | 80.85 |
| | MoCo v1 | 88.33 | 82.49 | 70.53 | 76.04 |
| | SimCLR | 84.21 | 79.17 | 71.88 | 75.34 |
| | Ours | | | | |
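The classification metrics in the table follow the standard confusion-matrix definitions. A minimal sketch (the counts in the test are illustrative, not the paper's):

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Precision, recall, and F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

F1 is the harmonic mean of precision and recall, which is why a method with high precision but low recall (e.g. the Rotation row) ends up with a modest F1.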
Comparison of Dice coefficients of three segmentation methods under different labeled data volumes.
| Labels | Method | Dice (%) |
|---|---|---|
| 10% | Ours | |
| | U-Net | 58.06 |
| | U-Net++ | 57.21 |
| 30% | Ours | |
| | U-Net | 66.44 |
| | U-Net++ | 66.27 |
| 70% | Ours | |
| | U-Net | 74.09 |
| | U-Net++ | 76.32 |
| 100% | Ours | |
| | U-Net | 79.33 |
| | U-Net++ | 81.16 |
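The Dice coefficient used throughout these comparisons measures the overlap between predicted and ground-truth lesion masks. A minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-7) -> float:
    """Dice = 2*|P ∩ T| / (|P| + |T|) for binary masks.

    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))
```

A Dice of 1.0 means perfect overlap; the percentages in the table are this value scaled by 100.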
Figure 4. Comparison of lesion segmentation results under different labeled data volumes.
Figure 5. Comparison of lesion segmentation results by different methods.
Comparison of Dice Coefficients for Different Enhanced Combinations.
| Crop | Color Jitter | Random Erasing | Binarization | Dice |
|---|---|---|---|---|
| | | | | 77.12 |
| ✓ | | | | 79.44 |
| | ✓ | | | 78.79 |
| | | ✓ | | 77.89 |
| | | | ✓ | |
| ✓ | ✓ | ✓ | | 80.83 |
| ✓ | ✓ | | ✓ | 81.66 |
| ✓ | | ✓ | ✓ | 81.19 |
| | ✓ | ✓ | ✓ | 82.91 |
| ✓ | ✓ | ✓ | ✓ | |
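The ablation above combines crop, color jitter, random erasing, and binarization. A hedged NumPy sketch of two of these augmentations (the erase size and threshold are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def random_erase(img: np.ndarray, size: int,
                 rng: np.random.Generator) -> np.ndarray:
    """Zero out a random size x size square (a simple random-erasing step)."""
    out = img.copy()
    h, w = out.shape
    y = int(rng.integers(0, h - size + 1))
    x = int(rng.integers(0, w - size + 1))
    out[y:y + size, x:x + size] = 0
    return out

def binarize(img: np.ndarray, threshold: float) -> np.ndarray:
    """Threshold an intensity image to a binary image."""
    return (img > threshold).astype(img.dtype)
```

In a contrastive pre-training pipeline, such transforms would be sampled per view so the model learns features invariant to the chosen combination.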
Comparison of model performance under different parameter settings.
| Parameter Setting | Accuracy | Precision | Dice | F1 Score |
|---|---|---|---|---|
| | 85.67 | 83.95 | 76.51 | 80.05 |
| | 87.69 | 84.42 | 80.98 | 82.66 |
| | 89.03 | 87.12 | 82.37 | 84.67 |
| | | | | |
| | 92.21 | 88.55 | 81.23 | 84.73 |
| | 78.42 | 76.97 | 69.38 | 72.98 |
Comparison of pre-training experiments with different data volumes.
| Pre-Training | Accuracy | Precision | Recall | F1 Score | Dice |
|---|---|---|---|---|---|
| All | 95.49 | 93.66 | 86.98 | 90.19 | 84.91 |
| Dataset 1 | 83.52 | 79.19 | 78.30 | 78.74 | 75.26 |