Ju Zhang, Lundun Yu, Decheng Chen, Weidong Pan, Chao Shi, Yan Niu, Xinwei Yao, Xiaobin Xu, Yun Cheng.
Abstract
As COVID-19 spreads around the world, testing and screening of patients have become a major burden for governments. With the accumulation of clinical diagnostic data, the imaging big-data features of COVID-19 are gradually becoming clear, and CT-based diagnosis is growing in importance. Extracting clear lesion information from CT images of patients' lungs helps doctors choose effective treatments and, at the same time, helps screen out truly infected patients. Deep learning is widely used for medical image segmentation, but applying it to the lung lesions of COVID-19 patients poses challenges. Image segmentation requires lesions to be labeled on a pixel-by-pixel basis, yet most professional radiologists are occupied screening and diagnosing patients on the front line and cannot spare the effort to label large amounts of image data. In this paper, an improved Dense GAN is developed to expand the data set, and a multi-layer attention mechanism combined with U-Net is proposed for COVID-19 pulmonary CT image segmentation. Experimental results show that, compared with other image segmentation methods, the proposed method improves the segmentation accuracy on COVID-19 pulmonary CT images.
Keywords: Attention; COVID-19; Deep learning; Generative Adversarial Network; Medical image segmentation
Year: 2021 PMID: 34178095 PMCID: PMC8220920 DOI: 10.1016/j.bspc.2021.102901
Source DB: PubMed Journal: Biomed Signal Process Control ISSN: 1746-8094 Impact factor: 3.880
Fig. 1 CT scan of the lung of a COVID-19 patient.
Fig. 2 The overall network structure.
Fig. 3 Basic GAN network structure.
Fig. 4 DCGAN network structure.
Fig. 5 Novel generative network model of Dense GAN.
The pseudocode for DenseGAN (standard minimax GAN updates).
| for number of training iterations do |
| ●Sample batch of m noise vectors {z^(1), …, z^(m)} from the noise prior and a batch of m real images {x^(1), …, x^(m)} |
| ●Generate batch of m images G(z^(1)), …, G(z^(m)) with the generator |
| ●Input the real and generated images to the discriminator |
| ●Update the discriminator by ascending its stochastic gradient: |
| ∇θd (1/m) Σᵢ [log D(x^(i)) + log(1 − D(G(z^(i))))] |
| ●Update the generator by descending its stochastic gradient: |
| ∇θg (1/m) Σᵢ log(1 − D(G(z^(i)))) |
| end for |
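The two gradient steps in the pseudocode use the standard minimax GAN objectives. As an illustration only (not the authors' code; function names are made up for this sketch), the two quantities can be computed directly from the discriminator's outputs:

```python
import math

def discriminator_objective(d_real, d_fake):
    """(1/m) * sum[log D(x_i) + log(1 - D(G(z_i)))] -- the quantity
    the discriminator ASCENDS. Inputs are D's outputs, each in (0, 1)."""
    m = len(d_real)
    return sum(math.log(r) + math.log(1.0 - f)
               for r, f in zip(d_real, d_fake)) / m

def generator_objective(d_fake):
    """(1/m) * sum[log(1 - D(G(z_i)))] -- the quantity the
    generator DESCENDS (fooling D makes this more negative)."""
    return sum(math.log(1.0 - f) for f in d_fake) / len(d_fake)

# A discriminator that separates real from fake well scores higher:
sharp = discriminator_objective([0.9, 0.8], [0.1, 0.2])
blurry = discriminator_objective([0.55, 0.5], [0.5, 0.45])
assert sharp > blurry
```

Each update alternates: one ascent step on the first objective for the discriminator, then one descent step on the second for the generator.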
Fig. 6 A novel multilayer attention mechanism U-Net network model.
Fig. 7 The edge attention module.
Fig. 8 The shape attention module.
Fig. 9 The local attention module.
Fig. 10 Heatmap distribution of the attention mechanism.
Fig. 11 Training images.
Fig. 12 Generated images.
Fig. 13 Loss and accuracy during model training.
Fig. 14 The segmentation effects of the experiments.
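Figs. 7–9 name three attention modules, but this record does not spell out their internals. Purely as a hypothetical sketch, one common way such gates re-weight U-Net skip-connection features is an additive attention gate (in the spirit of Attention U-Net): a (0, 1) mask is computed from the skip features and a coarser gating signal, then multiplied onto the skip features. All shapes and weights below are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate: weight skip features x by a mask
    derived from x and the gating signal g (both flattened to
    (positions, channels) here for simplicity)."""
    q = relu(x @ W_x + g @ W_g)   # joint feature, shape (N, inter)
    alpha = sigmoid(q @ psi)      # attention coefficients in (0, 1), shape (N, 1)
    return x * alpha              # re-weighted skip connection

# Toy sizes: N spatial positions, C channels, inter intermediate dims.
N, C, inter = 6, 4, 3
x = rng.standard_normal((N, C))
g = rng.standard_normal((N, C))
W_x = rng.standard_normal((C, inter))
W_g = rng.standard_normal((C, inter))
psi = rng.standard_normal((inter, 1))
out = attention_gate(x, g, W_x, W_g, psi)
```

Because alpha lies in (0, 1), the gate can only attenuate skip features, letting the decoder suppress irrelevant background regions.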
Evaluation results of each network.
| Methods | Backbone | Parameters | Dice | Sensitivity | Precision | MAE | |
|---|---|---|---|---|---|---|---|
| U-Net | VGG16 | 7.853 M | 0.439 | 0.534 | 0.858 | 0.186 | 0.622 |
| Gated U-Net | VGG16 | 175.093 K | 0.623 | 0.658 | 0.926 | 0.102 | 0.725 |
| U-Net++ | VGG16 | 9.163 M | 0.581 | 0.672 | 0.902 | 0.120 | 0.722 |
| Dense U-Net | DenseNet161 | 45.082 M | 0.515 | 0.594 | 0.840 | 0.184 | 0.655 |
| Inf-Net | Res2Net | 33.122 M | 0.682 | 0.692 | 0.943 | 0.082 | 0.781 |
| Our Method | VGG16 | 28.538 M | 0.683 | 0.698 | 0.946 | 0.075 | 0.792 |
Evaluation results of Ablation Studies.
| Methods | Dice | Sensitivity | Precision | MAE | |
|---|---|---|---|---|---|
| (a) GAN | 0.468 | 0.572 | 0.820 | 0.173 | 0.628 |
| (b) DCGAN | 0.537 | 0.613 | 0.871 | 0.136 | 0.683 |
| (c) GAN + MA | 0.563 | 0.638 | 0.893 | 0.129 | 0.703 |
| (d) DCGAN + MA | 0.642 | 0.663 | 0.928 | 0.098 | 0.758 |
| (e) Our Method | 0.683 | 0.698 | 0.946 | 0.075 | 0.792 |
MA: Multilayer attention mechanism.
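For reference, the named metrics in both tables have standard definitions for binary segmentation masks. This is a generic sketch of those definitions (not the authors' evaluation code), assuming both masks contain at least one positive pixel:

```python
def segmentation_metrics(pred, gt):
    """Dice, sensitivity (recall), precision, and MAE for flat 0/1 masks."""
    tp = sum(p and g for p, g in zip(pred, gt))          # lesion pixels found
    fp = sum(p and not g for p, g in zip(pred, gt))      # background marked lesion
    fn = sum(not p and g for p, g in zip(pred, gt))      # lesion pixels missed
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of true lesion pixels recovered
    precision = tp / (tp + fp)     # fraction of predicted lesion pixels correct
    mae = sum(abs(p - g) for p, g in zip(pred, gt)) / len(gt)
    return dice, sensitivity, precision, mae

# 1 true positive, 1 false positive, 1 false negative:
print(segmentation_metrics([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.5, 0.5, 0.5, 0.5)
```

Higher Dice, sensitivity, and precision are better; lower MAE is better, which matches the ordering of the methods in both tables.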