Runnan He, Shiqi Xu, Yashu Liu, Qince Li, Yang Liu, Na Zhao, Yongfeng Yuan, Henggui Zhang.
Abstract
Medical imaging provides a powerful tool for medical diagnosis. In computer-aided diagnosis and treatment of liver cancer based on medical imaging, accurate segmentation of the liver region from abdominal CT images is an essential step. However, owing to defects in the liver tissue and the limitations of the CT imaging process, the gray level of the liver region in CT images is heterogeneous, and the boundaries between the liver and adjacent tissues and organs are blurred, which makes liver segmentation an extremely difficult task. In this study, to address the low segmentation accuracy of the original three-dimensional (3D) U-Net, an improved network based on the 3D U-Net is proposed. Moreover, to mitigate the shortage of training data caused by the difficulty of acquiring labeled 3D data, the improved 3D U-Net is embedded into a generative adversarial network (GAN) framework, which establishes a semi-supervised 3D liver segmentation optimization algorithm. Finally, because the 3D abdominal fake images generated from random-noise input are of poor quality, a deep convolutional neural network (DCNN) based on a feature restoration method is designed to generate more realistic fake images. Tested on the LiTS-2017 and KiTS19 datasets, the proposed semi-supervised 3D liver segmentation method greatly improves liver segmentation performance, achieving a Dice score of 0.9424 and outperforming other methods.
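The Dice score reported in the abstract measures the overlap between a predicted mask and the ground-truth mask. A minimal numpy sketch of how such a score is computed on binary 3D volumes (the smoothing term `eps` is a common convention to avoid division by zero, not a detail taken from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volumes standing in for a prediction and a label map.
a = np.zeros((4, 4, 4), dtype=np.uint8)
b = np.zeros((4, 4, 4), dtype=np.uint8)
a[1:3, 1:3, 1:3] = 1          # 8 voxels
b[1:3, 1:3, 1:4] = 1          # 12 voxels, 8 of them overlapping
print(round(dice_score(a, b), 3))  # 2*8 / (8+12) = 0.8
```

A Dice of 1.0 indicates a perfect voxel-wise match, which is why scores such as 0.9424 indicate very close agreement with the manual annotation.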
Keywords: 3D segmentation of liver; CT image; feature restoration; generative adversarial networks; semi-supervised
Year: 2022 PMID: 35071275 PMCID: PMC8777029 DOI: 10.3389/fmed.2021.794969
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Figure 1. The schematic diagram of the semi-supervised deep learning framework for liver segmentation.
Figure 2. The structure of the SE module.
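The SE (squeeze-and-excitation) module of Figure 2 recalibrates channel responses: spatial information is squeezed into one value per channel, a small bottleneck network produces per-channel attention weights, and the feature map is rescaled. A minimal numpy sketch of the forward pass, assuming a reduction ratio of 2 and randomly initialized weights (the paper's actual layer sizes are not reproduced here):

```python
import numpy as np

def se_block(x, w1, b1, w2, b2):
    """Squeeze-and-excitation channel attention for a (C, D, H, W) 3D feature map."""
    # Squeeze: global average pool over the spatial dims -> one value per channel.
    z = x.mean(axis=(1, 2, 3))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives a weight in (0, 1) per channel.
    h = np.maximum(z @ w1 + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    # Rescale each channel of the input by its attention weight.
    return x * s[:, None, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels and reduction ratio (assumed)
x = rng.standard_normal((C, 4, 4, 4))
w1, b1 = rng.standard_normal((C, C // r)) * 0.1, np.zeros(C // r)
w2, b2 = rng.standard_normal((C // r, C)) * 0.1, np.zeros(C)
y = se_block(x, w1, b1, w2, b2)
print(y.shape)  # (8, 4, 4, 4)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.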
Figure 3. The improved 3D U-Net model.
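The improved 3D U-Net of Figure 3 adds pyramid pooling, which aggregates context at several scales and fuses it with the original features. A minimal numpy sketch, assuming average pooling at bin sizes 1, 2, and 4 with nearest-neighbor upsampling and channel concatenation (these bin sizes and the fusion scheme are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def avg_pool_3d(x, bins):
    """Average-pool a (C, D, H, W) map down to (C, bins, bins, bins)."""
    C, D, H, W = x.shape
    return x.reshape(C, bins, D // bins, bins, H // bins, bins, W // bins).mean(axis=(2, 4, 6))

def pyramid_pooling(x, bin_sizes=(1, 2, 4)):
    """Pool at several scales, upsample back, and concatenate along channels."""
    C, D, H, W = x.shape
    outs = [x]
    for b in bin_sizes:
        p = avg_pool_3d(x, b)
        # Nearest-neighbor upsampling back to the input resolution.
        up = np.repeat(np.repeat(np.repeat(p, D // b, axis=1), H // b, axis=2), W // b, axis=3)
        outs.append(up)
    return np.concatenate(outs, axis=0)

x = np.ones((2, 8, 8, 8))
y = pyramid_pooling(x)
print(y.shape)  # (8, 8, 8, 8): 2 original channels + 3 pooled copies of 2 channels
```

The coarse bins summarize large regions of the volume, which helps disambiguate voxels whose local gray levels are similar to neighboring organs.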
Figure 4. The generator structure.
Figure 5. The flow chart of generating fake images.
Figure 6. The flow chart of discriminator training.
Figure 7. The flow chart of generator training.
Figure 8. Fake images generated by the generator: (A) 1,000 iterations; (B) 5,000 iterations; (C) 10,000 iterations; (D) 15,000 iterations; (E) 20,000 iterations; (F) real images.
Comparison of experimental results on the LiTS-2017 dataset.

| Method | Dice | |
|---|---|---|
| 3D U-Net | 0.9160 | 0.881 |
| 3D U-Net+SE+Pyramid pooling | 0.9304 | 0.905 |
| 3D U-Net+SE+Pyramid pooling+GAN | **0.9424** | |

The bold values represent the highest score.
Comparison of experimental results with other methods.

| Method | Dice |
|---|---|
| DenseNet (42) | 0.923 |
| 3D DenseUNet-65 (43) | 0.929 |
| FCN+ACM (44) | 0.943 |
| GIU-Net (45) | |
| 3D U-Net+SE+Pyramid pooling+GAN | 0.942 |

The bold values represent the highest score.
Figure 9. Schematic representation of the liver 3D segmentation results.
Figure 10. Comparison of the 3D surface plots of the two algorithms.
Comparison of experimental results on the KiTS19 dataset.

| Method | Dice | |
|---|---|---|
| 3D-UNet | 0.906 | 0.871 |
| 3D-UNet+SE+Pyramid pooling+GAN | | |

The bold values represent the highest score.
Figure 11. Schematic representation of the kidney 3D segmentation results.