| Literature DB >> 35966244 |
Yongchao Jiang, Mingquan Ye, Peipei Wang, Daobin Huang, Xiaojie Lu.
Abstract
The automatic segmentation method of MRI brain tumors uses computer technology to segment and label tumor areas and normal tissues, which plays an important role in assisting doctors in the clinical diagnosis and treatment of brain tumors. This paper proposed a multiresolution fusion MRI brain tumor segmentation algorithm based on improved inception U-Net named MRF-IUNet (multiresolution fusion inception U-Net). By replacing the original convolution modules in U-Net with the inception modules, the width and depth of the network are increased. The inception module connects convolution kernels of different sizes in parallel to obtain receptive fields of different sizes, which can extract features of different scales. In order to reduce the loss of detailed information during the downsampling process, atrous convolutions are introduced in the inception module to expand the receptive field. The multiresolution feature fusion modules are connected between the encoder and decoder of the proposed network to fuse the semantic features learned by the deeper layers and the spatial detail features learned by the early layers, which improves the recognition and segmentation of local detail features by the network and effectively improves the segmentation accuracy. The experimental results on the BraTS (the Multimodal Brain Tumor Segmentation Challenge) dataset show that the Dice similarity coefficient (DSC) obtained by the method in this paper is 0.94 for the enhanced tumor area, 0.83 for the whole tumor area, and 0.93 for the tumor core area. The segmentation accuracy has been improved.Entities:
Year: 2022 PMID: 35966244 PMCID: PMC9371863 DOI: 10.1155/2022/6305748
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.809
Figure 1. The structure of the inception module.
Figure 2. The structure of the A-inception module.
Figure 3. The structure of the multiresolution fusion module.
Figure 4. Overall architecture of the MRF-IUNet model.
Figure 5. The structure of the ASPP module.
Figure 6. Example segmentation results on the BraTS dataset. From left to right and top to bottom: segmentation results of U-Net, ResU-Net, DenseU-Net, DeepLabv3+, MRF-IUNet (proposed), and ground truth. The whole tumor (WT) class includes all visible labels (the union of green, yellow, and red), the tumor core (TC) class is the union of red and yellow, and the enhancing tumor core (ET) class is shown in yellow.
Segmentation performance of different models.
| Model | DSC (ET) | DSC (WT) | DSC (TC) | IOU (ET) | IOU (WT) | IOU (TC) | PPV (ET) | PPV (WT) | PPV (TC) |
|---|---|---|---|---|---|---|---|---|---|
| U-Net | 0.9222 | 0.7696 | 0.9004 | 0.8998 | 0.7412 | 0.8805 | 0.9371 | 0.7828 | 0.9106 |
| ResU-Net | 0.9332 | 0.8167 | 0.9123 | 0.9107 | 0.7893 | 0.8915 | 0.9493 | 0.8373 | 0.9208 |
| DenseU-Net | 0.9339 | 0.7336 | 0.9225 | 0.9124 | 0.7057 | 0.9042 | 0.9526 | 0.7444 | 0.9356 |
| DeepLabv3+ | 0.8986 | 0.6819 | 0.8745 | 0.8697 | 0.6504 | 0.8521 | 0.9215 | 0.6922 | 0.8839 |
| MRF-IUNet | 0.94 | 0.83 | 0.93 | | | | | | |
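For reference, the metrics reported above are the Dice similarity coefficient (DSC), intersection over union (IOU), and positive predictive value (PPV). A minimal sketch of how they can be computed from binary masks follows; the function name and epsilon handling are illustrative choices, not taken from the paper.

```python
import numpy as np


def segmentation_metrics(pred, target, eps=1e-7):
    """Compute DSC, IOU, and PPV for a pair of binary masks.

    pred and target are boolean NumPy arrays of the same shape; eps guards
    against division by zero when a class is absent from both masks.
    """
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    dsc = (2 * tp + eps) / (2 * tp + fp + fn + eps)  # Dice similarity coefficient
    iou = (tp + eps) / (tp + fp + fn + eps)          # intersection over union
    ppv = (tp + eps) / (tp + fp + eps)               # positive predictive value
    return dsc, iou, ppv
```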