| Literature DB >> 34986785 | 3D AGSE-VNet: an automatic brain tumor MRI data segmentation framework
Xi Guan, Guang Yang, Jianming Ye, Weiji Yang, Xiaomei Xu, Weiwei Jiang, Xiaobo Lai.
Abstract
BACKGROUND: Glioma is the most common malignant brain tumor, with high morbidity and a mortality rate of more than three percent, and it seriously endangers human health. MRI is the main method of acquiring brain tumor images in the clinic. Segmenting brain tumor regions from multi-modal MRI scans is helpful for treatment review, post-diagnosis monitoring, and evaluating treatment effect in patients. However, clinical brain tumor segmentation is still largely performed manually, which is time-consuming and varies considerably between operators, so a consistent and accurate automatic segmentation method is urgently needed. With the continuous development of deep learning, researchers have designed many automatic segmentation algorithms, but some problems remain: (1) most segmentation research works on 2D slices, which reduces the accuracy of 3D image feature extraction to a certain extent; (2) MRI images carry gray-scale offset (bias) fields that make it difficult to delineate contours accurately.
Entities:
Keywords: Automatic segmentation; Brain tumor; Deep learning; Magnetic resonance imaging; VNet
Mesh:
Year: 2022 PMID: 34986785 PMCID: PMC8734251 DOI: 10.1186/s12880-021-00728-8
Source DB: PubMed Journal: BMC Med Imaging ISSN: 1471-2342 Impact factor: 1.930
Fig. 1 The overall architecture of the proposed 3D AGSE-VNet
Fig. 2 SE network module diagram
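Fig. 2's SE module follows the standard squeeze-and-excitation pattern: globally average-pool each channel, pass the channel vector through a small bottleneck MLP, and rescale the feature map channel-wise. Below is a minimal 3D sketch of that pattern in PyTorch; it illustrates the mechanism rather than reproducing the authors' exact layer sizes (`channels` and `reduction` are assumed parameters).

```python
import torch
import torch.nn as nn

class SEBlock3D(nn.Module):
    """Minimal squeeze-and-excitation block for 3D feature maps (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(             # excitation: channel bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                     # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c))  # (B, C) channel weights
        return x * w.view(b, c, 1, 1, 1)      # recalibrate channels by broadcasting
```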
Fig. 3 Attention Block schematic diagram
Fig. 4 Attention Guided Filter Block structure diagram
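The Attention Guided Filter block builds on the classic guided filter of He et al., which smooths a coarse feature map while following the edges of a guidance map. The sketch below implements only the plain guided-filter step for single-channel 3D tensors; the paper's block additionally injects a learned attention map, and the window radius `r` and regularizer `eps` here are assumed values.

```python
import torch
import torch.nn.functional as F

def box_filter(x: torch.Tensor, r: int) -> torch.Tensor:
    """Local mean over a (2r+1)^3 window via stride-1 average pooling."""
    return F.avg_pool3d(x, kernel_size=2 * r + 1, stride=1, padding=r)

def guided_filter(I: torch.Tensor, p: torch.Tensor, r: int = 2, eps: float = 1e-4) -> torch.Tensor:
    """Classic guided filter: the output locally approximates a linear function of the guide.
    I, p: (B, 1, D, H, W); I is the guidance map, p the map being filtered."""
    mean_I, mean_p = box_filter(I, r), box_filter(p, r)
    cov_Ip = box_filter(I * p, r) - mean_I * mean_p   # local covariance of guide and input
    var_I = box_filter(I * I, r) - mean_I * mean_I    # local variance of the guide
    a = cov_Ip / (var_I + eps)                        # per-window linear coefficient
    b = mean_p - a * mean_I
    return box_filter(a, r) * I + box_filter(b, r)    # q = mean(a) * I + mean(b)
```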
Fig. 5 The architecture of the encoder block and decoder block in AGSE-VNet
Fig. 6 Feature maps processed by the encoder and decoder
Fig. 7 Preprocessing results
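Fig. 7 shows the preprocessed inputs. A common preprocessing step for BraTS-style multi-modal MRI, and one way to reduce the impact of intensity variation, is per-modality z-score normalization over brain voxels. The sketch below is an assumption about a typical pipeline, not the paper's exact procedure (`load_modality` is a hypothetical loader).

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Z-score normalize one MRI modality over nonzero (brain) voxels.
    BraTS volumes are skull-stripped, so zero voxels are background."""
    brain = volume > 0
    mu, sigma = volume[brain].mean(), volume[brain].std()
    out = np.zeros_like(volume, dtype=np.float32)
    out[brain] = (volume[brain] - mu) / (sigma + 1e-8)
    return out

# Hypothetical usage: normalize the four BraTS modalities independently.
# volumes = {m: zscore_normalize(load_modality(m)) for m in ("t1", "t1ce", "t2", "flair")}
```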
Quantitative evaluation on the training and validation sets (ET = enhancing tumor, WT = whole tumor, TC = tumor core)
| Set | Dice ET | Dice WT | Dice TC | Sensitivity ET | Sensitivity WT | Sensitivity TC | Specificity ET | Specificity WT | Specificity TC | Hausdorff95 ET | Hausdorff95 WT | Hausdorff95 TC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Training | 0.70 | 0.85 | 0.77 | 0.72 | 0.83 | 0.74 | 0.99 | 0.99 | 0.99 | 35.70 | 8.96 | 17.40 |
| Validation | 0.68 | 0.85 | 0.69 | 0.68 | 0.83 | 0.65 | 0.99 | 0.99 | 0.99 | 47.40 | 8.44 | 31.60 |
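The four indicators are computed per tumor region from binary masks. Below is a minimal sketch of the usual definitions, assuming boolean NumPy volumes and one common distance-transform recipe for Hausdorff95; the official BraTS evaluation may differ in edge-case handling (e.g. empty masks).

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|P ∩ G| / (|P| + |G|) on boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def sensitivity(pred: np.ndarray, gt: np.ndarray) -> float:
    """TP / (TP + FN): the fraction of true tumor voxels recovered."""
    return np.logical_and(pred, gt).sum() / (gt.sum() + 1e-8)

def specificity(pred: np.ndarray, gt: np.ndarray) -> float:
    """TN / (TN + FP): the fraction of background voxels left as background."""
    return np.logical_and(~pred, ~gt).sum() / ((~gt).sum() + 1e-8)

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile symmetric surface distance (assumes non-empty masks)."""
    surf = lambda m: m & ~binary_erosion(m)           # one-voxel-thick boundary
    d_to_gt = distance_transform_edt(~surf(gt))       # distance of every voxel to the GT surface
    d_to_pred = distance_transform_edt(~surf(pred))   # distance to the predicted surface
    dists = np.concatenate([d_to_gt[surf(pred)], d_to_pred[surf(gt)]])
    return float(np.percentile(dists, 95))
```

The near-constant 0.99 specificity across all teams reflects class imbalance: background voxels vastly outnumber tumor voxels, so even sizeable false-positive regions barely reduce the true-negative rate.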
Fig. 8 Scatter plots and box plots of the four indicators on the training set
Fig. 9 Scatter plots and box plots of the four indicators on the validation set
Fig. 10 Segmentation results on the training set. a Example segmentation results in 2D. b Example segmentation results with 3D rendering
Fig. 11 Segmentation results on the validation set. a Example segmentation results in 2D. b Example segmentation results with 3D rendering
Results for each participating team on the training set
| Team | Dice ET | Dice WT | Dice TC | Sensitivity ET | Sensitivity WT | Sensitivity TC | Specificity ET | Specificity WT | Specificity TC | Hausdorff95 ET | Hausdorff95 WT | Hausdorff95 TC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Proposed | 0.70 | 0.85 | 0.77 | 0.72 | 0.83 | 0.74 | 0.99 | 0.99 | 0.99 | 35.70 | 8.96 | 17.40 |
| mpstanford | 0.60 | 0.78 | 0.72 | 0.56 | 0.80 | 0.75 | 0.99 | 0.99 | 0.99 | 35.95 | 17.68 | 17.21 |
| agussa | 0.67 | 0.87 | 0.79 | 0.69 | 0.87 | 0.82 | 0.99 | 0.99 | 0.99 | 39.25 | 15.75 | 17.05 |
| ovgu_seg | 0.65 | 0.81 | 0.75 | 0.72 | 0.78 | 0.76 | 0.99 | 0.99 | 0.99 | 34.79 | 9.50 | 8.93 |
| AI-Strollers | 0.59 | 0.73 | 0.61 | 0.52 | 0.73 | 0.64 | 0.99 | 0.97 | 0.98 | 38.87 | 20.81 | 24.22 |
| uran | 0.48 | 0.79 | 0.64 | 0.45 | 0.74 | 0.61 | 0.99 | 0.99 | 0.99 | 37.92 | 7.72 | 14.07 |
| CBICA | 0.54 | 0.78 | 0.57 | 0.64 | 0.82 | 0.53 | 0.99 | 0.99 | 0.99 | 20.00 | 46.30 | 39.60 |
| unet3d-sz | 0.69 | 0.81 | 0.75 | 0.77 | 0.93 | 0.83 | 0.99 | 0.96 | 0.98 | 37.71 | 19.57 | 18.36 |
| iris | 0.76 | 0.88 | 0.81 | 0.78 | 0.90 | 0.83 | 0.99 | 0.99 | 0.99 | 32.30 | 18.07 | 14.70 |
| VuongHN | 0.74 | 0.81 | 0.82 | 0.84 | 0.98 | 0.84 | 0.95 | 0.93 | 0.99 | 21.97 | 12.32 | 8.72 |
Results for each participating team on the validation set
| Team | Dice ET | Dice WT | Dice TC | Sensitivity ET | Sensitivity WT | Sensitivity TC | Specificity ET | Specificity WT | Specificity TC | Hausdorff95 ET | Hausdorff95 WT | Hausdorff95 TC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Proposed | 0.68 | 0.85 | 0.69 | 0.68 | 0.83 | 0.65 | 0.99 | 0.99 | 0.99 | 47.40 | 8.44 | 31.60 |
| mpstanford | 0.49 | 0.72 | 0.62 | 0.49 | 0.81 | 0.69 | 0.99 | 0.99 | 0.99 | 61.89 | 26.00 | 28.02 |
| agussa | 0.59 | 0.83 | 0.69 | 0.60 | 0.87 | 0.71 | 0.99 | 0.99 | 0.99 | 56.58 | 23.23 | 29.59 |
| ovgu_seg | 0.60 | 0.79 | 0.68 | 0.66 | 0.79 | 0.67 | 0.99 | 0.99 | 0.99 | 54.07 | 12.05 | 19.10 |
| AI-Strollers | 0.58 | 0.74 | 0.61 | 0.52 | 0.77 | 0.62 | 0.99 | 0.99 | 0.99 | 47.23 | 24.03 | 31.54 |
| uran | 0.75 | 0.88 | 0.76 | 0.77 | 0.85 | 0.71 | 0.99 | 0.99 | 0.99 | 36.42 | 6.62 | 19.30 |
| CBICA | 0.63 | 0.82 | 0.67 | 0.76 | 0.78 | 0.75 | 0.99 | 0.99 | 0.99 | 9.60 | 10.70 | 28.20 |
| unet3d-sz | 0.70 | 0.84 | 0.72 | 0.71 | 0.87 | 0.79 | 0.99 | 0.99 | 0.99 | 42.09 | 10.48 | 12.32 |
| iris | 0.68 | 0.86 | 0.73 | 0.67 | 0.90 | 0.70 | 0.99 | 0.99 | 0.99 | 44.13 | 23.87 | 20.02 |
| VuongHN | 0.79 | 0.90 | 0.83 | 0.80 | 0.89 | 0.80 | 0.99 | 0.99 | 0.99 | 21.43 | 6.74 | 7.05 |
Comparison of our proposed AGSE-VNet model with classic methods
| Method | Dice ET | Dice WT | Dice TC | Dataset |
|---|---|---|---|---|
| Proposed | 0.67 | 0.85 | 0.69 | BraTS 2020 |
| Zhou et al. | 0.65 | 0.87 | 0.75 | BraTS 2018 |
| Zhao et al. | 0.62 | 0.84 | 0.73 | BraTS 2016 |
| Pereira et al. | 0.65 | 0.78 | 0.75 | BraTS 2015 |
Fig. 12 Comparison of segmentation results with and without added noise