Jinping Liu, Hui Liu, Zhaohui Tang, Weihua Gui, Tianyu Ma, Subo Gong, Quanquan Gao, Yongfang Xie, Jean Paul Niyoyita.
Abstract
Accurate segmentation of brain tumors from magnetic resonance (MR) images plays a pivotal role in assisting diagnosis, treatment and postoperative evaluation. However, due to structural complexities such as fuzzy tumor boundaries with irregular shapes, accurate 3D brain tumor delineation is challenging. In this paper, an intersection-over-union (IOU)-constrained 3D symmetric fully convolutional neural network (IOUC-3DSFCNN) fused with multimodal auto-context is proposed for 3D brain tumor segmentation. IOUC-3DSFCNN incorporates 3D residual groups into the classic 3DU-Net to further deepen the network structure and obtain more abstract voxel features under a five-layer cohesion architecture that ensures model stability. The IOU constraint is used to address the extreme imbalance between tumor foreground and background regions in MR images. In addition, to obtain more comprehensive and stable 3D brain tumor profiles, multimodal auto-context information is fused into the IOUC-3DSFCNN model to achieve end-to-end 3D brain tumor delineation. Extensive confirmatory and comparative experiments on the benchmark BRATS 2017 dataset demonstrate that the proposed segmentation model is superior to classic 3DU-Net variants and other state-of-the-art segmentation models, achieving accurate 3D tumor profiles on multimodal MRI volumes even with blurred tumor boundaries and heavy noise.
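The IOU constraint described in the abstract amounts to optimizing a differentiable (soft) intersection-over-union objective instead of a plain voxel-wise loss, which is far less sensitive to the foreground/background imbalance. A minimal sketch of such a soft IoU loss over flattened voxel probabilities (the function name and smoothing term are illustrative, not taken from the paper):

```python
def soft_iou_loss(probs, labels, eps=1e-6):
    """Soft IoU loss for a single foreground class.

    probs  -- predicted foreground probabilities in [0, 1], flattened
    labels -- binary ground-truth mask of the same length
    eps    -- smoothing term to avoid division by zero
    """
    inter = sum(p * y for p, y in zip(probs, labels))
    union = sum(p + y - p * y for p, y in zip(probs, labels))
    return 1.0 - (inter + eps) / (union + eps)
```

Because the union term only counts voxels touched by the prediction or the mask, a small tumor mispredicted as background still yields a large loss, unlike an accuracy-style objective dominated by the background class.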
Year: 2020 PMID: 32277141 PMCID: PMC7148375 DOI: 10.1038/s41598-020-63242-x
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1: Schematic of 3DSFCNN. CBL, ABL and RBL are shown from top to bottom, respectively. CBL extracts features, RBL restores spatial information, and ABL mainly adjusts the forward flow and back-propagation of network information.
Convolution and pooling layer parameter settings.
| Type | Filter size | Input size | Output size | Input filters | Output filters |
|---|---|---|---|---|---|
| Cov_1 | 3 × 3 × 3 | 128 × 128 × 128 | 128 × 128 × 128 | 4 | 32 |
| Cov_2 | 3 × 3 × 3 | 128 × 128 × 128 | 128 × 128 × 128 | 32 | 64 |
| Pooling_1 | 2 × 2 × 2 | 128 × 128 × 128 | 64 × 64 × 64 | 64 | 64 |
| Pooling_2 | 2 × 2 × 2 | 64 × 64 × 64 | 32 × 32 × 32 | 128 | 128 |
| Pooling_3 | 2 × 2 × 2 | 32 × 32 × 32 | 16 × 16 × 16 | 256 | 256 |
| Pooling_4 | 2 × 2 × 2 | 16 × 16 × 16 | 8 × 8 × 8 | 512 | 512 |
| Pooling_5 | 2 × 2 × 2 | 8 × 8 × 8 | 4 × 4 × 4 | 1024 | 1024 |
| Cov_3 | 3 × 3 × 3 | 4 × 4 × 4 | 4 × 4 × 4 | 1024 | 2048 |
| Cov_4 | 3 × 3 × 3 | 4 × 4 × 4 | 4 × 4 × 4 | 2048 | 4096 |
| Cov_5 | 3 × 3 × 3 | 8 × 8 × 8 | 8 × 8 × 8 | 5120 | 1024 |
| Cov_6 | 3 × 3 × 3 | 8 × 8 × 8 | 8 × 8 × 8 | 1024 | 512 |
| Cov_7 | 3 × 3 × 3 | 16 × 16 × 16 | 16 × 16 × 16 | 1024 | 512 |
| Cov_8 | 3 × 3 × 3 | 16 × 16 × 16 | 16 × 16 × 16 | 512 | 256 |
| Cov_9 | 3 × 3 × 3 | 32 × 32 × 32 | 32 × 32 × 32 | 512 | 256 |
| Cov_10 | 3 × 3 × 3 | 32 × 32 × 32 | 32 × 32 × 32 | 256 | 128 |
| Cov_11 | 3 × 3 × 3 | 64 × 64 × 64 | 64 × 64 × 64 | 256 | 128 |
| Cov_12 | 3 × 3 × 3 | 64 × 64 × 64 | 64 × 64 × 64 | 128 | 64 |
| Cov_13 | 3 × 3 × 3 | 128 × 128 × 128 | 128 × 128 × 128 | 128 | 64 |
| Cov_14 | 3 × 3 × 3 | 128 × 128 × 128 | 128 × 128 × 128 | 64 | 32 |
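As the table shows, every 3 × 3 × 3 convolution is padded so the spatial size is unchanged, while each 2 × 2 × 2 pooling halves it, shrinking the 128³ input to 4³ at the bottleneck. A quick sanity check of that spatial pyramid (the helper name is illustrative):

```python
def encoder_sizes(input_size=128, n_pools=5):
    # padded 3x3x3 convolutions keep the spatial size;
    # each 2x2x2 pooling halves it along every axis
    sizes = [input_size]
    for _ in range(n_pools):
        sizes.append(sizes[-1] // 2)
    return sizes
```

With five poolings this reproduces the 128 → 64 → 32 → 16 → 8 → 4 column of the table; the decoder mirrors the same pyramid upward, which is what makes the network "symmetric".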
Figure 2: Schematic of the residual block structure.
Parameter settings of residual groups.
| Type | Filter size | Input | Output |
|---|---|---|---|
| Res_1 | — | — | — |
| Res_2 | — | — | — |
| Res_3 | — | — | — |
| Res_4 | — | — | — |
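Whatever the exact filter settings of Res_1 through Res_4, each residual block follows the standard identity-shortcut pattern y = F(x) + x, so gradients can bypass the stacked convolutions. A toy sketch of the shortcut arithmetic (the transform argument is a stand-in for the block's convolutions, not the paper's exact layers):

```python
def residual_block(x, transform):
    # y = F(x) + x: the identity shortcut adds the input back
    # onto the learned transform's output, element-wise
    fx = transform(x)
    return [f + xi for f, xi in zip(fx, x)]
```

If the transform collapses to zero, the block degenerates to the identity mapping, which is what allows residual groups to deepen the network without destabilizing training.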
Figure 3: 3D Haar-like features.
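3D Haar-like features are box-sum contrasts that become cheap once a summed-volume table (the 3D integral image) is precomputed: any axis-aligned box sum then costs eight lookups. A sketch using nested lists (function names are illustrative):

```python
def integral_volume(vol):
    # S[z][y][x] = sum of vol[0..z][0..y][0..x] (inclusive prefixes)
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    S = [[[0.0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(Z):
        for y in range(Y):
            row = 0.0  # running sum along x within this (z, y) line
            for x in range(X):
                row += vol[z][y][x]
                S[z][y][x] = (row
                              + (S[z][y - 1][x] if y else 0.0)
                              + (S[z - 1][y][x] if z else 0.0)
                              - (S[z - 1][y - 1][x] if y and z else 0.0))
    return S

def box_sum(S, z0, y0, x0, z1, y1, x1):
    # sum of the volume over the inclusive box [z0..z1] x [y0..y1] x [x0..x1],
    # via 3D inclusion-exclusion on the summed-volume table (8 lookups)
    def g(z, y, x):
        return S[z][y][x] if z >= 0 and y >= 0 and x >= 0 else 0.0
    return (g(z1, y1, x1)
            - g(z0 - 1, y1, x1) - g(z1, y0 - 1, x1) - g(z1, y1, x0 - 1)
            + g(z0 - 1, y0 - 1, x1) + g(z0 - 1, y1, x0 - 1) + g(z1, y0 - 1, x0 - 1)
            - g(z0 - 1, y0 - 1, x0 - 1))
```

A two-box Haar-like feature is then just `box_sum(left half) - box_sum(right half)`, which responds to intensity edges regardless of the box size.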
Figure 4: Schematic of the proposed multimodal auto-context fused IOUC-3DSFCNN.
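Auto-context, in general, feeds a model's own probability map back as an extra input channel and re-segments for a few rounds, so each pass can exploit the spatial context of the previous prediction. A minimal sketch of that loop (the segment callable and the iteration count are illustrative placeholders, not the paper's exact pipeline):

```python
def auto_context(segment, volume, n_iters=3):
    # segment(volume, prior) -> refined probability map of the same length
    prior = [0.5] * len(volume)  # start from an uninformative prior
    for _ in range(n_iters):
        prior = segment(volume, prior)
    return prior
```

Each iteration replaces the prior with the model's latest output, so contextual evidence (e.g. neighboring-voxel predictions) accumulates across rounds.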
Figure 5: Four modalities of HGG and LGG. From left to right: t1, t1Gd, t2, and Flair.
Figure 6: Experimental results of subject 1. From top to bottom: raw images, ground truth, and the segmentation results of the proposed method and of the 3DU-Net proposed by Çiçek et al.[20], respectively.
Figure 7: Experimental results of subject 2. From top to bottom: raw images, ground truth, and the segmentation results of the proposed method and of the 3DU-Net proposed by Çiçek et al.[20], respectively.
Performance comparison of the proposed multimodal auto-context fused IOUC-3DSFCNN model with the 3DSFCNN model.
| Methods | DICE (Whole) | DICE (Core) | DICE (Enh.) | Recall (Whole) | Recall (Core) | Recall (Enh.) | Precision (Whole) | Precision (Core) | Precision (Enh.) | HD (Whole, mm) | HD (Core, mm) | HD (Enh., mm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| IOUC-3DSFCNN | 0.82 | 0.81 | 0.70 | 0.88 | 0.80 | 0.79 | 0.84 | 0.74 | — | 7.63 | 8.31 | 9.08 |
| IOUC-3DSFCNN + Auto-context | 0.85 | — | — | — | — | — | — | — | — | — | — | — |
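The DICE, recall and precision columns in these tables all derive from voxel-level confusion counts; a compact reference implementation on flattened binary masks (names are illustrative):

```python
def seg_metrics(pred, truth):
    # voxel-level confusion counts for one tumor sub-region
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    dice = 2 * tp / (2 * tp + fp + fn)   # overlap score in [0, 1]
    recall = tp / (tp + fn)              # fraction of tumor recovered
    precision = tp / (tp + fp)           # fraction of prediction correct
    return dice, recall, precision
```

The HD(mm) column is different in kind: it is the Hausdorff surface-to-surface distance in millimetres, so lower is better, whereas the three overlap scores are higher-is-better.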
Figure 8: Segmentation result comparison. (a–d) show four different slices of the same brain tumor.
Performance comparison using different modalities.
| Modalities | DICE (Whole) | DICE (Core) | DICE (Enh.) | Recall (Whole) | Recall (Core) | Recall (Enh.) | Precision (Whole) | Precision (Core) | Precision (Enh.) | HD (Whole, mm) | HD (Core, mm) | HD (Enh., mm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Flair | 0.80 | 0.77 | 0.68 | 0.82 | 0.80 | 0.71 | 0.85 | 0.83 | 0.71 | 6.55 | 7.16 | 9.44 |
| t1 | 0.79 | 0.65 | 0.56 | 0.75 | 0.78 | 0.63 | 0.77 | 0.55 | 0.61 | 7.31 | 12.19 | 15.67 |
| t1Gd | 0.81 | 0.80 | 0.74 | 0.85 | 0.80 | 0.75 | 0.81 | 0.79 | 0.75 | 5.76 | 7.47 | 9.29 |
| t2 | 0.84 | 0.79 | 0.80 | 0.84 | 0.74 | 0.72 | 0.86 | 0.84 | 0.73 | 5.88 | 6.33 | 8.90 |
| Flair + t1 + t1Gd + t2 | 0.88 | 0.84 | 0.78 | 0.89 | 0.85 | 0.79 | 0.85 | 0.86 | 0.76 | 4.28 | 7.94 | 7.83 |
Figure 9: Segmentation results using MR images of different modalities. (a–d) show the raw images of the four single modalities; (e) shows the ground truth; (f–i) show the segmentation results of the Flair, t1, t1Gd, and t2 modalities; and (j) shows the segmentation result on the multimodal MR images.
Figure 10: P-R curve comparison.
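A P-R curve is traced by sweeping the decision threshold over the predicted voxel probabilities and recording (recall, precision) at each cut. A bare-bones sketch (the function name is illustrative; real evaluations handle score ties and interpolation more carefully):

```python
def pr_curve(scores, labels):
    # sweep thresholds from the highest predicted score downward
    pairs = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    positives = sum(labels)
    points = []  # one (recall, precision) point per threshold
    for _, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        points.append((tp / positives, tp / (tp + fp)))
    return points
```

A curve that hugs the top-right corner (high precision at high recall) dominates; this is how the figure ranks the compared models independently of any single threshold.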
Comparison with the 3DU-Net brain tumor segmentation method.
| Methods (with multimodality) | DICE (Whole) | DICE (Core) | DICE (Enh.) | Recall (Whole) | Recall (Core) | Recall (Enh.) | Precision (Whole) | Precision (Core) | Precision (Enh.) | HD (Whole, mm) | HD (Core, mm) | HD (Enh., mm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| — | 0.87 | 0.83 | 0.77 | 0.90 | 0.80 | 0.84 | 0.84 | 0.85 | 0.75 | 7.57 | 10.44 | 13.06 |
Performance comparison of other brain tumor segmentation methods.
| Methods (with multimodality MRI) | DICE (Whole) | DICE (Core) | DICE (Enh.) | Recall (Whole) | Recall (Core) | Recall (Enh.) | Precision (Whole) | Precision (Core) | Precision (Enh.) | HD (Whole, mm) | HD (Core, mm) | HD (Enh., mm) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| — | 0.89 | 0.78 | 0.91 | 0.84 | 0.85 | 0.88 | 0.85 | 0.78 | 6.32 | 7.85 | — | — |
| Hu | 0.89 | 0.82 | 0.77 | 0.90 | 0.84 | 0.86 | 0.87 | 0.83 | 0.71 | — | — | — |
| Pereira | 0.88 | 0.83 | 0.77 | 0.89 | 0.83 | 0.81 | 0.88 | 0.87 | 0.74 | — | — | — |
| Razzak | 0.89 | 0.79 | 0.75 | 0.88 | 0.80 | 0.80 | 0.88 | 0.87 | 0.68 | — | — | — |
| Yang | 0.82 | 0.88 | 0.78 | — | — | — | — | — | — | — | — | — |
| Zhao | 0.88 | 0.84 | 0.77 | 0.86 | 0.82 | 0.80 | 0.76 | — | — | — | — | — |
| Sun | 0.84 | 0.72 | 0.62 | 0.89 | 0.73 | 0.69 | 0.82 | 0.77 | 0.60 | — | — | — |
| Xue | 0.85 | 0.70 | 0.66 | 0.80 | 0.65 | 0.62 | 0.92 | 0.80 | 0.69 | — | — | — |
| Isensee | 0.89 | 0.79 | 0.73 | 0.89 | 0.78 | 0.79 | — | — | — | 6.97 | 9.48 | 4.55 |
| Chen | 0.83 | 0.73 | 0.64 | 0.84 | 0.74 | 0.80 | — | — | — | 36.4 | 25.59 | 30.31 |
| Wang | 0.87 | 0.77 | 0.78 | — | — | — | — | — | — | 6.55 | 27.04 | 15.90 |
| Wang | 0.87 | 0.79 | 0.74 | — | — | — | — | — | — | 4.16 | 5.97 | 6.71 |