Bo Wang, Jingyi Yang, Hong Peng, Jingyang Ai, Lihua An, Bo Yang, Zheng You, Lin Ma.
Abstract
Automatic segmentation of brain tumors from multi-modality magnetic resonance (MR) image data has the potential to enable preoperative planning and intraoperative volume measurement. Recent advances in deep convolutional neural networks have opened up an opportunity to achieve end-to-end segmentation of brain tumor areas. However, the medical image data available for brain tumor segmentation are relatively scarce and the appearance of brain tumors varies widely, so it is difficult to find a learnable pattern that directly describes tumor regions. In this paper, we propose a novel cross-modality interactive feature learning framework to segment brain tumors from multi-modality data. The core idea is that multi-modality MR data contain rich patterns of normal brain regions, which can be easily captured and potentially used to detect abnormal regions, i.e., brain tumor regions. The proposed multi-modality interactive feature learning framework consists of two modules: a cross-modality feature extracting module and an attention-guided feature fusing module, which explore the rich patterns across modalities and guide the interaction and fusion of the features from the different modalities. Comprehensive experiments on the BraTS 2018 benchmark show that the proposed cross-modality feature learning framework effectively improves brain tumor segmentation performance compared with baseline methods and state-of-the-art methods.
Keywords: attention mechanism; brain tumor segmentation; deep neural network; feature fusion; multi-modality learning
Year: 2021 PMID: 34055832 PMCID: PMC8158657 DOI: 10.3389/fmed.2021.653925
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Figure 1. A brief illustration of the proposed multi-modality interactive feature learning framework for brain tumor segmentation.
Figure 2. Illustration of the network architecture of the segmentation process.
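The abstract and Figure 2 describe an architecture with one feature-extraction path per MR modality followed by an attention-guided fusion stage and a segmentation head. The following is a minimal PyTorch sketch of that overall layout only; the module names (ModalityEncoder, InteractiveSegNet), layer sizes, and the use of simple 3D convolutions are illustrative assumptions, not the paper's exact network.

```python
# Minimal sketch of a multi-modality interactive segmentation network.
# Names and layer sizes are hypothetical; only the overall structure
# (per-modality encoders -> fusion -> voxel-wise head) follows the paper's description.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Small per-modality convolutional encoder (hypothetical layer sizes)."""
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.encode(x)

class InteractiveSegNet(nn.Module):
    """One encoder per MR modality, a fusion layer, and a segmentation head."""
    def __init__(self, num_modalities=4, feat_ch=32, num_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList([ModalityEncoder(1, feat_ch) for _ in range(num_modalities)])
        self.fuse = nn.Conv3d(num_modalities * feat_ch, feat_ch, 1)  # stand-in for the fusion block
        self.head = nn.Conv3d(feat_ch, num_classes, 1)               # voxel-wise class logits

    def forward(self, x):  # x: (B, M, D, H, W), one channel per modality
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        fused = self.fuse(torch.cat(feats, dim=1))
        return self.head(fused)

# toy forward pass on a small random volume
net = InteractiveSegNet()
logits = net(torch.randn(1, 4, 32, 32, 32))  # -> (1, 4, 32, 32, 32)
```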
Figure 3. Illustration of the details of the reverse co-attention block, where "A" represents the average operation.
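Figure 3's reverse co-attention block is not fully specified in this record; the sketch below only illustrates one plausible reading, assuming the average operation "A" produces a shared cross-modality attention map that is then reversed (1 - attention) and used to re-weight each modality's features. The exact formulation in the paper may differ.

```python
# Hedged sketch of a reverse co-attention block (assumed formulation).
import torch
import torch.nn as nn

class ReverseCoAttention(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.to_attn = nn.Sequential(nn.Conv3d(feat_ch, 1, 1), nn.Sigmoid())

    def forward(self, modality_feats):                    # list of (B, C, D, H, W) tensors
        shared = torch.stack(modality_feats, 0).mean(0)   # "A": average across modalities
        attn = self.to_attn(shared)                       # shared attention map in (0, 1)
        reverse = 1.0 - attn                              # reverse attention
        return [f + f * reverse for f in modality_feats]  # residual re-weighting per modality

block = ReverseCoAttention(32)
outs = block([torch.randn(1, 32, 16, 16, 16) for _ in range(4)])
```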
Figure 4. Illustration of the details of the feature fusion block, where the operation "C" represents channel-wise concatenation.
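For the feature fusion block, the record only states that "C" is channel-wise concatenation. A minimal sketch of such a block follows; the 1x1x1 reduction convolution and the simple channel gating are assumptions standing in for the paper's exact fusion layers.

```python
# Sketch of a fusion block built around channel-wise concatenation ("C" in Figure 4).
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, feat_ch=32, num_modalities=4):
        super().__init__()
        self.reduce = nn.Conv3d(num_modalities * feat_ch, feat_ch, 1)  # fuse concatenated channels
        self.gate = nn.Sequential(                                     # simple channel attention (assumed)
            nn.AdaptiveAvgPool3d(1), nn.Conv3d(feat_ch, feat_ch, 1), nn.Sigmoid())

    def forward(self, modality_feats):                        # list of (B, C, D, H, W) tensors
        fused = self.reduce(torch.cat(modality_feats, dim=1)) # "C": channel-wise concatenation
        return fused * self.gate(fused)

fusion = FeatureFusion(32, 4)
out = fusion([torch.randn(1, 32, 16, 16, 16) for _ in range(4)])  # -> (1, 32, 16, 16, 16)
```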
Ablation study of the proposed approach and other baseline models on the BraTS 2018 validation set.
| Method | Dice (ET) | Dice (WT) | Dice (TC) | Dice (Mean) | Hausdorff95 (ET) | Hausdorff95 (WT) | Hausdorff95 (TC) | Hausdorff95 (Mean) |
| “ | 0.698 | 0.793 | 0.808 | 0.766 | 4.412 | 9.614 | 8.184 | 7.403 |
| “ | 0.517 | 0.876 | 0.749 | 0.714 | 10.461 | 5.668 | 9.472 | 8.534 |
| “ | 0.674 | 0.818 | 0.782 | 0.758 | 5.072 | 6.101 | 8.562 | 6.578 |
| Ours w/o CA | 0.778 | 0.885 | 0.819 | 0.827 | 3.841 | 5.912 | 7.291 | 5.681 |
| Ours w/ AT | 0.789 | 0.897 | 0.836 | 0.841 | 4.690 | 4.912 | 6.912 | 5.505 |
| Ours | 0.801 | 0.909 | 0.854 | 0.855 | 3.879 | 4.571 | 6.411 | 4.954 |
Higher Dice scores and lower Hausdorff95 distances indicate better results.
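The two metrics reported in these tables are standard for BraTS: Dice measures volumetric overlap between predicted and ground-truth masks, and Hausdorff95 is the 95th percentile of surface distances. The sketch below shows a generic way to compute both for a binary tumor mask; it is not the official BraTS evaluation code, and the surface distances are approximated mask-to-mask for brevity.

```python
# Generic Dice and Hausdorff95 computation for binary 3D masks (sketch).
import numpy as np
from scipy import ndimage

def dice_score(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hausdorff95(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    # distance from every voxel of one mask to the other mask (approximation of surface distance)
    d_pred_to_gt = ndimage.distance_transform_edt(~gt)[pred]
    d_gt_to_pred = ndimage.distance_transform_edt(~pred)[gt]
    return np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95)

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
gt   = np.zeros((32, 32, 32), bool); gt[10:22, 10:22, 10:22] = True
print(dice_score(pred, gt), hausdorff95(pred, gt))
```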
Comparison of the proposed approach with other state-of-the-art models on the BraTS 2018 validation set.
| Method | Dice (ET) | Dice (WT) | Dice (TC) | Dice (Mean) | Hausdorff95 (ET) | Hausdorff95 (WT) | Hausdorff95 (TC) | Hausdorff95 (Mean) |
| Myronenko | 0.823 | 0.910 | 0.867 | 0.866 | 3.926 | 4.516 | 6.855 | 5.099 |
| Isensee et al. | 0.809 | 0.913 | 0.863 | 0.861 | 2.410 | 4.270 | 6.520 | 4.400 |
| Puch et al. | 0.758 | 0.895 | 0.774 | 0.809 | 4.502 | 10.656 | 7.103 | 7.420 |
| Chandra et al. | 0.767 | 0.901 | 0.813 | 0.827 | 7.569 | 6.680 | 7.630 | 7.293 |
| Ma et al. | 0.743 | 0.872 | 0.773 | 0.796 | 4.690 | 6.120 | 10.400 | 7.070 |
| Chen et al. | 0.733 | 0.888 | 0.808 | 0.810 | 4.643 | 5.505 | 8.140 | 6.096 |
| Zhang et al. | 0.791 | 0.903 | 0.836 | 0.843 | 3.992 | 4.998 | 6.369 | 5.120 |
| Ours | 0.801 | 0.909 | 0.854 | 0.855 | 3.879 | 4.571 | 6.411 | 4.954 |
Higher Dice scores and lower Hausdorff95 distances indicate better results.
Figure 5. Examples of segmentation results produced by the proposed brain tumor segmentation framework on the BraTS 2018 dataset.