Afifa Khaled, Jian-Jun Han, Taher A. Ghaleb
Abstract
MRI brain images typically have low contrast, which makes it difficult to determine to which region the information at the boundary of a brain image belongs. This makes feature extraction at the boundary more challenging, since such features can be misleading: they may mix properties of different brain regions. Boundary detection therefore plays a vital role in medical image segmentation, and in brain segmentation in particular, as unclear boundaries worsen segmentation results. Yet, given the low quality of brain images, boundary detection in the context of brain image segmentation remains challenging. Despite the research invested in improving boundary detection and brain segmentation, the two problems have been addressed independently, i.e., little attention has been paid to applying boundary detection to brain segmentation tasks. Therefore, in this paper, we propose a boundary detection-based model for brain image segmentation. To this end, we first design a boundary segmentation network for detecting and segmenting brain tissues in images. Then, we design a boundary information module (BIM) to distinguish the boundaries of the three different brain tissues. After that, we add a boundary attention gate (BAG) to the encoder output layers of our transformer to capture more informative local details. We evaluate our proposed model on two datasets of brain tissue images, covering infant and adult brains. Extensive evaluation experiments show that our model performs better (a Dice Coefficient (DC) accuracy of up to [Formula: see text] compared to the state-of-the-art models) in detecting and segmenting brain tissue images.
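The boundary-detection idea the abstract describes can be illustrated with a minimal sketch (our own illustration, not the paper's learned network): given a labeled tissue map, a pixel lies on a boundary whenever an adjacent pixel carries a different tissue label.

```python
import numpy as np

def tissue_boundaries(labels: np.ndarray) -> np.ndarray:
    """Mark pixels whose horizontal/vertical neighbor has a different tissue label.

    `labels` is a 2-D integer map (e.g. 0 = background, 1 = CSF, 2 = GM, 3 = WM).
    Returns a boolean mask of boundary pixels. Illustrative only: the paper's
    model learns boundaries with a dedicated network, not this fixed rule.
    """
    boundary = np.zeros(labels.shape, dtype=bool)
    # Compare each pixel with its right neighbor; a mismatch marks
    # both sides of the shared vertical edge as boundary pixels.
    diff_x = labels[:, 1:] != labels[:, :-1]
    boundary[:, 1:] |= diff_x
    boundary[:, :-1] |= diff_x
    # Same comparison with the bottom neighbor (horizontal edges).
    diff_y = labels[1:, :] != labels[:-1, :]
    boundary[1:, :] |= diff_y
    boundary[:-1, :] |= diff_y
    return boundary

# Tiny example: a 4x4 map with a GM (2) / WM (3) interface down the middle.
demo = np.array([[2, 2, 3, 3],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3]])
mask = tissue_boundaries(demo)  # columns 1 and 2 are boundary pixels
```

The same neighbor-comparison rule extends to 3-D volumes by adding a third axis; ambiguous WM/GM interfaces like those in Fig. 1 are exactly where such a hard rule fails and a learned boundary module helps.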
Keywords: Boundary detection; Brain segmentation; MRI; Medical imaging
Year: 2022 PMID: 35953776 PMCID: PMC9367147 DOI: 10.1186/s12859-022-04882-w
Source DB: PubMed Journal: BMC Bioinformatics ISSN: 1471-2105 Impact factor: 3.307
Fig. 1 Examples showing the ambiguous boundaries between WM and GM
Summary of the state-of-the-art techniques in medical image segmentation
| Publication | Method | Purpose |
|---|---|---|
| Guoqiang et al. [ | GVF snake model | Segmentation of brain MRI images |
| Lei et al. [ | Clustering method | MR brain image segmentation |
| Somasundaram et al. [ | Intensity thresholding | Brain portion segmentation from MRI |
| Jiao et al. [ | Bilateral symmetry information | Brain image segmentation |
| Jimenez et al. [ | | Data-driven brain MRI segmentation supported on edge confidence and a priori tissue information |
| Tan Ou et al. [ | Atlas | Automatic segmentation of human brain images |
| Snell et al. [ | Active surfaces | Model-based segmentation of the brain from 3-D MR |
| Lei et al. [ | Clustering method | MR brain image segmentation |
| Yao et al. [ | Adjustable method | High effective medical image segmentation |
| Zhang et al. [ | Active volume model with shape priors | 3D segmentation of rodent brain structures |
| Liya et al. [ | Object detection | Feature extraction and morphological operations |
| Mallick et al. [ | Intelligent technique | |
| Zhou et al. [ | Encoder–decoder networks | Low-contrast medical image segmentation |
| Qu et al. [ | FCD detection | Estimating blur at the brain gray-white matter boundary |
| Shen et al. [ | Fully convolutional networks | Neuronal boundary detection |
| Chakraborty et al. [ | An integrated approach | Boundary finding in medical images |
| Khaled et al. [ | 3D, FCN + MIL + G + K | Brain tissues segmentation |
| Khaled et al. [ | Multi-stage GAN | Brain tissues segmentation |
Fig. 2 An overview of the proposed model
Fig. 3 The architecture of our model’s transformer
List of symbols referred to in this paper
| Symbol | Definition |
|---|---|
| WM | White matter |
| GM | Gray matter |
| CSF | Cerebrospinal fluid |
| | Convolutional |
| | Activation function |
| | Expected value |
| DC | Dice Coefficient |
| MRI | Magnetic resonance imaging |
| | Subject-1 to subject-10 |
| | Subject-11 to subject-23 |
| | Automated segmentation |
| | Reference segmentation |
| BIM | Boundary information module |
| | Dice loss function |
| | Cross-entropy loss function |
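The Dice Coefficient, Dice loss, and cross-entropy loss named above are standard quantities; a minimal numpy sketch of them (our own illustration, not the authors' code), where `a` is the automated and `r` the reference segmentation:

```python
import numpy as np

def dice_coefficient(a: np.ndarray, r: np.ndarray) -> float:
    """DC = 2|A ∩ R| / (|A| + |R|) for binary masks A (automated), R (reference)."""
    a = a.astype(bool)
    r = r.astype(bool)
    denom = a.sum() + r.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, r).sum() / denom

def dice_loss(prob: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Soft Dice loss on predicted probabilities: 1 - 2*sum(p*r)/(sum(p)+sum(r))."""
    inter = (prob * ref).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + ref.sum() + eps)

def cross_entropy_loss(prob: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Pixel-wise binary cross-entropy, averaged over the image."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return float(-(ref * np.log(prob) + (1 - ref) * np.log(1 - prob)).mean())

# Perfect overlap gives DC = 1 and near-zero losses.
ref_mask = np.array([[0.0, 1.0], [1.0, 1.0]])
dc = dice_coefficient(ref_mask, ref_mask)
```

In multi-class brain segmentation, DC is reported per tissue class (CSF, GM, WM, as in the tables below) by computing it on each class's binary mask separately.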
Fig. 4 An example of the dataset (T1, T2, manual reference contour)
Parameters used to generate T1 and T2
| Parameter | TR/TE | Flip angle | Resolution |
|---|---|---|---|
| T1 | 1900/4.38 ms | 7 | 1 |
| T2 | 7380/119 ms | 150 | 1.25 |
Segmentation performance in Dice Coefficient (DC) obtained on the dataset achieved by our model (with and without BIM), compared to the state-of-the-art models
| Model | CSF DC (%) | GM DC (%) | WM DC (%) |
|---|---|---|---|
| Özgün et al. [ | 91.2 | 86.1 | 84.1 |
| Dong et al. [ | 83.5 | 85.2 | 86.4 |
| Konstantinos et al. [ | 90.3 | 86.8 | 84.3 |
| Mahbod et al. [ | 85.5 | 87.3 | 88.7 |
| 3D, FCN + MIL + G + K [ | 94.1 | 90.2 | 89.7 |
| Multi-stage [ | 94.0 | ||
| Ours (with BIM) | 94.0 | 91.0 | |
| Ours (without BIM) | 90.0 | 89.0 | 86.0 |
The best performance for each tissue class is highlighted in bold
Segmentation performance in Dice Coefficient (DC) obtained on the MRBrainS dataset achieved by our model (with and without BIM), compared to the state-of-the-art models
| Model | CSF DC (%) | GM DC (%) | WM DC (%) |
|---|---|---|---|
| Özgün et al. [ | 83.9 | 88.9 | 89.4 |
| Dong et al. [ | 83.5 | 85.4 | 88.9 |
| Mahbod et al. [ | 85.5 | 87.3 | 88.7 |
| Marijn et al. [ | 85.5 | 87.3 | 88.7 |
| 3D, FCN + MIL + G + K [ | 90.2 | 89.7 | |
| Multi-stage [ | 93.0 | 93.0 | 88.0 |
| Our model (with BIM) | 92.0 | | |
| Our model (without BIM) | 89.0 | 90.0 | 90.0 |
The best performance for each tissue class is highlighted in bold
Fig. 5 Visualization results on the MRBrainS dataset
Average execution time (in minutes) and standard deviation (SD) on the MRBrainS dataset
| Model | Time in minutes (SD) |
|---|---|
| Özgün et al. [ | 15.40 (0.16) |
| Dong et al. [ | 19.23 (0.20) |
| Mahbod et al. [ | 17.6 (0.18) |
| Marijn et al. [ | 18.4 (0.15) |
| 3D, FCN + MIL + G + K [ | |
| Multi-stage [ | 22.61 (0.21) |
| Our model (with BIM) | 10 (0.3) |
| Our model (without BIM) | 9 (0.14) |
The fastest model is highlighted in bold