Manjin Sheng1, Wenjie Xu1, Jane Yang2, Zhongjie Chen3.
Abstract
Stroke is an acute cerebrovascular disease with high incidence, mortality, and disability rates. Determining the location and volume of lesions in MR images supports accurate stroke diagnosis and surgical planning, so the automatic recognition and segmentation of stroke lesions has important clinical significance for large-scale stroke imaging analysis. Segmenting stroke lesions poses several challenges, including foreground-background imbalance, positional uncertainty, and unclear boundaries. To meet these challenges, this paper proposes a cross-attention and deep supervision UNet (CADS-UNet) to segment chronic stroke lesions from T1-weighted MR images. Specifically, we propose a cross-spatial attention module which, unlike the usual self-attention module, uses location information to interactively select encoder and decoder features, recovering lost spatial focus. At the same time, a channel attention mechanism is used to screen channel features. Finally, deep supervision combined with a mixed loss supervises the model more accurately. We compared and verified the model on the authoritative open dataset "Anatomical Tracings of Lesions After Stroke" (ATLAS), which fully demonstrates the effectiveness of our model.
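The abstract mentions "deep supervision and mixed loss" without giving the formula. A common choice for lesion segmentation is soft Dice plus binary cross-entropy, summed over the deeply supervised side outputs. The sketch below (NumPy; the `deep_supervised_loss` helper and equal per-stage weights are assumptions, not the paper's exact recipe) illustrates one plausible form:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary probability map."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over voxels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def deep_supervised_loss(preds, target, weights=None):
    """Mixed Dice+BCE loss summed over side outputs (deep supervision).

    `preds` is a list of probability maps, one per supervised decoder
    stage, each already upsampled to the target's resolution.
    """
    if weights is None:
        weights = [1.0] * len(preds)
    return sum(w * (dice_loss(p, target) + bce_loss(p, target))
               for w, p in zip(weights, preds))
```

The deep-supervision sum forces every decoder stage, not only the final one, to produce a lesion-shaped output, which helps with the small, hard-to-localize lesions the abstract describes.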
Keywords: ATLAS; MRI; chronic stroke; deep learning; lesion segmentation
Year: 2022 PMID: 35392415 PMCID: PMC8980944 DOI: 10.3389/fnins.2022.836412
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1: Overview of our proposed cross-attention and deep supervision UNet (CADS-UNet).
FIGURE 2: Cross-spatial attention module (CSAM).
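The abstract describes the CSAM only as letting location information "interactively select" encoder and decoder features, unlike plain self-attention. A generic cross-attention between a decoder map (queries) and the matching encoder skip features (keys/values) captures that idea; the sketch below is illustrative only, and the projections and normalisation in the actual CSAM may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_spatial_attention(dec, enc):
    """Generic cross-attention between decoder and encoder feature maps.

    dec, enc: (H*W, C) flattened spatial features from the same stage.
    Decoder features act as queries; skip-connection encoder features
    act as keys/values, so each decoder location re-weights the encoder
    map before fusion.
    """
    scale = 1.0 / np.sqrt(dec.shape[-1])
    attn = softmax(dec @ enc.T * scale, axis=-1)  # (HW, HW) spatial affinity
    return attn @ enc                             # re-weighted encoder features
```

Using decoder features to query the encoder map is what distinguishes cross-attention from self-attention, where queries, keys, and values all come from the same tensor.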
FIGURE 3: Channel attention module (CAM).
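The channel attention mechanism used to "screen channel features" is not specified in the abstract; squeeze-and-excitation style gating is the standard pattern for this, sketched below (NumPy; the bottleneck weights `w1`, `w2` are assumed, not taken from the paper):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention.

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    the bottleneck MLP weights. Each channel is global-average-pooled,
    the descriptor passes through a ReLU bottleneck, and sigmoid gates
    rescale the channels.
    """
    squeeze = feat.mean(axis=(1, 2))               # (C,) channel descriptor
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]             # channel-wise rescaling
```

Because the gates lie strictly in (0, 1), uninformative channels are suppressed rather than zeroed out, which keeps gradients flowing during training.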
Comparison with state-of-the-art methods on the ATLAS dataset.
| Method | DSC | DSC (global) | Recall | Precision |
| --- | --- | --- | --- | --- |
| FCN-8s | 0.4274 | 0.6720 | 0.4531 | 0.4883 |
| U-Net | 0.4944 | 0.6933 | 0.5030 | 0.6419 |
| ResUNet | 0.5081 | 0.7373 | 0.5081 | 0.5921 |
| Attention-UNet | 0.5162 | 0.7383 | 0.5321 | – |
| CADS-UNet (ours) | – | – | – | 0.6368 |
Bold value shows the best performance.
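The DSC, recall, and precision columns in the tables above follow the standard overlap definitions for binary lesion masks. A minimal sketch of how they are computed from true/false positives and negatives (the `segmentation_metrics` helper name is an assumption):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """DSC, recall, and precision for binary lesion masks (0/1 arrays)."""
    tp = np.sum((pred == 1) & (target == 1))  # lesion voxels found
    fp = np.sum((pred == 1) & (target == 0))  # false alarms
    fn = np.sum((pred == 0) & (target == 1))  # missed lesion voxels
    dsc = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dsc, recall, precision
```

DSC is the harmonic mean of recall and precision, which is why a model can trade one against the other (as U-Net's high precision but low recall in the table illustrates) without improving DSC.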
FIGURE 4: Comparisons of our method, baseline, FCN-8s, U-Net, ResUNet, and attention-UNet.
Comparison with other state-of-the-art methods and ablation studies on the ATLAS dataset.
| Method | DSC | Recall | Precision | Train/Test |
| --- | --- | --- | --- | --- |
| X-Net | 0.4867 | 0.4752 | 0.6000 | All/Fivefold cross-validation |
| 2D MI-UNet | 0.4945 | 0.5237 | 0.5669 | All/Fivefold cross-validation |
| 3D UNet | 0.5296 | 0.5497 | 0.6090 | All/Fivefold cross-validation |
| D-UNet | 0.5349 | 0.5243 | 0.6331 | 183/46 |
| 2.5D CNN | 0.54 | – | – | 99/Fivefold cross-validation |
| CADS-UNet (ours) | – | – | 0.6368 | 137/56 |
| BaseLine (BL) | 0.5124 | 0.5291 | 0.6111 | 137/56 |
| BL + CSAM | 0.5361 | 0.5455 | – | 137/56 |
| BL + CSAM + CAM | 0.5407 | 0.5654 | 0.6218 | 137/56 |
Bold value shows the best performance.