Jinjin Hai, Kai Qiao, Jian Chen, Hongna Tan, Jingbo Xu, Lei Zeng, Dapeng Shi, Bin Yan.
Abstract
Breast tumor segmentation plays a crucial role in subsequent disease diagnosis, and most existing algorithms require interactive priors to first locate tumors and then perform segmentation on tumor-centric candidates. In this paper, we propose a fully convolutional network that segments breast tumors automatically in an end-to-end manner. Considering the diversity of shape and size of malignant tumors in digital mammograms, we introduce multiscale image information into a fully convolutional dense network architecture to improve segmentation precision. Atrous convolutions with multiple sampling rates are concatenated to capture different fields of view of image features without adding parameters, which helps avoid overfitting. A weighted loss function is also employed during training, with weights set according to the proportion of tumor pixels in the entire image, to mitigate the class-imbalance problem. Qualitative and quantitative comparisons demonstrate that the proposed algorithm achieves automatic tumor segmentation with high precision across tumors of various sizes and shapes, without preprocessing or postprocessing.
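The abstract does not spell out the weighted loss; a minimal sketch of one common formulation consistent with the description, class-weighted binary cross-entropy with weights derived from the tumor-pixel proportion (the function name and the exact weighting scheme are assumptions, not the paper's code):

```python
import numpy as np

def weighted_bce(pred, target, eps=1e-7):
    """Class-weighted binary cross-entropy for segmentation.

    The tumor (foreground) class is upweighted by one minus its pixel
    proportion, so the sparse tumor pixels are not overwhelmed by the
    abundant background class.
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    p_fg = target.mean()            # proportion of tumor pixels in the image
    w_fg = 1.0 - p_fg               # weight for tumor pixels
    w_bg = p_fg                     # weight for background pixels
    loss = -(w_fg * target * np.log(pred)
             + w_bg * (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()
```

With this weighting, a prediction that misses a small tumor is penalized far more heavily than one that mislabels a comparable number of background pixels.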
Year: 2019 PMID: 30774849 PMCID: PMC6350548 DOI: 10.1155/2019/8415485
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1. The distribution of breast tumor size.
Figure 2. Illustration of atrous convolution with different atrous sampling rates in 1-D. (a) Rate = 1. (b) Rate = 2. (c) Rate = 3.
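The 1-D atrous convolution of Figure 2 can be sketched as follows: with sampling rate r, kernel tap j reads input position i + j*r, so larger rates widen the field of view from len(kernel) to (len(kernel) - 1)*r + 1 samples without adding parameters (the helper below is an illustrative assumption, not the paper's implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution with 'valid' padding.

    rate=1 is ordinary convolution; rate=r skips r-1 samples between
    kernel taps, enlarging the effective receptive field.
    """
    k = len(kernel)
    span = (k - 1) * rate + 1           # effective receptive field
    out_len = len(x) - span + 1
    out = np.empty(out_len)
    for i in range(out_len):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out
```

For a 3-tap kernel, rate 1 covers 3 input samples, rate 2 covers 5, and rate 3 covers 7, matching panels (a)-(c) of the figure.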
Figure 3. The proposed ASPP-FC-DenseNet.
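The ASPP (atrous spatial pyramid pooling) idea behind Figure 3 is to run the same kernel at several atrous rates and concatenate the resulting feature maps, so multiple fields of view are captured with no extra parameters. A minimal 1-D sketch under that assumption (helper names are hypothetical):

```python
import numpy as np

def dilated_conv1d_same(x, kernel, rate):
    """1-D dilated convolution with zero padding ('same' output length)."""
    k = len(kernel)
    pad = (k - 1) * rate // 2
    xp = np.pad(x, pad)
    return np.array([
        sum(kernel[j] * xp[i + j * rate] for j in range(k))
        for i in range(len(x))
    ])

def aspp_1d(x, kernel, rates=(1, 2, 3)):
    """Apply one shared kernel at several atrous rates and stack the
    outputs channel-wise; extra rates add no parameters because the
    kernel weights are reused."""
    return np.stack([dilated_conv1d_same(x, kernel, r) for r in rates])
```

In the full network the concatenated multi-rate features feed the subsequent dense blocks, giving each pixel both fine and coarse context.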
Figure 4. The mammogram data with different views and corresponding annotations. (a) CC view. (b) CC annotation. (c) MLO view. (d) MLO annotation.
Figure 5. Segmentation results for different sizes of breast tumor. (a) Image. (b) FC-DenseNet. (c) ASPP-FC-DenseNet. (d) Ground truth.
The quantitative comparisons of the proposed and original FC-DenseNet algorithms.
| Methods | DI | IOU | PA |
|---|---|---|---|
| FC-DenseNet | 0.7355 | 0.5948 | 0.7968 |
| ASPP-FC-DenseNet | 0.7697 | 0.6041 | 0.7983 |
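The tables report DI (Dice index), IOU (intersection over union), and PA (pixel accuracy). A sketch of the standard definitions for binary masks, which are assumed to match the paper's usage:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice index, IoU, and pixel accuracy for binary segmentation masks.

    Assumes at least one foreground pixel in pred or gt, otherwise the
    Dice and IoU ratios are undefined.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    di = 2.0 * inter / (pred.sum() + gt.sum())   # Dice index
    iou = inter / union                          # intersection over union
    pa = (pred == gt).mean()                     # pixel accuracy
    return di, iou, pa
```

Dice weights the overlap against the average mask size, IoU against the union, and PA counts all correctly labeled pixels, including background.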
Figure 6. Tumor segmentation results of the ASPP-FC-DenseNet model with different loss functions. (a) Image. (b) Dice loss. (c) No weighted loss. (d) Ours. (e) Ground truth.
The quantitative comparisons of ASPP-FC-DenseNet model with different loss functions.
| Methods | DI | IOU | PA |
|---|---|---|---|
| No weighted loss | 0.7151 | 0.5974 | — |
| Dice loss | 0.7108 | 0.5920 | 0.7988 |
| Ours | 0.7697 | 0.6041 | 0.7983 |
Figure 7. Tumor segmentation results of different CNNs. (a) Image. (b) U-Net. (c) PSPNet. (d) Deeplab v3+. (e) Ours. (f) Ground truth.
The quantitative comparisons of different CNNs.
| Models | DI | IOU | PA |
|---|---|---|---|
| U-Net | 0.6763 | 0.5608 | 0.7562 |
| PSPNet | 0.6785 | 0.5427 | 0.7202 |
| Deeplab v3+ | 0.6827 | 0.5641 | 0.7072 |
| ASPP-FC-DenseNet | 0.7697 | 0.6041 | 0.7983 |
Figure 8. The original MLO mammogram and the mammogram with the pectoral muscle removed.
Figure 9. Tumor segmentation results compared with other segmentation algorithms. (a) Image. (b) Level set. (c) Grab cut. (d) Double threshold. (e) Ours. (f) Ground truth.
The quantitative comparisons of the proposed model and other algorithms.
| Methods | DI | IOU | PA |
|---|---|---|---|
| Level set | 0.5989 | 0.4893 | 0.6813 |
| Grab cut | 0.4663 | 0.3491 | 0.6220 |
| Double threshold | 0.5464 | 0.4322 | 0.6440 |
| ASPP-FC-DenseNet | 0.7697 | 0.6041 | 0.7983 |