Jia Ying, Renee Cattell, Tianyun Zhao, Lan Lei, Zhao Jiang, Shahid M Hussain, Yi Gao, H-H Sherry Chow, Alison T Stopeck, Patricia A Thompson, Chuan Huang.
Abstract
The presence of higher breast density (BD) and its persistence over time are risk factors for breast cancer. A quantitatively accurate and highly reproducible BD measure that relies on precise and reproducible whole-breast segmentation is desirable. In this study, we aimed to develop a highly reproducible and accurate whole-breast segmentation algorithm for the generation of reproducible BD measures. Three datasets of volunteers from two clinical trials were included. Breast MR images were acquired on 3 T Siemens Biograph mMR, Prisma, and Skyra scanners using 3D Cartesian six-echo GRE sequences with a fat-water separation technique. Two whole-breast segmentation strategies, based on image registration and a 3D U-Net, were developed; manual segmentation was also performed. A task-based analysis was conducted: a previously developed MR-based BD measure, MagDensity, was calculated using both the automated and manual segmentations. The mean squared error (MSE) and intraclass correlation coefficient (ICC) between MagDensity values were evaluated using manual segmentation as the reference. The test-retest reproducibility of MagDensity derived from the different breast segmentation methods was assessed using the difference between the test and retest measures (Δ2-1), MSE, and ICC. MagDensity derived by the registration and deep learning segmentation methods exhibited high concordance with manual segmentation, with ICCs of 0.986 (95%CI: 0.974-0.993) and 0.983 (95%CI: 0.961-0.992), respectively. In the test-retest analysis, MagDensity derived using the registration algorithm achieved the smallest MSE of 0.370 and the highest ICC of 0.993 (95%CI: 0.982-0.997) among the compared segmentation methods. In conclusion, the proposed registration and deep learning whole-breast segmentation methods are accurate and reliable for estimating BD.
Both methods outperformed a previously developed algorithm and manual segmentation in the test-retest assessment, with the registration method exhibiting superior performance for highly reproducible BD measurements.
Keywords: Breast cancer; Breast density; Breast segmentation; Deep learning; Image registration
Year: 2022 PMID: 36219359 PMCID: PMC9554077 DOI: 10.1186/s42492-022-00121-4
Source DB: PubMed Journal: Vis Comput Ind Biomed Art ISSN: 2524-4442
Summary characteristics of the data used in different subsets
| | Dictionary set | Deep learning set | Test-retest set |
|---|---|---|---|
| Purpose | To develop the template dictionary for the registration breast segmentation | To develop the deep learning network for the breast segmentation | To evaluate the test-retest reproducibility of MagDensity derived using available segmentation methods |
| Scanner | B, P | B, P, S | B, P |
| Data set | Sulindac | Sulindac and metformin | Sulindac |
B Biograph mMR, P Prisma, S Skyra
Fig. 1 A: Workflow of the image processing steps for the registration breast segmentation method; B: step-by-step results of the image processing. (a) Edge-enhanced fat-water-sum image after applying the Canny edge detection method, (b) outcome of Otsu thresholding, (c) mask after morphological operations, (d) body region extracted by applying the mask, and (e1, e2) single-sided breasts generated from (d) by cutting the breasts in the middle and flipping the left side to the right
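The Otsu thresholding step in (b) selects the intensity cutoff that best separates body from background by maximizing the between-class variance of the image histogram. Below is a minimal numpy sketch of the standard algorithm, not the authors' exact implementation; the histogram bin count is an assumption:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the intensity threshold that maximizes between-class variance."""
    counts, bin_edges = np.histogram(np.asarray(image).ravel(), bins=nbins)
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    weight1 = np.cumsum(counts)                # voxels below each candidate cut
    weight2 = np.cumsum(counts[::-1])[::-1]    # voxels above each candidate cut
    # class means; np.maximum guards against empty classes at the histogram ends
    mean1 = np.cumsum(counts * bin_centers) / np.maximum(weight1, 1)
    mean2 = (np.cumsum((counts * bin_centers)[::-1])
             / np.maximum(np.cumsum(counts[::-1]), 1))[::-1]
    # between-class variance for every possible split point
    variance12 = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    return bin_centers[:-1][np.argmax(variance12)]
```

On a bimodal intensity distribution (e.g., background near one value, tissue near another) the returned threshold falls in the valley between the two modes.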
Fig. 2 Pipeline of the registration breast segmentation algorithm. (1) Similarity measurement to choose the five most similar templates; (2) registration of the chosen template images to the target image and generation of deformation maps; (3) application of the deformation maps to the corresponding template masks; (4) summation of all registered masks and voting to output the final breast segmentation (voxels contained in at least four registered template masks were included)
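The label-fusion step (4), keeping voxels that appear in at least four of the five registered template masks, can be sketched in a few lines of numpy; the mask shapes and dtypes here are illustrative assumptions:

```python
import numpy as np

def fuse_masks(registered_masks, min_votes=4):
    """Majority-vote fusion: keep voxels present in at least `min_votes` masks."""
    votes = np.sum(np.stack(registered_masks, axis=0), axis=0)  # per-voxel vote count
    return (votes >= min_votes).astype(np.uint8)
```

With five template masks and `min_votes=4`, a voxel covered by only three warped templates is excluded from the final segmentation.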
Fig. 3 Architecture of the 3D U-Net for deep learning breast segmentation. The input contains two channels, one for the water-only image and the other for the fat-only image. The output is the segmentation map. Blue boxes represent feature maps; white boxes are copied feature maps. The number of channels is labeled on top of each box. Different arrows denote different operations, as indicated in the legend
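As a rough guide to how feature maps evolve through such an encoder, the sketch below traces (channels, depth, height, width) under common 3D U-Net conventions (2×2×2 pooling, channel doubling per level); the base channel count, input size, and number of levels are assumptions for illustration, not values taken from the paper:

```python
def unet3d_shape_trace(in_shape=(2, 64, 128, 128), levels=4, base_channels=32):
    """Trace (channels, depth, height, width) through a generic 3D U-Net encoder.

    in_shape is (input channels, D, H, W); here two channels correspond to the
    water-only and fat-only images. Each level doubles the channel count and
    halves every spatial dimension via 2x2x2 max pooling.
    """
    _, d, h, w = in_shape
    trace = []
    ch = base_channels
    for _level in range(levels):
        trace.append((ch, d, h, w))          # feature map after this level's convs
        d, h, w = d // 2, h // 2, w // 2     # 2x2x2 max pooling
        ch *= 2
    trace.append((ch, d, h, w))              # bottleneck
    return trace
```

The decoder mirrors this trace in reverse, concatenating the copied feature maps (white boxes) at each resolution before upsampling back to the input grid.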
Fig. 4 Representative examples of whole-breast segmentation results using A registration, B deep learning, C dynamic programming, and D manual methods
Concordance of MagDensity derived using the automated algorithms and manual segmentation (reference standard)
| | Registration | Deep learning | Dynamic programming |
|---|---|---|---|
| MSE | 0.693 | 0.781 | 1.124 |
| ICC (95%CI) | 0.986 (0.974, 0.993) | 0.983 (0.961, 0.992) | 0.975 (0.869, 0.991) |
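Agreement statistics like the ICCs above are commonly computed with a two-way model. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement) in plain numpy; whether this exact ICC variant matches the paper's choice is an assumption:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    Y is an (n subjects x k raters/methods) matrix of measurements.
    """
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)   # per-subject means
    col_means = Y.mean(axis=0)   # per-method means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-methods MS
    resid = Y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual error MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because ICC(2,1) measures absolute agreement, a constant offset between two methods lowers the coefficient even when their rankings agree perfectly.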
Comparison of test-retest reproducibility of MagDensity derived using different segmentation methods
| | Registration | Deep learning | Dynamic programming | Manual intra-rater (rater 1) | Manual intra-rater (rater 2) | Manual inter-rater (rater 1 vs rater 2) |
|---|---|---|---|---|---|---|
| Mean Δ2-1 | – | 0.292 | 0.390 | 0.282 | 0.369 | 1.116 |
| Mean \|Δ2-1\| | – | 0.683 | 0.695 | 0.540 | 0.702 | 1.140 |
| Max \|Δ2-1\| | – | 1.748 | 1.938 | 1.615 | 2.014 | 3.763 |
| MSE | 0.370 | 0.741 | 0.763 | 0.479 | 0.855 | 1.967 |
| ICC (95%CI) | 0.993 (0.982, 0.997) | 0.983 (0.956, 0.993) | 0.982 (0.949, 0.993) | 0.988 (0.966, 0.995) | 0.982 (0.952, 0.993) | 0.955 (0.444, 0.989) |

Δ2-1: difference between the test and retest measures
Fig. 5 Plots of test-retest results of MagDensity derived using different segmentation methods. The red dashed line indicates the line of agreement. The quantitative assessment is shown in Table 3
Fig. 6 Violin plots of test-retest measures of MagDensity derived using different segmentation methods. The quantitative assessment is shown in Table 3
Fig. 7 A simulation illustrating the statistical significance of a small improvement in BD measurement reliability. When the test-retest standard deviation is reduced from 1.42% to 1.11%, the p value decreases from 0.13 to < 0.05 (assuming N = 10 and a true change of 1%)
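A simulation in the spirit of Fig. 7 can be run with numpy alone by counting how often a paired t statistic exceeds the two-sided 5% critical value for df = 9 (≈ 2.262). The simulation count and seed below are arbitrary choices, and this is an illustration of the idea rather than the authors' exact procedure:

```python
import numpy as np

T_CRIT_DF9 = 2.262  # two-sided 5% critical value of Student's t, df = 9

def detection_rate(true_change=1.0, sd=1.42, n=10, n_sim=20000, seed=0):
    """Fraction of simulated N-subject studies in which a paired t-test detects
    the true change (|t| exceeds the 5% critical value); a proxy for power."""
    rng = np.random.default_rng(seed)
    # each simulated study draws n paired differences: true change + noise
    diffs = rng.normal(true_change, sd, size=(n_sim, n))
    t = diffs.mean(axis=1) / (diffs.std(axis=1, ddof=1) / np.sqrt(n))
    return np.mean(np.abs(t) > T_CRIT_DF9)
```

Comparing `detection_rate(sd=1.42)` with `detection_rate(sd=1.11)` shows how even a modest reduction in test-retest variability raises the chance of detecting a 1% true change with only 10 subjects.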