Jinwoo Hong, Hyuk Jin Yun, Gilsoon Park, Seonggyu Kim, Cynthia T Laurentys, Leticia C Siqueira, Tomo Tarui, Caitlin K Rollins, Cynthia M Ortinau, P Ellen Grant, Jong-Min Lee, Kiho Im.
Abstract
Fetal magnetic resonance imaging (MRI) has the potential to advance our understanding of human brain development by providing quantitative information on cortical plate (CP) development in vivo. However, reliable quantitative analysis of cortical volume and sulcal folding requires accurate and automated segmentation of the CP. In this study, we propose a fully convolutional neural network for automatic segmentation of the CP. We developed a novel hybrid loss function to improve segmentation accuracy and adopted multi-view (axial, coronal, and sagittal) aggregation with a test-time augmentation method to reduce errors by exploiting three-dimensional (3D) information and multiple predictions. We evaluated the proposed method using ten-fold cross-validation on 52 fetal brain MR images (22.9-31.4 weeks of gestation). The proposed method obtained Dice coefficients of 0.907 ± 0.027 and 0.906 ± 0.031 as well as mean surface distance errors of 0.182 ± 0.058 mm and 0.185 ± 0.069 mm for the left and right CP, respectively. In addition, the left and right CP volumes, surface areas, and global mean curvatures generated by automatic segmentation showed a high correlation with the values generated by manual segmentation (R² > 0.941). We also demonstrated that the proposed hybrid loss function and the combination of multi-view aggregation and test-time augmentation significantly improved CP segmentation accuracy. Our proposed segmentation method will be useful for the automatic and reliable quantification of cortical structure in the fetal brain.
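The abstract's hybrid loss combines an overlap term with a boundary term. The paper's exact formulation is not reproduced in this record, so the sketch below is only an illustration of the general idea: a soft Dice loss plus a signed-distance-weighted boundary penalty, with an assumed weight `alpha` and toy 1D masks in place of real MR slices.

```python
# Hedged sketch of a Dice + boundary hybrid loss; `alpha`, the boundary
# formulation, and the toy data are illustrative assumptions, not the
# paper's exact definition.

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice between a predicted probability map and a binary target."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)

def hybrid_loss(pred, target, signed_dist, alpha=0.5):
    """Dice loss plus a boundary term that penalizes probability mass far
    from the target boundary (signed_dist: precomputed signed distance
    map, negative inside the target)."""
    dice_loss = 1.0 - dice_coefficient(pred, target)
    boundary_loss = sum(p * d for p, d in zip(pred, signed_dist)) / len(pred)
    return dice_loss + alpha * boundary_loss

# Toy 1D "mask" of 8 voxels standing in for a segmentation slice.
target  = [0, 0, 1, 1, 1, 1, 0, 0]
perfect = [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
sdist   = [2, 1, -1, -2, -2, -1, 1, 2]  # signed distance to the boundary
loss_perfect = hybrid_loss(perfect, target, sdist)
loss_uniform = hybrid_loss([0.5] * 8, target, sdist)  # higher loss
```

As expected, the uniform (uninformative) prediction incurs a higher loss than the perfect one, because both the overlap and boundary terms penalize it.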
Keywords: MRI; cortical plate; deep learning; fetal brain; hybrid loss; segmentation
Year: 2020 PMID: 33343286 PMCID: PMC7738480 DOI: 10.3389/fnins.2020.591683
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1. Illustration of the proposed network based on U-Net. The network takes a 128 × 128 2D slice as input and predicts the probability of five labels (background, left and right CP, and left and right inner volume of the CP).
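As a minimal illustration of how the five-channel output in Figure 1 becomes a segmentation, the sketch below applies a per-pixel argmax to probability vectors. The label ordering follows the caption; the flattened-pixel representation is a simplification of this sketch (the real network operates on 128 × 128 slices).

```python
# Label order assumed from the Figure 1 caption.
LABELS = ("background", "CP_left", "CP_right", "inner_left", "inner_right")

def to_label_map(prob_vectors):
    """prob_vectors: one 5-element probability vector per pixel
    (flattened slice). Returns the most probable label index per pixel."""
    return [max(range(len(LABELS)), key=lambda k: p[k]) for p in prob_vectors]

pixels = [
    [0.70, 0.10, 0.10, 0.05, 0.05],  # most mass on background
    [0.10, 0.60, 0.10, 0.10, 0.10],  # most mass on left CP
]
labels = to_label_map(pixels)  # [0, 1] -> background, CP_left
```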
FIGURE 2. Schematic representation of the proposed segmentation procedure. (A) Multi-view aggregation combines segmentations from models trained along the three planes: coronal, sagittal, and axial. (B) Test-time augmentation (TTA) synthesizes multiple segmentations obtained by flip augmentation into a final segmentation map. (C) MVT aggregation combines multi-view aggregation and TTA to enhance prediction accuracy.
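The MVT scheme in panel (C) can be sketched as probability averaging: each per-plane model is evaluated with flip test-time augmentation, and the three resulting probability maps are averaged before the final labeling. The toy `predict` functions and 1D inputs below are stand-ins for the trained plane-specific networks and 2D slices, not the paper's implementation.

```python
def tta_predict(predict, x):
    """Flip TTA: predict on the input and on its mirror image, un-flip
    the second prediction, and average the two probability maps."""
    p = predict(x)
    p_flipped = predict(x[::-1])[::-1]  # flip input, flip prediction back
    return [(a + b) / 2.0 for a, b in zip(p, p_flipped)]

def mvt_aggregate(models, x):
    """Multi-view + TTA (MVT): average flip-TTA probability maps from
    the axial, coronal, and sagittal models."""
    preds = [tta_predict(m, x) for m in models]
    return [sum(vals) / len(models) for vals in zip(*preds)]

# Hypothetical stand-ins for the three trained per-plane models.
axial = coronal = sagittal = lambda x: [0.5 * v for v in x]
probs = mvt_aggregate([axial, coronal, sagittal], [0.2, 0.8])
```

Averaging softmax probabilities (rather than hard labels) lets confident views outvote uncertain ones, which is why MVT can remove errors that survive TTA or multi-view aggregation alone.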
TABLE 1. Statistical comparisons of segmentation performance obtained by different loss functions and aggregation methods. Dice, Dice coefficient; MSD, mean surface distance (mm); in, inner volume of the CP; L/R, left/right. Columns (1)-(8) correspond to the eight compared loss/aggregation configurations; superscript letters and asterisks denote post hoc comparison results.

| Metric | Region | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
|---|---|---|---|---|---|---|---|---|---|
| Dice | in_L | 0.978 ± 0.009 | 0.978 ± 0.009 | 0.980 ± 0.008 | 0.979 ± 0.008 a | 0.978 ± 0.009 a,b | 0.977 ± 0.009 a,b | 0.977 ± 0.009 a,b,c | 0.976 ± 0.009 a,b,c,d |
| | in_R | 0.977 ± 0.011 | 0.977 ± 0.011 | 0.979 ± 0.011 | 0.978 ± 0.011 a | 0.977 ± 0.012 a,b | 0.977 ± 0.011 a,b | 0.976 ± 0.011 a,b,c,d | 0.976 ± 0.011 a,b,c,d |
| | CP_L | 0.899 ± 0.027 | 0.885 ± 0.048 * | 0.907 ± 0.027 | 0.904 ± 0.027 a | 0.897 ± 0.027 a,b | 0.855 ± 0.126 a,b | 0.894 ± 0.026 a,b,c | 0.893 ± 0.029 a,b,c |
| | CP_R | 0.898 ± 0.031 | 0.884 ± 0.050 * | 0.906 ± 0.031 | 0.902 ± 0.030 a | 0.896 ± 0.032 a,b | 0.896 ± 0.033 a,b | 0.892 ± 0.031 a,b,c,d | 0.851 ± 0.126 a,b |
| MSD (mm) | in_L | 0.293 ± 0.092 | 0.293 ± 0.095 | 0.267 ± 0.092 | 0.277 ± 0.090 a | 0.294 ± 0.097 a,b | 0.299 ± 0.099 a,b | 0.308 ± 0.097 a,b,c | 0.312 ± 0.096 a,b,c,d |
| | in_R | 0.300 ± 0.112 | 0.297 ± 0.110 | 0.271 ± 0.110 | 0.282 ± 0.107 a | 0.299 ± 0.118 a,b | 0.303 ± 0.116 a,b | 0.318 ± 0.115 a,b,c,d | 0.321 ± 0.108 a,b,c,d |
| | CP_L | 0.199 ± 0.059 | 0.544 ± 1.064 * | 0.188 ± 0.060 | 0.190 ± 0.058 | 0.199 ± 0.060 a,b | 1.229 ± 3.178 | 0.209 ± 0.060 a,b,c | 0.213 ± 0.064 a,b,c |
| | CP_R | 0.202 ± 0.070 | 0.551 ± 1.078 * | 0.186 ± 0.069 | 0.204 ± 0.077 a | 0.203 ± 0.073 a | 0.205 ± 0.073 a | 0.215 ± 0.072 a,c,d | 1.247 ± 3.192 |
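The MSD values above (and in the abstract) are boundary-distance errors in millimetres. A symmetric mean surface distance between two boundary point sets can be sketched as below; the brute-force nearest-neighbour search is an illustrative simplification, since practical implementations typically use distance transforms on the segmentation masks.

```python
import math

def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean surface distance between two boundary point sets
    (lists of (x, y) tuples, e.g. in mm). Brute-force nearest-neighbour
    search; real pipelines use distance transforms instead."""
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (one_way(surf_a, surf_b) + one_way(surf_b, surf_a)) / 2.0

# Toy example: a unit square's corners versus the same corners shifted up by 1.
square  = [(0, 0), (1, 0), (0, 1), (1, 1)]
shifted = [(x, y + 1) for (x, y) in square]
msd = mean_surface_distance(square, shifted)  # 0.5 for these toy contours
```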
FIGURE 3. Example of segmentation results with different loss functions. The black arrows indicate segmentation errors when using the Dice loss. Because a boundary term is added to the loss, the proposed hybrid loss achieves more accurate segmentation results than the Dice loss alone.
FIGURE 4. Example of segmentation results with different aggregation methods. The black arrows indicate segmentation errors. The proposed MVT method effectively eliminated errors that remained even after TTA or multi-view aggregation alone.
FIGURE 5. Box plots of segmentation accuracy. The proposed method yields a significantly higher Dice coefficient and lower MSD than the other methods. Gray lines connect values from the same subject. Post hoc results are listed in Table 1 and Supplementary Tables 1-3.
TABLE 2. Statistical comparisons of segmentation performance between the 2D network with multi-view aggregation and 3D networks. Dice, Dice coefficient; MSD, mean surface distance (mm); in, inner volume of the CP; L/R, left/right.

| Metric | Region | 2D + multi-view | 3D | t | p |
|---|---|---|---|---|---|
| Dice | in_L | 0.979 ± 0.008 | 0.974 ± 0.010 | 8.352 | 0.0001 |
| | in_R | 0.978 ± 0.011 | 0.974 ± 0.011 | 8.563 | 0.0001 |
| | CP_L | 0.904 ± 0.028 | 0.819 ± 0.223 | 2.797 | 0.0073 |
| | CP_R | 0.901 ± 0.031 | 0.881 ± 0.033 | 12.822 | 0.0001 |
| MSD (mm) | in_L | 0.279 ± 0.092 | 0.369 ± 0.117 | −8.067 | 0.0001 |
| | in_R | 0.283 ± 0.108 | 0.371 ± 0.137 | −6.615 | 0.0001 |
| | CP_L | 0.190 ± 0.059 | 1.875 ± 5.565 | −2.134 | 0.0377 |
| | CP_R | 0.217 ± 0.101 | 0.255 ± 0.081 | −3.091 | 0.0032 |
FIGURE 6. Regression plots of volume, surface area, and surface global mean curvature (GMC) from the ground truth and our automatic segmentation. The fitted regression coefficient (β) was very close to unity for all indices in all regions.
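The agreement check in Figure 6 can be illustrated with a simple fit. The sketch below computes the slope β of a least-squares fit through the origin; the paper's exact regression model is not given in this record, and the volume values are hypothetical.

```python
def fit_slope(x, y):
    """Slope beta of y = beta * x by least squares through the origin."""
    return sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# Hypothetical manual vs. automatic CP volumes (cm^3) for illustration.
manual    = [100.0, 150.0, 200.0]
automatic = [101.0, 149.0, 202.0]
beta = fit_slope(manual, automatic)  # close to 1.0 when measures agree
```

A β near unity means the automatic measures track the manual ones without systematic over- or under-estimation, which is the property the figure demonstrates.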
FIGURE 7. Age-related trends of segmentation accuracy of the proposed method. (A) Dice coefficient. (B) MSD.
TABLE 3. Cortical plate (CP) segmentation performance of the proposed method and other methods, spanning deep learning (direct), expectation-maximization (EM, indirect), and atlas-based (indirect) approaches. Column headers give the number of subjects (GA range in weeks) for each evaluation; −, not reported.

| Metric | Region | Proposed, 52 (22.9–31.4) | 52 (22.9–31.4) | 4 (29–32) | 14 (20.6–22.9) | 16 (22.4–36.4) | 15 (21.7–38.7) |
|---|---|---|---|---|---|---|---|
| Dice | CP_L | 0.894 ± 0.030 | − | − | − | − | |
| | CP_R | 0.811 ± 0.251 | − | − | − | − | |
| | CP | 0.852 ± 0.141 | 0.625 ± 0.038 | 0.82 ± 0.02 | − | 0.84 ± 0.06 | |
| MSD (mm) | CP_L | 0.212 ± 0.064 | − | − | − | − | |
| | CP_R | 2.277 ± 6.388 | − | − | − | − | − |
| | CP | 1.245 ± 3.226 | 0.697 ± 0.079 | − | 0.864 ± 0.141 | − | |