| Literature DB >> 35757468 |
R Sreelakshmy1, Anita Titus2, N Sasirekha3, E Logashanmugam4, R Benazir Begam5, G Ramkumar6, Raja Raju7.
Abstract
Cerebellar measurements taken from routinely acquired ultrasound (US) images are frequently used to determine gestational age and to identify anatomical abnormalities of the developing central nervous system. Standardized cerebellar assessments from large-scale clinical datasets are required to investigate correlations between the developing cerebellum and postnatal neurodevelopmental outcomes. Such studies could uncover structural abnormalities that serve as indicators to forecast neurodevelopmental and growth outcomes. To achieve this, higher-throughput, precise, and unbiased measurements must replace the existing manual and semiautomatic approaches, which are time-consuming and prone to error. In this article, we present an innovative deep learning (DL) technique for automatic fetal cerebellum segmentation from 2-dimensional (2D) US brain images. We present ReU-Net, a semantic segmentation network tailored to the anatomy of the fetal cerebellum. We use U-Net as the foundation model, incorporating residual blocks and a Wiener filter over the last 2 layers to segregate the cerebellum from the noisy US data. 590 images were used for training and 150 for testing, with 5-fold cross-validation. Our ReU-Net scored 91%, 92%, 25.42, 98%, 92%, and 94% for Dice Score Coefficient (DSC), F1-score, Hausdorff Distance (HD), accuracy, recall, and precision, respectively. The proposed method outperforms the other U-Net-based techniques by a statistically significant margin (p < 0.001). The presented approach can enable high-throughput imaging pipelines in medical fetal US studies as well as biometric evaluation on a broader scale in fetal US images.
Year: 2022 PMID: 35757468 PMCID: PMC9225853 DOI: 10.1155/2022/8342767
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.246
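The abstract reports Dice Score Coefficient (DSC) and Hausdorff Distance (HD) among its evaluation metrics. A minimal sketch of how these are commonly computed from binary segmentation masks (plain NumPy/SciPy; this is an illustrative implementation, not the authors' code):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    """Symmetric Hausdorff distance between mask pixel coordinates."""
    a = np.argwhere(pred.astype(bool))
    b = np.argwhere(truth.astype(bool))
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy 2D mask: comparing a mask with itself gives DSC = 1.0 and HD = 0.0.
m = np.zeros((8, 8), dtype=np.uint8)
m[2:6, 2:6] = 1
print(dice_score(m, m), hausdorff(m, m))  # 1.0 0.0
```

In practice HD is often reported in millimetres after scaling pixel coordinates by the image spacing; the sketch above works in pixel units.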
Overview of the reviewed literature.
| Authors | Reference | Year | Article title | Proposed approach | Type of images | Training images | Testing images | Limitation |
|---|---|---|---|---|---|---|---|---|
| Singh et al. | — | 2021 | Cerebellum semantic segmentation in 2D fetal US brain images | ResU-Net-c | US | 588 | 146 | The model does not segment the cerebellum automatically |
| Zhao et al. | — | 2022 | Automated segmentation of 3D fetal brain images by an optimized DL method | 3D U-Net | MR | 65 | 41 | Not all evaluation metrics are computed |
| Hesse et al. | — | 2022 | Subcortical segmentation of the fetal brain in 3D US images by DL methods | CNN | US | 215 | 20 | Segmentation robustness requires further improvement |
| Fidon et al. | — | 2021 | DRO-based segmentation of the abnormal fetal brain in 3D MR images | nnU-Net-DRO | MR | 116 | 26 | The segmentation process may take more time to complete |
| Kim et al. | — | 2019 | Automatic evaluation of fetal-head biometry by ML | DL-based method | US | 102 | 70 | The method is employed only on smaller datasets |
| Khalili et al. | — | 2019 | Automatic segmentation of fetal brain tissue by CNN | CNN | MR | 32 | 94 | The error rate of the developed method was not measured |
| Avisdris et al. | — | 2021 | Automated linear measurements of the fetal brain by DNN | DNN | MR | 121 | 33 | The linear measurements take more time |
| Dumast et al. | — | 2022 | Segmentation of fetal brain tissue from synthetic MR images | FaBiaN | MR | 17 | 11 | The developed model is not applicable to large datasets |
| Rackerseder et al. | — | 2019 | Fully automatic segmentation of 3D brain US | DeepVNet | US | N/A | N/A | The quantitative evaluation takes more time |
| Venturini et al. | — | 2019 | Structural semantic segmentation from 3D US images using a multitask CNN | Multitask CNN | US | 480 | 48 | The presented approach is challenging |
Figure 1. Basic diagram of a residual block.
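Figure 1 sketches a residual block, whose defining operation is y = activation(F(x) + x): the input bypasses the learned transform F via an identity shortcut. A toy NumPy illustration (the shapes and the linear stand-in for the convolutional layers are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """y = ReLU(F(x) + x); two linear maps stand in for the conv layers."""
    f = relu(x @ W1) @ W2   # the learned transform F(x)
    return relu(f + x)      # identity shortcut added before the activation

d = 4
x = rng.normal(size=(1, d))
# With zero weights F(x) = 0, so the block reduces to ReLU(x): the shortcut
# lets the input (and, during training, gradients) pass through untouched.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
print(np.allclose(y, relu(x)))  # True
```

This pass-through property is the usual motivation for residual blocks: deeper stacks can at worst learn the identity, which eases optimization.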
Figure 2. Workflow.
Figure 3. Network architecture of ReU-Net.
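The abstract states that a Wiener filter is applied over the last two layers to cope with noisy US data. A hedged sketch of Wiener denoising on a toy image using SciPy's built-in local-statistics filter (a stand-in for, not a reproduction of, the in-network variant the paper describes):

```python
import numpy as np
from scipy.signal import wiener

rng = np.random.default_rng(1)

# Toy "ultrasound" slice: a bright square corrupted with additive noise.
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

# scipy.signal.wiener adapts to local mean/variance, smoothing flat
# regions strongly while preserving high-variance (edge) regions.
denoised = wiener(noisy, mysize=5)

mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(denoised, clean) < mse(noisy, clean))  # filtering reduces MSE
```

The adaptive behaviour is what makes Wiener filtering attractive for speckle-like US noise: uniform tissue is smoothed while anatomical boundaries are comparatively preserved.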
Epochs and model parameters for each loss function.

| Model | Loss function | Epochs | Parameters |
|---|---|---|---|
| Attention U-Net | FTL | 200 | 35,974,075 |
| | DL | 250 | |
| | CL | 150 | |
| U-Net | FTL | 200 | 32,842,032 |
| | DL | 250 | |
| | CL | 150 | |
| U-Net++ | FTL | 300 | 11,284,042 |
| | DL | 150 | |
| | CL | 100 | |
| ReU-Net (proposed) | FTL | 120 | 18,742,832 |
| | DL | 127 | |
| | CL | 235 | |
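The abstract notes a 5-fold cross-validation over 590 training images. A minimal sketch of such a split in plain NumPy (the 590/5 figures come from the abstract; the shuffling seed is an arbitrary assumption):

```python
import numpy as np

def k_fold_indices(n: int, k: int, seed: int = 0):
    """Shuffle n sample indices and yield (train, val) index arrays per fold."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

# 590 training images, 5 folds -> each validation fold holds 118 images.
sizes = [len(val) for _, val in k_fold_indices(590, 5)]
print(sizes)  # [118, 118, 118, 118, 118]
```

Each image appears in exactly one validation fold, so per-fold metrics can be averaged into the mean ± SD figures reported below.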
Evaluation of fetal cerebellum segmentation.

| Methods | Loss function | Precision | Recall | DSC | F1-score | HD | Accuracy | | Processing time (s) |
|---|---|---|---|---|---|---|---|---|---|
| Attention U-Net | FTL | 0.84 | 0.79 | 0.84 | 0.85 | 40.05 | 0.92 | 4.20 | 0.50 |
| | CL | 0.89 | 0.82 | 0.87 | 0.84 | 35.87 | 0.94 | 6.12 | 0.45 |
| | DL | 0.86 | 0.85 | 0.82 | 0.88 | 34.25 | 0.91 | 5.24 | 0.42 |
| U-Net++ | FTL | 0.91 | 0.84 | 0.91 | 0.87 | 28.67 | 0.89 | 7.24 | 0.35 |
| | CL | 0.89 | 0.87 | 0.89 | 0.82 | 25.50 | 0.85 | 4.20 | 0.40 |
| | DL | 0.92 | 0.88 | 0.92 | 0.84 | 24.26 | 0.92 | 6.54 | 0.32 |
| U-Net | FTL | 0.94 | 0.90 | 0.87 | 0.90 | 22.56 | 0.94 | 5.23 | 0.30 |
| | CL | 0.91 | 0.92 | 0.88 | 0.91 | 30.02 | 0.92 | 3.20 | 0.32 |
| | DL | 0.89 | 0.89 | 0.84 | 0.89 | 20.26 | 0.93 | 2.50 | 0.38 |
| ResU-Net-c | FTL | 0.90 | 0.94 | 0.91 | — | 18.2 | — | 3.02 | 0.32 |
| | CL | 0.95 | 0.9 | 0.91 | — | 17.8 | — | 9.52 | 0.30 |
| | DL | 0.93 | 0.92 | 0.92 | — | 17.9 | — | — | 0.34 |
| ReU-Net (proposed) | FTL | 0.94 | 0.95 | 0.94 | 0.92 | 18.24 | 0.97 | 4.50 | 0.25 |
| | CL | 0.96 | 0.93 | 0.92 | 0.93 | 15.25 | 0.95 | 10.45 | 0.20 |
| | DL | 0.94 | 0.92 | 0.93 | 0.94 | 15.78 | 0.98 | — | 0.23 |
Fetal cerebellum segmentation by DL models (mean ± SD).

| Model | Precision | HD | Recall | | DSC |
|---|---|---|---|---|---|
| U-Net++ | 0.75 ± 0.24 | 42.12 ± 33.54 | 0.82 ± 0.15 | 5.42 | 0.87 ± 0.18 |
| Attention U-Net | 0.85 ± 0.12 | 35.42 ± 28.52 | 0.72 ± 0.17 | 1.08 | 0.82 ± 0.13 |
| ResU-Net-c | 0.92 ± 0.05 | 28.42 ± 27.6 | 0.74 ± 0.18 | 1.2 | 0.88 ± 0.15 |
| U-Net | 0.82 ± 0.21 | 38.45 ± 21.5 | 0.82 ± 0.19 | 1.32 | 0.74 ± 0.17 |
| ReU-Net (proposed) | 0.94 ± 0.07 | 25.42 ± 21.76 | 0.92 ± 0.18 | — | 0.91 ± 0.08 |
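The mean ± SD figures above aggregate per-image (or per-fold) scores. A generic sketch of producing such a summary, using hypothetical Dice scores that are illustrative only and not the paper's data:

```python
import numpy as np

# Hypothetical per-image Dice scores from one model (illustrative only).
scores = np.array([0.90, 0.93, 0.88, 0.95, 0.91])

mean = scores.mean()
sd = scores.std(ddof=1)  # sample standard deviation across images
print(f"{mean:.2f} \u00b1 {sd:.2f}")  # 0.91 ± 0.03
```

Whether the SD is taken over images or over cross-validation folds changes its interpretation; the source does not specify which convention is used here.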
Figure 4. Overall outcome of the proposed ReU-Net model.