Qi Peng1, Xingcai Chen1, Chao Zhang2, Wenyan Li2, Jingjing Liu1, Tingxin Shi3, Yi Wu1, Hua Feng2, Yongjian Nian1, Rong Hu2.
Abstract
The study aims to enhance the accuracy and practicability of CT image segmentation and volume measurement of intracerebral hemorrhage (ICH) using deep learning. A dataset comprising the brain CT images and clinical data of 1,027 patients with spontaneous ICH treated from January 2010 to December 2020 was retrospectively analyzed, and a deep segmentation network (AttFocusNet) integrating the focus structure and the attention gate (AG) mechanism is proposed to enable automatic, accurate CT image segmentation and volume measurement of ICH. On the internal validation set, AttFocusNet achieved a Dice coefficient of 0.908, an intersection-over-union (IoU) of 0.874, a sensitivity of 0.913, a positive predictive value (PPV) of 0.957, and a 95% Hausdorff distance (HD95) of 5.960 mm. The intraclass correlation coefficient (ICC) of the ICH volume measurement between AttFocusNet and the ground truth was 0.997. The average per-case processing times of AttFocusNet, the Coniglobus formula, and manual segmentation were 5.6, 47.7, and 170.1 s, respectively. On the two external validation sets, AttFocusNet achieved Dice coefficients of 0.889 and 0.911, IoUs of 0.800 and 0.836, sensitivities of 0.817 and 0.849, PPVs of 0.976 and 0.981, and HD95 values of 5.331 and 4.220 mm, respectively. The ICCs of the ICH volume measurement between AttFocusNet and the ground truth were 0.939 and 0.956, respectively. The proposed segmentation network AttFocusNet significantly outperforms the Coniglobus formula in ICH segmentation and volume measurement, producing measurements closer to the true ICH volume while substantially reducing the clinical workload.
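The overlap metrics reported throughout (Dice, IoU, sensitivity, PPV) all derive from the voxel-wise true/false positive and negative counts between a predicted mask and the ground truth. A minimal sketch of how they are typically computed from two binary masks (the function name is illustrative, not from the paper):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, IoU, sensitivity, and PPV from two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()      # voxels labeled 1 by both
    fp = np.logical_and(pred, ~gt).sum()     # predicted 1, truth 0
    fn = np.logical_and(~pred, gt).sum()     # predicted 0, truth 1
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)             # a.k.a. recall
    ppv = tp / (tp + fp)                     # a.k.a. precision
    return dice, iou, sensitivity, ppv

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1]])
d, i, s, p = overlap_metrics(pred, gt)
print(f"Dice={d:.3f} IoU={i:.3f} Sens={s:.3f} PPV={p:.3f}")  # Dice=0.667 IoU=0.500 Sens=0.667 PPV=0.667
```

Note that Dice is always at least as large as IoU (Dice = 2·IoU/(1+IoU)), which is consistent with every row of the result tables below.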
Keywords: computed tomography; deep learning; intracerebral hemorrhage; segmentation; volume measurement
Year: 2022 PMID: 36263364 PMCID: PMC9575984 DOI: 10.3389/fnins.2022.965680
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Description of the data features of the training set, validation set, and testing set.
| Item | Training set | Validation set | Testing set | P-value |
| Number of cases (n) | 718 | 102 | 207 | |
| Age (Mean ± SD, years) | 57.19 ± 12.67 | 55.56 ± 12.95 | 55.93 ± 13.33 | 0.211 |
| GCS [Median (Min, Max)] | 13 (3, 15) | 13 (3, 15) | 13 (5, 15) | 0.896 |
| ICH score [Median (Min, Max)] | 1 (0, 4) | 1 (0, 4) | 1 (0, 4) | 0.664 |
| Hemorrhage volume (Mean ± SD, mL) | 32.65 ± 22.97 | 33.10 ± 25.90 | 33.83 ± 20.19 | 0.323 |
| Intraventricular extension of intracerebral hemorrhage (n) | 277 | 40 | 78 | |
| Hemorrhage location (n) | | | | |
| Basal ganglia | 551 | 77 | 154 | |
| Lobe | 143 | 20 | 46 | |
| Brain stem | 5 | 0 | 1 | |
| Cerebellum | 3 | 0 | 1 | |
| Ventricle | 16 | 5 | 5 | |
GCS, Glasgow coma scale; ICH, intracerebral hemorrhage; SD, standard deviation.
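The Coniglobus formula, which AttFocusNet is compared against for volume measurement, approximates a hematoma as an ellipsoid from three orthogonal diameters. A hedged sketch (the paper does not spell out its exact variant; shown here are the standard π/6·A·B·C ellipsoid form and the common ABC/2 bedside simplification, with illustrative function names):

```python
import math

def coniglobus_volume(a_cm, b_cm, c_cm):
    """Ellipsoid (Coniglobus) estimate: V = pi/6 * A * B * C.
    A, B: largest perpendicular diameters (cm) on the axial slice with
    the largest hemorrhage area; C: vertical extent (slice count x thickness)."""
    return math.pi / 6.0 * a_cm * b_cm * c_cm

def abc_over_2(a_cm, b_cm, c_cm):
    """Widespread simplification of the above, since pi/6 ~ 0.52 ~ 1/2."""
    return a_cm * b_cm * c_cm / 2.0

print(round(coniglobus_volume(5.0, 4.0, 3.0), 2))  # 31.42 (mL)
print(abc_over_2(5.0, 4.0, 3.0))                   # 30.0 (mL)
```

Because such formulas assume an ellipsoidal shape, their error grows for irregular hematomas, which is the motivation for voxel-wise deep segmentation.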
FIGURE 1 The structure diagram of AttFocusNet based on focus and the AG (D* and U* represent the convolutions in the encoding process and decoding process, respectively).
FIGURE 2 The scheme of the focus structure.
FIGURE 3 The structure diagram of the 2D AG.
Comparison of the segmentation performance of different methods.
| Algorithm | Dice | IoU | Sensitivity | PPV | HD95 (mm) |
| UNet ++ | 0.900 | 0.865 | 0.911 | 0.950 | 6.829 |
| AttUNet | 0.893 | 0.859 | 0.899 | 0.956 | 7.253 |
| PraNet | 0.788 | 0.716 | 0.795 | 0.905 | 16.01 |
| 3DUNet | 0.857 | 0.806 | 0.866 | 0.930 | 6.873 |
| UNETR | 0.700 | 0.638 | 0.825 | 0.799 | 13.668 |
| AttFocusNet | 0.908 | 0.874 | 0.913 | 0.957 | 5.960 |
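Unlike the overlap metrics, HD95 measures boundary agreement: the 95th percentile of the surface-to-surface distances between the two masks, robust to a few outlier voxels. A minimal sketch over explicit boundary point sets (a brute-force illustration; practical pipelines use optimized distance transforms):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets,
    e.g., boundary voxel coordinates (in mm) of two segmentation masks."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # Pairwise Euclidean distances: shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in a to its nearest point in b
    d_ba = d.min(axis=0)  # each point in b to its nearest point in a
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

print(hd95([[0, 0]], [[3, 4]]))  # 5.0
```

Lower is better; the HD95 columns above and below are in mm, matching the header.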
FIGURE 4 Distribution of the segmentation performance of different deep models (white dots, squares, vertical lines, and peaks represent the median, interquartile range, 95% confidence interval, and data density distribution, respectively).
FIGURE 5 Comparison of the segmentation results of different methods.
FIGURE 6 Computed tomography (CT) images and segmentation results corresponding to the difference in volume measurements between AttFocusNet and the Coniglobus formula. (A) A difference of 1.86 mL between volume measurements. (B) A difference of 74.05 mL between volume measurements.
FIGURE 7 Three-dimensional (3D) visualization of an ICH by Mimics.
FIGURE 8 Comparison of the consistency.
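The volume-agreement figures rest on the intraclass correlation coefficient (ICC) between each method's volumes and the ground truth. The paper does not state which ICC variant it uses; a common choice for method-vs-reference absolute agreement is ICC(2,1) (two-way random effects, single measure), sketched here as an illustration:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, single measure, absolute agreement,
    for an (n subjects x k raters/methods) matrix of measurements."""
    y = np.asarray(ratings, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ms_r = k * ((y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    ms_c = n * ((y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))             # residual error
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Columns: hypothetical ground-truth vs. automated volumes (mL) for 4 cases.
print(round(icc_2_1([[30.0, 30.5], [12.0, 12.2], [55.0, 54.1], [8.0, 8.3]]), 3))
```

An ICC near 1 (e.g., the 0.997 reported internally) indicates the automated volumes track the ground truth almost exactly; a systematic offset between methods lowers absolute-agreement ICC even when the correlation is perfect.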
Comparison of the segmentation performance of different methods on CQ500 dataset.
| Algorithm | Dice | IoU | Sensitivity | PPV | HD95 (mm) |
| UNet ++ | 0.883 | 0.790 | 0.826 | 0.947 | 13.584 |
| AttUNet | 0.879 | 0.783 | 0.812 | 0.957 | 11.312 |
| PraNet | 0.809 | 0.680 | 0.747 | 0.882 | 24.161 |
| 3DUNet | 0.842 | 0.727 | 0.782 | 0.913 | 37.052 |
| UNETR | 0.714 | 0.555 | 0.604 | 0.873 | 69.893 |
| AttFocusNet | 0.889 | 0.800 | 0.817 | 0.976 | 5.331 |
Comparison of the segmentation performance of different methods on RSNA2019 dataset.
| Algorithm | Dice | IoU | Sensitivity | PPV | HD95 (mm) |
| UNet ++ | 0.895 | 0.810 | 0.834 | 0.965 | 3.735 |
| AttUNet | 0.896 | 0.811 | 0.829 | 0.974 | 4.733 |
| PraNet | 0.775 | 0.633 | 0.647 | 0.966 | 6.842 |
| 3DUNet | 0.820 | 0.694 | 0.707 | 0.975 | 10.838 |
| UNETR | 0.361 | 0.220 | 0.572 | 0.264 | 133.924 |
| AttFocusNet | 0.911 | 0.836 | 0.849 | 0.981 | 4.220 |
FIGURE 9 Comparison of the consistency for external validation. (A) CQ500. (B) RSNA2019.