Nooshin Ghavami¹,², Yipeng Hu¹,², Ester Bonmati¹,², Rachael Rodell¹,², Eli Gibson¹,², Caroline Moore²,³,⁴, Dean Barratt¹,².
Abstract
Image guidance systems that register scans of the prostate obtained using transrectal ultrasound (TRUS) and magnetic resonance imaging are becoming increasingly popular as a means of enabling tumor-targeted prostate cancer biopsy and treatment. However, intraoperative segmentation of TRUS images to define the three-dimensional (3-D) geometry of the prostate remains a necessary task in existing guidance systems, one that often requires significant manual interaction and is subject to interoperator variability. Automating this step would therefore lead to more acceptable clinical workflows and greater standardization between different operators and hospitals. In this work, a convolutional neural network (CNN) for automatically segmenting the prostate in two-dimensional (2-D) slices of a 3-D TRUS volume was developed and tested. The network was designed to incorporate 3-D spatial information by taking, in addition to each slice to be segmented, one or more of its neighboring TRUS slices as input. The accuracy of the CNN was evaluated on data from a cohort of 109 patients who had undergone TRUS-guided targeted biopsy (a total of 4034 2-D slices). Segmentation accuracy was measured by calculating 2-D and 3-D Dice similarity coefficients (DSCs) on the 2-D images and corresponding 3-D volumes, respectively, as well as 2-D boundary distances, using a 10-fold patient-level cross-validation experiment. Incorporating neighboring slices did not improve the segmentation performance in five out of six experiments, in which the number of neighboring slices on each side was varied from one to three. The up-sampling shortcuts reduced the overall training time of the network to 161 min, compared with 253 min without this architectural addition.
Keywords: convolutional neural networks; prostate cancer; registration; segmentation; transrectal ultrasound
Year: 2018 PMID: 30840715 PMCID: PMC6102407 DOI: 10.1117/1.JMI.6.1.011003
Source DB: PubMed Journal: J Med Imaging (Bellingham) ISSN: 2329-4302
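As a concrete illustration of the multislice input described in the abstract, the following minimal PyTorch sketch shows one plausible way a 2-D encoder-decoder CNN can take the slice to be segmented plus n neighboring slices on each side as stacked input channels. The class name, layer sizes, and up-sampling decoder here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class SliceStackSegNet(nn.Module):
    """Minimal 2-D encoder-decoder that accepts a center slice plus
    n neighboring slices on each side as input channels (a hypothetical
    stand-in for the paper's CNN, not the published architecture)."""

    def __init__(self, neighbors_per_side: int = 1, base_channels: int = 16):
        super().__init__()
        in_ch = 2 * neighbors_per_side + 1  # e.g., 3 channels for n = 1
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base_channels, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(base_channels, 2 * base_channels, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(2 * base_channels, base_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(base_channels, 1, 1),  # binary prostate mask logits
        )

    def forward(self, x):  # x: (batch, 2n+1, H, W)
        return self.decoder(self.encoder(x))

# Usage: segment each slice given its immediate neighbors (n = 1).
net = SliceStackSegNet(neighbors_per_side=1)
stack = torch.randn(4, 3, 128, 128)  # batch of 4 three-slice stacks
logits = net(stack)                  # (4, 1, 128, 128) mask logits
```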
Fig. 1 Proposed network architecture.
Fig. 2 Diagram illustrating the experiment for taking different percentages of midsection prostate slices.
Segmentation metrics obtained from the automatic segmentation results when using different numbers of neighboring slices.
| Number of neighboring slices included on each side | 2-D DSC | 3-D DSC | Boundary distance (mm) |
|---|---|---|---|
| None | 0.88 ± 0.13 | 0.88 ± 0.06 | 1.80 ± 1.68 |
| 1 | 0.89 ± 0.12 | 0.89 ± 0.05 | 1.79 ± 2.05 |
| 2 | 0.89 ± 0.13 | 0.88 ± 0.04 | 1.77 ± 1.46 |
| 3 | 0.89 ± 0.12 | 0.88 ± 0.05 | 1.75 ± 1.77 |
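The DSC and boundary-distance metrics reported above can be computed along the following lines. This is a minimal NumPy/SciPy sketch; the symmetric mean surface distance shown is one plausible reading of the paper's "boundary distance," not necessarily the authors' exact definition.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks
    (works for 2-D slices and 3-D volumes alike)."""
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_boundary_distance(a: np.ndarray, b: np.ndarray, spacing=1.0) -> float:
    """Symmetric mean distance (in the units of `spacing`) between the
    boundaries of two 2-D boolean masks -- an assumed definition of the
    paper's boundary-distance metric."""
    edge_a = a & ~binary_erosion(a)   # one-pixel boundary of mask a
    edge_b = b & ~binary_erosion(b)   # one-pixel boundary of mask b
    d_to_b = distance_transform_edt(~edge_b, sampling=spacing)
    d_to_a = distance_transform_edt(~edge_a, sampling=spacing)
    return 0.5 * (d_to_b[edge_a].mean() + d_to_a[edge_b].mean())
```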
Fig. 3 Example comparisons between manual (red) and automatic (blue) segmentations. (a)–(d) represent the 25th, 50th, 75th, and 100th percentiles, with DSCs of 0.84, 0.92, 0.95, and 0.98, respectively.
Segmentation metrics obtained from the automatic segmentation results when using slow and late fusion methods.
| Fusion method | 2-D DSC | 3-D DSC | Boundary distance (mm) |
|---|---|---|---|
| Slow (two adjacent slices on each side) | 0.89 ± 0.12 | 0.89 ± 0.05 | 1.68 ± 1.57 |
| Late (three adjacent slices on each side) | 0.86 ± 0.12 | 0.85 ± 0.06 | 2.15 ± 1.59 |
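The distinction between the two fusion strategies can be sketched as follows: late fusion processes each slice in its own branch and merges features only near the output, whereas slow fusion lets inter-slice information mix progressively through earlier layers (in the simplest form, by stacking the slices as input channels, as in the sketch after the abstract). The schematic late-fusion variant below is an assumption; the branch depth and channel counts are not the paper's configuration.

```python
import torch
import torch.nn as nn

class LateFusionSegNet(nn.Module):
    """Schematic late fusion: a shared-weight 2-D branch is applied to
    each slice independently, and per-slice features are merged only at
    the output head (illustrative, not the paper's exact layers)."""

    def __init__(self, num_slices: int = 7, feat: int = 16):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # Fusion happens only here, after per-slice feature extraction.
        self.head = nn.Conv2d(num_slices * feat, 1, 1)

    def forward(self, x):  # x: (batch, num_slices, H, W)
        feats = [self.branch(x[:, i:i + 1]) for i in range(x.shape[1])]
        return self.head(torch.cat(feats, dim=1))
```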
Fig. 4 Differences in the automatically segmented prostate when incorporating different numbers of neighboring slices. Manual segmentation (red) and automatic segmentations using one adjacent slice (blue), two adjacent slices (cyan), and three adjacent slices (yellow), overlaid on the original prostate slice.
Segmentation metrics obtained from the automatic segmentation results when taking different percentages of middle slices from each patient.
| Percentage of middle slices (%) | 2-D DSC | 3-D DSC | Boundary distance (mm) |
|---|---|---|---|
| 100 | 0.89 ± 0.12 | 0.89 ± 0.05 | 1.79 ± 2.05 |
| 90 | 0.88 ± 0.12 | 0.88 ± 0.06 | 1.90 ± 1.91 |
| 75 | 0.89 ± 0.10 | 0.89 ± 0.05 | 1.78 ± 1.56 |
| 60 | 0.89 ± 0.09 | 0.89 ± 0.05 | 1.83 ± 1.42 |
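Selecting a given percentage of midsection slices per patient, as in the table above, amounts to cropping symmetrically around the center of the slice axis. The helper below is a hypothetical sketch; the slice-axis ordering and rounding rule are assumptions.

```python
import numpy as np

def middle_fraction(volume: np.ndarray, fraction: float) -> np.ndarray:
    """Keep only the central `fraction` of slices along the first axis,
    mimicking the midsection-slice experiment (hypothetical helper;
    slice axis and rounding behavior are assumptions)."""
    n = volume.shape[0]
    keep = max(1, int(round(n * fraction)))
    start = (n - keep) // 2
    return volume[start:start + keep]

# e.g., keep the middle 60% of a 40-slice TRUS volume -> 24 slices
vol = np.zeros((40, 128, 128))
print(middle_fraction(vol, 0.60).shape)  # (24, 128, 128)
```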
Fig. 5 Interobserver segmentation comparisons for three example slices. Good visual agreement is shown between the manual segmentations from the two observers (red and green).