OBJECTIVES: To develop a U-Net-based deep learning approach (U-DL) for bladder segmentation in computed tomography urography (CTU) as a part of a computer-assisted bladder cancer detection and treatment response assessment pipeline. MATERIALS AND METHODS: A dataset of 173 cases, including 81 cases in the training/validation set (42 masses, 21 with wall thickening, 18 normal bladders) and 92 cases in the test set (43 masses, 36 with wall thickening, 13 normal bladders), was used with Institutional Review Board approval. An experienced radiologist provided three-dimensional (3D) hand outlines for all cases as the reference standard. We previously developed a bladder segmentation method that used a deep learning convolutional neural network and level sets (DCNN-LS) within a user-input bounding box. However, some cases with poor image quality or with advanced bladder cancer spreading into the neighboring organs caused inaccurate segmentation. We have newly developed an automated U-DL method to estimate a likelihood map of the bladder in CTU. The U-DL required neither a user-input bounding box nor level-set postprocessing. To identify the best model for this task, we compared the following models: (a) two-dimensional (2D) U-DL and 3D U-DL using 2D CT slices and 3D CT volumes, respectively, as input; (b) U-DLs using CT images of different resolutions as input; and (c) U-DLs with and without automated cropping of the bladder as an image preprocessing step. The segmentation accuracy relative to the reference standard was quantified by six measures: average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), average Hausdorff distance (AHD), and average Jaccard index (AJI). As a baseline, the results from our previous DCNN-LS method were used.
RESULTS: In the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, AHD, and AJI values of 93.4 ± 9.5%, -4.2 ± 14.2%, 9.2 ± 11.5%, 2.7 ± 2.5 mm, 9.7 ± 7.6 mm, and 85.0 ± 11.3%, respectively, while the corresponding measures by the best 3D U-DL were 90.6 ± 11.9%, -2.3 ± 21.7%, 11.5 ± 18.5%, 3.1 ± 3.2 mm, 11.4 ± 10.0 mm, and 82.6 ± 14.2%, respectively. For comparison, the corresponding values obtained with the baseline method for the same test set were 81.9 ± 12.1%, 10.2 ± 16.2%, 14.0 ± 13.0%, 3.6 ± 2.0 mm, 12.8 ± 6.1 mm, and 76.2 ± 11.8%, respectively. The improvements in all measures between the best U-DL and the DCNN-LS were statistically significant (P < 0.001). CONCLUSION: Compared to the previous DCNN-LS method, which depended on a user-input bounding box, the U-DL provided more accurate bladder segmentation and required no user interaction.
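The volume-overlap measures reported above can be computed directly from binary segmentation and reference masks. The sketch below uses common conventions for the volume intersection ratio, signed percent volume error, and Jaccard index; the paper's exact definitions (e.g., the sign convention of AVE and the normalization of AVI) are assumptions here, and the distance-based measures (AMD, AHD) are omitted.

```python
import numpy as np

def volume_metrics(seg, ref):
    """Per-case overlap metrics for binary masks (nonzero = bladder).

    Assumed conventions (may differ from the paper's definitions):
      VI  = |seg AND ref| / |ref|        (volume intersection ratio, %)
      VE  = (|ref| - |seg|) / |ref|      (signed percent volume error, %)
      AVE = |VE|                          (absolute percent volume error, %)
      JI  = |seg AND ref| / |seg OR ref|  (Jaccard index, %)
    """
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    ref_vol = ref.sum()
    vi = 100.0 * inter / ref_vol
    ve = 100.0 * (ref_vol - seg.sum()) / ref_vol
    ji = 100.0 * inter / union
    return vi, ve, abs(ve), ji
```

Averaging each metric over all test cases would then yield the per-dataset AVI, AVE, AAVE, and AJI values.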