| Literature DB >> 30838121 |
Mumtaz Hussain Soomro1, Matteo Coppotelli1, Silvia Conforto1, Maurizio Schmid1, Gaetano Giunta1, Lorenzo Del Secco2, Emanuele Neri2, Damiano Caruso3, Marco Rengo3, Andrea Laghi3.
Abstract
The main goal of this work is to automatically segment colorectal tumors in 3D T2-weighted (T2w) MRI with reasonable accuracy. For this purpose, a novel deep learning-based algorithm suited for volumetric colorectal tumor segmentation is proposed. The proposed CNN architecture, based on a densely connected neural network, contains multiscale dense interconnectivity between layers at fine and coarse scales, leveraging multiscale contextual information for a better flow of information throughout the network. Additionally, a 3D level-set algorithm was incorporated as a postprocessing step to refine the contours of the network-predicted segmentation. The method was assessed on 3D T2-weighted MRI of 43 patients diagnosed with locally advanced colorectal tumors (cT3/T4). Cross-validation was performed over 100 rounds by partitioning the dataset into 30 volumes for training and 13 for testing. Three performance metrics were computed to assess the similarity between the predicted segmentation and the ground truth (i.e., manual segmentation by an expert radiologist/oncologist): Dice similarity coefficient (DSC), recall rate (RR), and average surface distance (ASD). All metrics are reported as mean ± standard deviation. Before postprocessing, the DSC, RR, and ASD were 0.8406 ± 0.0191, 0.8513 ± 0.0201, and 2.6407 ± 2.7975 mm, respectively; after postprocessing, they improved to 0.8585 ± 0.0184, 0.8719 ± 0.0195, and 2.5401 ± 2.402 mm. We compared the proposed method to existing volumetric medical image segmentation baselines (in particular, 3D U-net and DenseVoxNet) on our segmentation task. The experimental results reveal that the proposed method achieves better colorectal tumor segmentation performance in volumetric MRI than the other baseline techniques.
Year: 2019 PMID: 30838121 PMCID: PMC6374810 DOI: 10.1155/2019/1075434
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
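The DSC and RR reported in the abstract are standard overlap measures for binary segmentation masks. As a hedged illustration (a minimal NumPy sketch with a hypothetical helper name, not the authors' code), they can be computed as:

```python
import numpy as np

def dice_and_recall(pred, gt):
    """Dice similarity coefficient and recall rate for binary volumes.

    pred, gt: boolean (or 0/1) arrays of identical shape.
    DSC = 2*TP / (|pred| + |gt|); RR = TP / |gt|.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive voxels
    dsc = 2.0 * tp / (pred.sum() + gt.sum())     # overlap of the two masks
    rr = tp / gt.sum()                           # fraction of tumor recovered
    return dsc, rr
```

Recall equals DSC only when the predicted and ground-truth volumes have the same size; otherwise RR isolates how much of the tumor was missed (false negatives), which is clinically the more critical error.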
Figure 1 An illustration of colorectal tumor location, intensity, and size variation across different slices of the same volume; the cancerous region is contoured in red.
Figure 2 Block diagram of the proposed method.
Figure 3 Comparison of learning curves of the examined methods. (a–d) Learning curves corresponding to 3D FCNNs, 3D U-net, DenseVoxNet, and the proposed 3D MSDenseNet, respectively.
Figure 4 Qualitative comparison of colorectal tumor segmentation results produced by each method. In (a), the first two columns from the left show the raw MRI input volume and the cropped volume; the next three columns show the probability map predicted by 3D FCNNs and the segmentation results of 3D FCNNs (red) and 3D FCNNs + 3D level set (red) overlaid on the ground truth (green). The subsequent groups of three columns show, in the same order, the predicted probability and segmentation results of the remaining methods: 3D U-net (red) and 3D U-net + 3D level set (red); DenseVoxNet (red) and DenseVoxNet + 3D level set (red); and 3D MSDenseNet (red) and 3D MSDenseNet + 3D level set (red). In (b), the 3D mask segmented by each method is overlaid on the ground-truth 3D mask: from left to right, the ground-truth 3D mask, followed by the masks of 3D FCNNs (red), 3D FCNNs + 3D level set (red), 3D U-net (red), 3D U-net + 3D level set (red), DenseVoxNet (red), DenseVoxNet + 3D level set (red), 3D MSDenseNet (red), and 3D MSDenseNet + 3D level set (red) against the ground truth (green points). Green points not covered by a method's segmentation result (red) are false negatives.
Quantitative comparison of colorectal tumor segmentation results.
| Methods | DSC | RR | ASD (mm) |
|---|---|---|---|
| 3D FCNNs [ | 0.6519 ± 0.0181 | 0.6858 ± 0.1017 | 4.2613 ± 3.1603 |
| 3D U-net [ | 0.7227 ± 0.0128 | 0.7463 ± 0.0302 | 3.0173 ± 3.0133 |
| DenseVoxNet [ | 0.7826 ± 0.0146 | 0.8061 ± 0.0187 | 2.7253 ± 2.9024 |
| 3D MSDenseNet (proposed method) | 0.8406 ± 0.0191 | 0.8513 ± 0.0201 | 2.6407 ± 2.7975 |
| 3D FCNNs + 3D level set [ | 0.7591 ± 0.0169 | 0.7903 ± 0.0183 | 3.0029 ± 2.9819 |
| 3D U-net + 3D level set | 0.8217 ± 0.0173 | 0.8394 ± 0.0193 | 2.8815 ± 2.6901 |
| DenseVoxNet + 3D level set | 0.8261 ± 0.0139 | 0.8407 ± 0.0177 | |
| 3D MSDenseNet + 3D level set (proposed method) | 0.8585 ± 0.0184 | 0.8719 ± 0.0195 | 2.5401 ± 2.402 |
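The ASD column measures boundary agreement rather than overlap: the symmetric mean distance between the surface voxels of the predicted and ground-truth masks. A hedged pure-NumPy sketch follows; the helper names are hypothetical, and the brute-force nearest-neighbour search is only practical for small volumes (real pipelines typically use a distance transform):

```python
import numpy as np

def surface_voxels(mask):
    """Coordinates of mask voxels that touch the background (6-connectivity)."""
    mask = mask.astype(bool)
    eroded = mask.copy()
    for ax in range(mask.ndim):
        eroded &= np.roll(mask, 1, ax) & np.roll(mask, -1, ax)
    return np.argwhere(mask & ~eroded)   # boundary = mask minus its erosion

def average_surface_distance(pred, gt, spacing=1.0):
    """Symmetric average surface distance, in the units of `spacing` (e.g., mm)."""
    sp, sg = surface_voxels(pred), surface_voxels(gt)
    # brute-force pairwise distances between the two surface point sets
    d = np.linalg.norm(sp[:, None, :] - sg[None, :, :], axis=-1) * spacing
    # average the two directed mean distances to make the metric symmetric
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Unlike DSC, a lower ASD is better, and the large standard deviations in the table (comparable to the means) suggest that a few cases with poorly localized boundaries dominate the surface-distance statistics.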