Stefano Trebeschi1,2, Joost J M van Griethuysen1,2, Doenja M J Lambregts1, Max J Lahaye1, Chintan Parmar3, Frans C H Bakers4, Nicky H G M Peters5, Regina G H Beets-Tan1,2, Hugo J W L Aerts6,7.
Abstract
Multiparametric magnetic resonance imaging (MRI) can provide detailed information on the physical characteristics of rectal tumours. Several investigations suggest that volumetric analyses of anatomical and functional MRI contain clinically valuable information. However, manual delineation of tumours is a time-consuming procedure that requires a high level of expertise. Here, we evaluate deep learning methods for automatic localization and segmentation of rectal cancers on multiparametric MR imaging. MRI scans (1.5T, T2-weighted, and DWI) of 140 patients with locally advanced rectal cancer were included in our analysis, equally divided between discovery and validation datasets. Two expert radiologists segmented each tumour. A convolutional neural network (CNN) was trained on the multiparametric MRIs of the discovery set to classify each voxel as tumour or non-tumour. On the independent validation dataset, the CNN showed high segmentation accuracy for reader 1 (Dice Similarity Coefficient (DSC) = 0.68) and reader 2 (DSC = 0.70). The area under the curve (AUC) of the resulting probability maps was very high for both readers, AUC = 0.99 (SD = 0.05). Our results demonstrate that deep learning can perform accurate localization and segmentation of rectal cancer on MR imaging in the majority of patients. Deep learning technologies have the potential to improve the speed and accuracy of MRI-based rectal tumour segmentation.
Entities:
Mesh:
Year: 2017 PMID: 28706185 PMCID: PMC5509680 DOI: 10.1038/s41598-017-05728-9
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
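The abstract evaluates the CNN with the Dice Similarity Coefficient of the segmentations and the voxel-wise AUC of the probability maps. For reference, a minimal sketch of both metrics, assuming 3D NumPy volumes; the variable names (prob_map, cnn_mask, reader_mask) are illustrative and not from the paper:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean volumes of equal shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy volumes standing in for a CNN probability map and a reader delineation.
rng = np.random.default_rng(0)
prob_map = rng.random((16, 64, 64))            # voxel-wise P(tumour)
cnn_mask = prob_map > 0.5                      # thresholded segmentation
reader_mask = np.zeros_like(cnn_mask)
reader_mask[4:12, 20:40, 20:40] = True         # hypothetical reader mask

print("DSC:", dice(cnn_mask, reader_mask))
print("AUC:", roc_auc_score(reader_mask.ravel(), prob_map.ravel()))
```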
Figure 1. Example of multiparametric MR in a rectal cancer patient. mpMR of the pelvis of a male patient with rectal cancer before the start of treatment. Corresponding slices of the different sequences in the transverse plane are shown. (a) The sequences are, in order: T2-weighted, DWI B1000, DWI B0, and fusion imaging of the T2-weighted and DWI B1000 sequences. Note how anatomical structures and tissues surrounding the tumour (such as the prostate, bladder, and seminal vesicles), as well as artefacts in general, show the same hyper-intensity on DWI as the tumour. (b) Delineations of the tumour, from left to right: the experienced reader used for training, the independent reader, the result of the algorithm, and the corresponding probability map generated by the algorithm.
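For readers who want to reproduce the fusion view in panel (a), a hedged sketch of one common way to render it, alpha-blending the DWI B1000 slice over the T2-weighted slice; the paper does not specify how its fusion images were displayed, and the arrays here are random stand-ins:

```python
import numpy as np
import matplotlib.pyplot as plt

t2_slice = np.random.rand(256, 256)       # stand-in for a T2-weighted slice
dwi_b1000 = np.random.rand(256, 256)      # stand-in for the DWI B1000 slice

plt.imshow(t2_slice, cmap="gray")                 # anatomical background
plt.imshow(dwi_b1000, cmap="hot", alpha=0.4)      # semi-transparent overlay
plt.axis("off")
plt.title("T2w / DWI B1000 fusion (illustrative)")
plt.show()
```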
Patient Characteristics.
|  | Centre A | Centre B | Both Centres | p-value |
|---|---|---|---|---|
| N | 91 | 49 | 140 | — |
| Males/Females | 66/25 | 31/18 | 97/43 | — |
| Age (years) | 66.6 ± 9.3 | 65.6 ± 9.8 | 66.2 ± 9.4 | — |
| Tumour Volumeα (cm³) | 19.0 ± 22.3 | 23.8 ± 29.3 | 20.7 ± 25.0 | — |
α according to the segmentation performed by the experienced reader. No significant differences were found between the two centres.
Sequence parameters of the diffusion-weighted imaging used during the study period.
| Parameter | Centre A | Centre A | Centre A | Centre B | Centre B |
|---|---|---|---|---|---|
| Repetition Time (ms) | 4004–4829 | 4971 | 4172–5241 | 5100 | 4300 |
| Echo Time (ms) | 70 | 70 | 68–70 | 79 | |
| Number of Slices | 50 | 24 | 20–24 | 34 | 34 |
| Slice Thickness (mm) | 5 | 5 | 5 | 5 | 6 |
| Slice Gap (mm) | 0.5 | 0.5 | 0.5 | 0.5 | 0 |
| In-Plane Resolution (mm) | 2.50 × (3.11–3.18) | 1.87 × 2.31 | 1.82 × 2.27 | 1.70 × 1.30 | 2.0 × 2.0 |
| Echo Train Length | 1 | 1 | 1 | 1 | 1 |
| No. of Signal Averages | 4 | 5 | 5 | 6 | 6 |
| b-values (s/mm²) | 0, (100), 500, 1000 | 0, 500, 1000 | 0, (25, 50, 100), 500, 1000 | 0, 500, 1000 | 0, 300, 1100 |
| Fat Suppression Technique | STIRδ | SPIRγ/fatsatα | SPAIRβ | SPIRγ/fatsatα | SPIRγ/fatsatα |
| EPI Factor | 53–55 | 55 | 61 | 148 | 150 |
αFat Saturation, βSpectral Attenuated Inversion Recovery, γSpectral Pre-saturation with Inversion Recovery, δShort T1 Inversion Recovery.
Figure 2. Scheme of the proposed solution. (a) On the left-hand side, a multiparametric representation of the imaging is created by fusing corresponding slices from the different sequences into the colour channels of the RGB model. In the centre, the label map is first divided into the tumour region (RT) and the background region (RB) according to the delineation made by the experienced reader. On the right-hand side, N voxels (together with their surrounding patches) are then randomly sampled from these regions to maintain a balance between the number of voxels representing tumour and the number representing healthy tissue. (b) The architecture of the network, which is trained on the patches from the images in the discovery set. The patches from the images in the test set are used to control for model overfitting. (c) The 3D probability map is generated by classifying each voxel with the trained model. The probability map is thresholded to find the components where the probability of tumour is higher than the probability of healthy tissue, and the largest component is selected as the segmentation of the tumour.
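A minimal sketch of the fusion and balanced sampling steps in panel (a), assuming co-registered NumPy volumes; the patch size (half=12) and helper names (fuse_rgb, sample_patches) are illustrative choices, not taken from the paper:

```python
import numpy as np

def fuse_rgb(t2, b1000, b0):
    """Stack co-registered sequences into the three channels of an RGB model."""
    norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-8)   # rescale to [0, 1]
    return np.stack([norm(t2), norm(b1000), norm(b0)], axis=-1)

def sample_patches(volume, label_map, n, half=12, rng=None):
    """Draw n patch centres each from the tumour region RT (label 1) and the
    background region RB (label 0); patches clipped at the border are discarded."""
    rng = rng or np.random.default_rng()
    patches, labels = [], []
    for cls in (1, 0):
        zs, ys, xs = np.nonzero(label_map == cls)
        idx = rng.choice(len(zs), size=n, replace=True)
        for z, y, x in zip(zs[idx], ys[idx], xs[idx]):
            p = volume[z, y - half:y + half + 1, x - half:x + half + 1]
            if p.shape[:2] == (2 * half + 1, 2 * half + 1):
                patches.append(p)
                labels.append(cls)
    return np.asarray(patches), np.asarray(labels)

# Toy usage: an 8-slice fused volume with a cuboid "tumour" label map.
vol = fuse_rgb(*np.random.rand(3, 8, 64, 64))
lab = np.zeros((8, 64, 64), dtype=int)
lab[2:6, 20:40, 20:40] = 1
X, y = sample_patches(vol, lab, n=32)
```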
Figure 3. CNN training and validation. Performance of the CNN on the discovery dataset: (a) accuracy, (b) cross entropy, and (c) improvement (Δ cross entropy). The improvement shown in panel (c) is computed on the test set only, to prevent the model from overfitting. Performance of the CNN on the validation dataset: (d) the area under the ROC curve (AUC) of the probability map with respect to the reader segmentation, and (e) the Dice Similarity Coefficient (DSC) of the generated segmentations.
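A sketch of the early-stopping logic implied by panel (c), where training halts once the Δ cross entropy on the held-out test split stops improving; the paper does not publish its training loop, so step and eval_loss are hypothetical callables and the thresholds are placeholders:

```python
def train_with_early_stopping(step, eval_loss, max_epochs=100,
                              tol=1e-4, patience=3):
    """Stop when the test-set cross entropy improves by less than tol
    for `patience` consecutive epochs; returns the stopping epoch."""
    prev, stale = float("inf"), 0
    for epoch in range(max_epochs):
        step()                                  # one training pass
        loss = eval_loss()                      # cross entropy on the test split
        stale = stale + 1 if prev - loss < tol else 0
        prev = loss
        if stale >= patience:                   # Δ cross entropy has flattened
            return epoch
    return max_epochs

# Toy demo: a loss curve that flattens quickly triggers the stop.
losses = iter([2.0 ** -e for e in range(100)])
print(train_with_early_stopping(lambda: None, lambda: next(losses)))
```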
Figure 4. Example cases. Six example cases of the segmentation performed by the CNN. The algorithm correctly localized and segmented the tumour in cases I to IV (small-FOV images), but failed in cases V and VI (larger FOVs), where parts of the cavernous bodies of the penis were erroneously included in the segmentation.
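The failure mode in cases V and VI follows from the post-processing described in Figure 2c: after thresholding, only the largest connected component is kept, so on a large FOV a bright distant structure can displace the true tumour. A minimal sketch of that step using SciPy; the 0.5 threshold mirrors "probability of tumour higher than healthy tissue", and the rest is an illustrative reading rather than the authors' code:

```python
import numpy as np
from scipy import ndimage

def largest_component(prob_map: np.ndarray) -> np.ndarray:
    """Binary mask of the largest 3D component where P(tumour) > 0.5."""
    binary = prob_map > 0.5
    labelled, n = ndimage.label(binary)            # 3D connected components
    if n == 0:
        return binary                              # nothing above threshold
    sizes = ndimage.sum(binary, labelled, index=range(1, n + 1))
    return labelled == (np.argmax(sizes) + 1)      # keep only the biggest blob

# Toy demo: of two blobs, only the larger survives the post-processing.
pm = np.zeros((4, 32, 32))
pm[1, 2:5, 2:5] = 0.9          # small spurious blob
pm[2, 10:25, 10:25] = 0.9      # larger "tumour" blob
print(largest_component(pm).sum())   # voxel count of the larger blob only
```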