| Literature DB >> 33385701 |
Wei Shao, Linda Banh, Christian A Kunder, Richard E Fan, Simon J C Soerensen, Jeffrey B Wang, Nikola C Teslovich, Nikhil Madhuripan, Anugayathri Jawahar, Pejman Ghanouni, James D Brooks, Geoffrey A Sonn, Mirabela Rusu.
Abstract
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground-truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping of cancer labels from histopathology images onto MRI using the estimated transformations. We trained our neural network using MRI and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline achieves more accurate registration and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
Keywords: Histopathology; Image registration; MRI; deep learning; prostate cancer; radiology-pathology fusion
Year: 2020 PMID: 33385701 PMCID: PMC7856244 DOI: 10.1016/j.media.2020.101919
Source DB: PubMed Journal: Med Image Anal ISSN: 1361-8415 Impact factor: 8.545
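The final step the abstract describes, mapping cancer labels from histopathology onto MRI with the estimated transforms, amounts to warping a binary mask. Below is a minimal PyTorch sketch of such a label warp, assuming the affine case; the function and variable names are illustrative, not the ProsRegNet repository's API.

```python
# Minimal sketch (not the repository code): warping a binary cancer label
# from histopathology space onto the MRI grid with an estimated 2D affine
# transform. Nearest-neighbor sampling keeps the warped label binary.
import torch
import torch.nn.functional as F

def map_label_to_mri(label: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """label: (N, 1, H, W) binary mask; theta: (N, 2, 3) affine parameters."""
    grid = F.affine_grid(theta, label.shape, align_corners=False)
    return F.grid_sample(label.float(), grid, mode="nearest",
                         align_corners=False)

# Toy usage: an identity transform returns the mask unchanged.
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
theta_id = torch.tensor([[[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]]])
assert torch.allclose(mask, map_label_to_mri(mask, theta_id))
```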
Summary of datasets. We used 152 patients (111 + 16 + 25) from three cohorts. T2-w MRI: T2-weighted MRI; H&E: hematoxylin and eosin; TR: repetition time; TE: echo time; H: in-plane image height; W: in-plane image width; D: through-plane image depth.
| | Cohort 1 (Stanford) | | Cohort 2 (TCIA) | | Cohort 3 (TCIA) | |
|---|---|---|---|---|---|---|
| Number of patients | 111 | | 16 | | 25 | |
| Modality | MRI | Histology | MRI | Histology | MRI | Histology |
| Manufacturer | GE | - | Siemens | - | Philips | - |
| Coil type | Surface | - | Endorectal | - | Endorectal | - |
| Sequence | T2-w MRI | Whole-mount | T2-w MRI | Pseudo-whole-mount | T2-w MRI | Block-face whole-mount |
| Acquisition characteristics | TR: [3.9 s, 6.3 s], TE: [122 ms, 130 ms] | H&E stained, 3D-printed mold | TR: [3.7 s, 7.0 s], TE: 107 ms | H&E stained | TR: 8.9 s, TE: 120 ms | H&E stained, mold |
| Image size | H, W: {256, 512}, D: [24, 43] | H, W: [1663, 7556] | H, W: 320, D: [21, 31] | H, W: [2368, 6324] | H, W: 512, D: 26 | H, W: [496, 2881] |
| In-plane resolution (mm) | [0.27, 0.94] | {0.0081, 0.0162} | [0.41, 0.43] | 0.0072 | 0.27 | {0.0846, 0.0216} |
| Distance between slices | [3 mm, 5.2 mm] | [3 mm, 5.2 mm] | 4 mm | Freehand | 3 mm | 3 mm |
Fig. 1. Proposed pipeline for registration of MRI and histopathology images. The yellow rectangle highlights the prostate in the MRI slice. The preprocessed images I_hist and I_MRI represent the moving and the fixed images, respectively. Both images are fed into the image registration neural network to estimate θ, the affine and nonrigid transformation parameters. Cancer labels (the red outlines) on the histopathology slice are then deformed onto the MRI slice using the estimated transformations.
Fig. 2. Two-stage registration framework using deep neural networks (Rocco et al., 2017). The first stage estimates an affine transform that globally aligns the two images. The second stage uses the affine transform as initialization to determine a thin-plate spline transform. Composing the two transforms gives the resulting correspondence map between I_hist and I_MRI.
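As a rough illustration of the composition in Fig. 2, the sketch below warps the sampling grid with the affine estimate and then perturbs it with a dense displacement field. The dense field is a stand-in for the thin-plate spline stage, which the paper parameterizes by control points; all names and values are illustrative.

```python
# Rough sketch of the two-stage composition: an affine sampling grid is
# refined by a dense displacement field (a simplification of the actual
# thin-plate spline transform), then the moving image is sampled once.
import torch
import torch.nn.functional as F

def compose_and_warp(moving, theta_aff, displacement):
    """moving: (N,C,H,W); theta_aff: (N,2,3); displacement: (N,H,W,2)."""
    grid = F.affine_grid(theta_aff, moving.shape, align_corners=False)
    return F.grid_sample(moving, grid + displacement, align_corners=False)

moving = torch.rand(1, 1, 128, 128)
theta = torch.tensor([[[0.9, 0.0, 0.05],      # slight scale and shift
                       [0.0, 0.9, 0.00]]])
disp = 0.01 * torch.randn(1, 128, 128, 2)     # small nonrigid refinement
warped = compose_and_warp(moving, theta, disp)
```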
Fig. 3. Regression network for estimating transformation parameters from the correspondence map f (Rocco et al., 2017).
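The regression head in Fig. 3 can be pictured as follows: feature maps of the two images are correlated exhaustively, and the resulting correspondence map f is regressed to the transform parameters. This is a hedged sketch in the spirit of CNNGeometric (Rocco et al., 2017); the layer sizes and names are illustrative, not the paper's exact architecture.

```python
# Illustrative CNNGeometric-style regressor: the dense correlation map
# between two feature maps is regressed to transformation parameters
# (e.g., 6 values for a 2D affine). Layer sizes are arbitrary choices.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformRegressor(nn.Module):
    def __init__(self, feat_hw: int = 15, n_params: int = 6):
        super().__init__()
        # The correlation map has one channel per spatial location of the
        # second feature map, i.e. feat_hw * feat_hw channels.
        self.conv = nn.Sequential(
            nn.Conv2d(feat_hw * feat_hw, 128, 7, padding=3), nn.ReLU(),
            nn.Conv2d(128, 64, 5, padding=2), nn.ReLU(),
        )
        self.fc = nn.Linear(64 * feat_hw * feat_hw, n_params)

    def forward(self, feat_a, feat_b):
        n, c, h, w = feat_a.shape
        # L2-normalize channels, then correlate every location of A with
        # every location of B to build the correspondence map f.
        fa = F.normalize(feat_a, dim=1).view(n, c, h * w)
        fb = F.normalize(feat_b, dim=1).view(n, c, h * w)
        corr = torch.bmm(fb.transpose(1, 2), fa)   # (n, h*w, h*w)
        corr = corr.view(n, h * w, h, w)           # map f as a 2D image
        x = self.conv(F.relu(corr))
        return self.fc(x.flatten(1))               # transform parameters

reg = TransformRegressor()
f1, f2 = torch.rand(1, 256, 15, 15), torch.rand(1, 256, 15, 15)
theta = reg(f1, f2)   # shape (1, 6)
```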
Fig. 4. Generating the training dataset by applying known transformations. I is the original image, ϕ is either an affine or a thin-plate spline transform, and I′ is the image obtained by applying ϕ to I. Each tuple (I, I′, ϕ) is treated as one training example.
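A compact sketch of this self-supervised pair generation appears below, assuming random small affine perturbations; the sampling range is an arbitrary illustrative choice, and the deformable stage would be trained analogously by sampling thin-plate spline parameters.

```python
# Hedged sketch of Fig. 4's training-pair generation: sample a random
# transform phi, warp the image with it, and keep the tuple (I, I', phi)
# so the network can be trained to recover phi from the image pair.
import torch
import torch.nn.functional as F

def make_training_example(image: torch.Tensor):
    """image: (1, C, H, W). Returns (original, deformed, true parameters)."""
    # Random small 2D affine: identity plus bounded perturbations.
    phi = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]])
    phi = phi + 0.1 * (torch.rand(1, 2, 3) - 0.5)   # arbitrary range
    grid = F.affine_grid(phi, image.shape, align_corners=False)
    deformed = F.grid_sample(image, grid, align_corners=False)
    return image, deformed, phi

I, I_def, phi = make_training_example(torch.rand(1, 1, 256, 256))
```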
Fig. 5. Training and validation loss curves of the ProsRegNet affine and deformable registration networks.
Fig. 6. Typical deformed grid images from ProsRegNet registration.
Fig. 7. Registration results for three different subjects (one from each cohort) using the proposed ProsRegNet deep learning registration pipeline. The MRI slices were chosen as the fixed images. (Left) MRI, (middle) registered histopathology image, (right) MRI overlaid with the registered histopathology image. Cancer labels from the histopathology images were mapped onto MRI using the estimated transformations.
Fig. 8. Box plots of the evaluation measures for the RAPSODI, CNNGeometric, and ProsRegNet registration approaches on the three cohorts. SS: statistically significant (p ≤ 0.05); NS: not significant (p > 0.05).
Registration results of the RAPSODI, CNNGeometric, and ProsRegNet approaches on the three cohorts.
| Dataset | Registration | Dice coefficient | Hausdorff distance (mm) | Urethra deviation (mm) | Landmark error (mm) | Computation time |
|---|---|---|---|---|---|---|
| Cohort 1 | RAPSODI | | 1.83 (± 0.50) | 2.48 (± 0.78) | 2.88 (± 0.73) | 264 (± 150) |
| | CNNGeometric | 0.962 (± 0.01) | 2.43 (± 0.83) | 2.62 (± 0.86) | 2.72 (± 0.75) | |
| | ProsRegNet | 0.975 (± 0.01) | | | | |
| Cohort 2 | RAPSODI | | 2.58 (± 1.05) | 2.96 (± 1.23) | NA | 60 (± 47) |
| | CNNGeometric | 0.948 (± 0.01) | 3.05 (± 0.69) | 2.78 (± 2.03) | NA | |
| | ProsRegNet | 0.961 (± 0.01) | | | NA | |
| Cohort 3 | RAPSODI | 0.966 (± 0.01) | 2.62 (± 1.32) | 3.3 (± 1.90) | NA | 31 (± 11) |
| | CNNGeometric | 0.946 (± 0.01) | 2.68 (± 0.33) | | NA | |
| | ProsRegNet | | | 2.91 (± 1.99) | NA | |
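For reference, the two headline overlap metrics reported in this table can be computed on binary masks as sketched below. This is a NumPy/SciPy illustration under an isotropic-pixel-spacing assumption, not the authors' evaluation code.

```python
# Hedged sketch of the Dice coefficient and Hausdorff distance on binary
# masks; an isotropic pixel spacing converts pixel units to millimeters.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_mm(a: np.ndarray, b: np.ndarray, spacing_mm: float) -> float:
    """Symmetric Hausdorff distance between the two mask point sets, in mm."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    return d * spacing_mm

# Toy usage with two shifted square masks.
m1 = np.zeros((64, 64), bool)
m1[16:48, 16:48] = True
m2 = np.zeros((64, 64), bool)
m2[18:50, 16:48] = True
print(dice(m1, m2), hausdorff_mm(m1, m2, spacing_mm=0.5))
```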
Accuracy of the RAPSODI, CNNGeometric, and ProsRegNet approaches for aligning cancerous regions.
| Dataset | Registration | Dice coefficient | Hausdorff distance (mm) |
|---|---|---|---|
| Cohort 1 | RAPSODI | 0.624 (± 0.12) | 6.02 (± 2.78) |
| | CNNGeometric | 0.610 (± 0.11) | 5.70 (± 2.22) |
| | ProsRegNet | | |
| Cohort 2 | RAPSODI | 0.573 (± 0.13) | 5.42 (± 2.00) |
| | CNNGeometric | | 5.34 (± 2.14) |
| | ProsRegNet | 0.563 (± 0.14) | |
Registration results of ProsRegNet trained with only prostate masks and CNNGeometric trained with multi-modal image pairs.
| Dataset | Registration Approach | Dice Coefficient | Hausdorff Distance (mm) | Urethra Deviation (mm) | Landmark Error (mm) |
|---|---|---|---|---|---|
| Cohort 1 | ProsRegNet (masks only) | 0.979 (± 0.01) | 1.49 (± 0.44) | 2.98 (± 0.82) | 3.39 (± 0.68) |
| CNNGeometric (multi-modal) | 0.960 (± 0.01) | 2.42 (± 0.55) | 2.55 (± 0.73) | 2.79 (± 0.74) | |
| Cohort 2 | ProsRegNet (masks only) | 0.971 (± 0.01) | 1.61 (± 0.33) | 2.85 (± 1.34) | NA |
| CNNGeometric (multi-modal) | 0.910 (± 0.03) | 4.08 (± 1.14) | 2.82 (± 1.34) | NA | |
| Cohort 3 | ProsRegNet (masks only) | 0.976 (± 0.01) | 1.60 (± 0.38) | 3.57 (± 2.28) | NA |
| CNNGeometric (multi-modal) | 0.947 (± 0.01) | 3.00 (± 0.82) | 3.17 (± 2.07) | NA |