| Literature DB >> 29754382 |
Daniel Toth [1,2], Shun Miao [3], Tanja Kurzendorfer [4], Christopher A Rinaldi [5,6], Rui Liao [3], Tommaso Mansi [3], Kawal Rhode [6], Peter Mountney [3].
Abstract
PURPOSE: In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by incorporating preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints or by identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application.
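The record does not detail the paper's trained imitation-learning agent, but the general idea of agent-based rigid 3D/2D registration, iteratively choosing small translation/rotation actions that reduce an alignment score, can be sketched as a toy example. Here a hypothetical greedy policy uses an oracle distance to the target pose in place of the learned network; the action set and scoring are illustrative assumptions, not the authors' method.

```python
import math

# Rigid in-plane pose: (tx, ty, theta). At each step the agent applies the
# discrete action that most reduces a scalar alignment score. An oracle
# distance to the target pose stands in for the learned value function.
ACTIONS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0),
           (0, 0, 0.05), (0, 0, -0.05)]

def score(pose, target):
    """Alignment error: translation distance plus scaled rotation error."""
    return math.hypot(pose[0] - target[0], pose[1] - target[1]) \
        + 10 * abs(pose[2] - target[2])

def register(start, target, max_steps=200):
    pose = list(start)
    for _ in range(max_steps):
        # Evaluate every candidate action and greedily apply the best one.
        best = min(ACTIONS, key=lambda a: score(
            [pose[i] + a[i] for i in range(3)], target))
        nxt = [pose[i] + best[i] for i in range(3)]
        if score(nxt, target) >= score(pose, target):
            break  # no action improves alignment: converged
        pose = nxt
    return pose

final = register(start=(20, -15, 0.3), target=(0, 0, 0))
```

In the paper's setting the score would come from a trained network observing the fixed and moving images rather than from the ground-truth pose, which is only available during training.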
Keywords: Cardiac registration; Cardiac resynchronization therapy; Deep learning; Image fusion; Imitation learning
Year: 2018 PMID: 29754382 PMCID: PMC6096758 DOI: 10.1007/s11548-018-1774-y
Source DB: PubMed Journal: Int J Comput Assist Radiol Surg ISSN: 1861-6410 Impact factor: 2.924
Fig. 1Overview of the model-to-image registration method with an artificial agent
Fig. 2Architecture of the neural network that represents the artificial agent
Fig. 3Model extraction from CT images
Fig. 4 Relation of the fixed and moving images a before and b after registration, and c the overlay of the registered mask (green). Shown are the ROI (green box), the fixed-image landmark (blue cross), and the moving-image landmark (red cross)
TRE (mm) of the cross landmark initially (start) and after registration; the rightmost columns give TRE percentiles
| Method | Mean | StD. | 50% | 60% | 70% | 80% | 90% | 100% |
|---|---|---|---|---|---|---|---|---|
| Start (mm) | 22.80 | 10.50 | 21.42 | 25.22 | 30.03 | 33.50 | 36.96 | 47.88 |
| GO (mm) | 9.65 | 6.23 | 8.39 | 9.80 | 11.50 | 14.08 | 17.78 | 46.83 |
| GO+ (mm) | 10.49 | 5.97 | 9.42 | 10.87 | 12.49 | 15.00 | 18.54 | 38.02 |
| GC (mm) | 9.15 | 6.74 | 7.74 | 9.32 | 11.03 | 13.68 | 18.12 | 44.09 |
| GC+ (mm) | 7.80 | 6.30 | 5.91 | 7.51 | 9.30 | 11.55 | 16.43 | 48.37 |
| GI (mm) | 8.44 | 6.61 | 6.47 | 7.58 | 8.97 | 11.68 | 16.37 | 48.55 |
| GI+ (mm) | 6.79 | 4.75 | 5.63 | 6.50 | 7.48 | 8.84 | 11.77 | 46.14 |
| Manual (mm) | 6.48 | 5.60 | 4.93 | 5.97 | 7.49 | 8.70 | 11.37 | 40.82 |
| Agent (mm) | 2.92 | 2.22 | 2.34 | 2.80 | 3.45 | 4.23 | 5.76 | 16.11 |
Manual registration was performed for a single, randomly chosen perturbation in each case
Fig. 5 Evolution of a the root mean square (RMS) TRE and b the individual parameters in the case shown in Fig. 4. The parameter error curves correspond to horizontal translation, vertical translation, and in-plane rotation
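The RMS TRE plotted in Fig. 5a aggregates per-point registration errors by the standard root-mean-square formula; a one-function sketch (the sample distances are illustrative, not the paper's data):

```python
import math

def rms_tre(errors_mm):
    """Root-mean-square target registration error over per-point distances (mm)."""
    return math.sqrt(sum(e * e for e in errors_mm) / len(errors_mm))

# Two hypothetical per-point distances: sqrt((3^2 + 4^2) / 2) = sqrt(12.5)
rms = rms_tre([3.0, 4.0])
```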
Fig. 6 Cases showing different degrees of robustness. a–d Convergence of the center point through the agent's actions from various starting positions on the boundary of the purple circle. e–h Randomly chosen exemplary results. a P8: highly robust, b P12: highly robust, c P16: robust, d P15: least robust, e P8: success, f P12: success, g P16: success, h P15: failure
Fig. 7 Deviations of the results from the median. The points mark the outliers