Christine A Edwards1,2,3, Abhinav Goyal2,4, Aaron E Rusheen2,4, Abbas Z Kouzani1, Kendall H Lee2,3,4,5.
Abstract
Functional neurosurgery requires neuroimaging technologies that enable precise navigation to targeted structures. Insufficient image resolution of deep brain structures necessitates alignment to a brain atlas to indirectly locate targets within preoperative magnetic resonance imaging (MRI) scans. Indirect targeting through atlas-image registration is innately imprecise, increases preoperative planning time, and requires manual identification of anterior and posterior commissure (AC and PC) reference landmarks, which is subject to human error. As such, we created a deep learning-based pipeline that consistently and automatically locates, with submillimeter accuracy, the AC and PC anatomical landmarks within MRI volumes without the need for an atlas. Our novel deep learning pipeline (DeepNavNet) regresses from MRI scans to heatmap volumes centered on AC and PC anatomical landmarks to extract their three-dimensional coordinates with submillimeter accuracy. We collated and manually labeled the location of AC and PC points in 1128 publicly available MRI volumes used for training, validation, and inference experiments. Instantiations of our DeepNavNet architecture, as well as a baseline model for reference, were evaluated based on the average 3D localization errors for the AC and PC points across 311 MRI volumes. Our DeepNavNet model significantly outperformed the baseline and achieved a mean 3D localization error of 0.79 ± 0.33 mm and 0.78 ± 0.33 mm between the ground truth and the detected AC and PC points, respectively. In conclusion, the DeepNavNet model pipeline provides submillimeter accuracy for localizing AC and PC anatomical landmarks in MRI volumes, enabling improved surgical efficiency and accuracy.
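The core idea of the pipeline, regressing from an MRI volume to heatmap volumes peaked at each landmark, can be illustrated by constructing the Gaussian target volumes such a model would be trained against. This is a minimal NumPy sketch; the volume shape, sigma, and voxel coordinates below are invented for illustration and are not the paper's actual configuration.

```python
import numpy as np

def make_heatmap(shape, center, sigma=3.0):
    """Build a 3D Gaussian heatmap peaked at a landmark's voxel coordinates.

    shape  -- (X, Y, Z) volume dimensions in voxels
    center -- (cx, cy, cz) landmark location in voxel coordinates
    sigma  -- Gaussian spread in voxels (illustrative value)
    """
    grids = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    sq_dist = sum((g - c) ** 2 for g, c in zip(grids, center))
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

# One target channel per landmark (AC and PC); coordinates are hypothetical.
ac_map = make_heatmap((64, 64, 40), (32, 30, 20))
pc_map = make_heatmap((64, 64, 40), (32, 42, 20))
target = np.stack([ac_map, pc_map], axis=0)  # shape (2, 64, 64, 40)
```

The network then learns to reproduce these smooth targets, and the landmark coordinate is recovered from the predicted heatmap's peak.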
Keywords: deep brain stimulation; deep learning; human-machine teaming; landmark localization; neuroimaging; neuronavigation; neurosurgery planning
Year: 2021 PMID: 34220429 PMCID: PMC8245762 DOI: 10.3389/fnins.2021.670287
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1 Example of an MRI volume annotated using 3D Slicer software. The volume is displayed in Right-Anterior-Superior (RAS) coordinate space. The annotator manually adjusts fiducial markers overlaid on the AC and PC points from linked 2-D views. For this subject, the AC and PC points are both visible in the same sagittal slice, whereas only the AC point is visible in the axial and coronal slices shown here. Depending on the orientation of the brain within the MRI volume, the AC and PC points are sometimes both visible within a single sagittal and a single axial slice. The 3D coordinates for each point are displayed in millimeters in the Cartesian X-Y-Z space. The labels and corresponding 3D coordinates are saved to a fiducial comma-separated values (.fcsv) file.
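The .fcsv fiducial files mentioned in the caption are plain CSV with `#` comment headers. A minimal parser is sketched below, assuming the common 3D Slicer column layout (x, y, z in fields 2-4, label in field 12); the sample coordinates are invented for illustration.

```python
import csv
import io

def read_fcsv(text):
    """Parse a 3D Slicer fiducial (.fcsv) file into {label: (x, y, z)}.

    Assumes the common Slicer layout: comment lines starting with '#',
    then rows of id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,assoc.
    """
    points = {}
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank lines and header comments
        x, y, z = map(float, row[1:4])
        label = row[11] if len(row) > 11 else row[0]
        points[label] = (x, y, z)
    return points

# Hypothetical file contents; the coordinates are made up.
sample = """# Markups fiducial file version = 4.11
# CoordinateSystem = RAS
# columns = id,x,y,z,ow,ox,oy,oz,vis,sel,lock,label,desc,associatedNodeID
F-1,0.2,1.9,-4.5,0,0,0,1,1,1,0,AC,,
F-2,0.1,-24.1,-2.3,0,0,0,1,1,1,0,PC,,
"""
fiducials = read_fcsv(sample)  # {'AC': (0.2, 1.9, -4.5), 'PC': (0.1, -24.1, -2.3)}
```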
DeepNavNet training experiments.
| Model | RMSE loss | Huber loss | L1 regularization | L2 regularization | RMSprop | Adam |
| 1 | ✓ | | ✓ | | ✓ | |
| 2 | ✓ | | | ✓ | ✓ | |
| 3 | | ✓ | ✓ | | ✓ | |
| 4 | | ✓ | | ✓ | ✓ | |
| 5 | ✓ | | ✓ | | | ✓ |
| 6 | ✓ | | | ✓ | | ✓ |
| 7 | | ✓ | ✓ | | | ✓ |
| 8 | | ✓ | | ✓ | | ✓ |
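Assuming the model numbering follows the loss-regularizer-optimizer labels used in Table 2, the eight training experiments form a 2 × 2 × 2 hyperparameter grid, which can be enumerated as:

```python
from itertools import product

losses = ["RMSE", "Huber"]
regularizers = ["L1", "L2"]
optimizers = ["RMSprop", "Adam"]

# Enumerate the 2 x 2 x 2 grid in the order implied by Table 2's model
# labels: optimizer varies slowest, then loss, then regularizer.
experiments = {
    i + 1: f"{loss}-{reg}-{opt}"
    for i, (opt, loss, reg) in enumerate(product(optimizers, losses, regularizers))
}
# experiments[1] == "RMSE-L1-RMSprop", experiments[8] == "Huber-L2-Adam"
```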
FIGURE 2 Training and validation of the DeepNavNet model using the NiftyNet framework. Each input data volume is 256 × 256 × 150 voxels with 1 mm × 1 mm × 1 mm (1 mm³) voxel resolution. Key parameters from a customized NiftyNet configuration file are displayed on the right. These parameters were used to create DeepNavNet Model #1 (see Table 1).
FIGURE 3 DeepNavNet heatmap regression using the NiftyNet framework for test data.
FIGURE 4 Post-processing and overlay visualizations of a predicted AC-PC heatmap volume. (A) The 3D coordinates of the AC and PC points are extracted from each predicted heatmap volume. (B) Visualizations of a post-processed predicted heatmap overlaid on its corresponding MRI scan. The radius of the spheres is approximately 7.5 mm.
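The caption does not spell out the exact coordinate-extraction step; one common post-processing choice for heatmap regression is an intensity-weighted centroid over the thresholded heatmap, which is more robust to noise than a plain argmax. A sketch under that assumption, checked against a synthetic Gaussian blob:

```python
import numpy as np

def extract_landmark(heatmap, threshold=0.5):
    """Recover a 3D landmark coordinate from a predicted heatmap volume.

    Voxels below threshold * max are zeroed, then an intensity-weighted
    centroid of the remaining voxels is returned.
    """
    h = np.where(heatmap >= threshold * heatmap.max(), heatmap, 0.0)
    coords = np.argwhere(h > 0)                 # (n, 3) voxel indices
    weights = h[h > 0]
    return (coords * weights[:, None]).sum(axis=0) / weights.sum()

# Synthetic check: a Gaussian blob centered at voxel (30, 40, 25).
idx = np.indices((64, 64, 50))
blob = np.exp(-((idx[0] - 30) ** 2 + (idx[1] - 40) ** 2 + (idx[2] - 25) ** 2) / 18.0)
center = extract_landmark(blob)   # approximately [30., 40., 25.]
```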
Average 3D AC and PC localization errors of the baseline and DeepNavNet models, measured in millimeters.
| Model (loss-regularization-optimizer) | AC error, mean ± SD (max) | PC error, mean ± SD (max) |
| 0 (Baseline) | 1.22 ± 0.44 (2.57) | 1.00 ± 0.45 (2.31) |
| 1 (RMSE-L1-RMSprop) | | |
| 2 (RMSE-L2-RMSprop) | 0.84 ± 0.35 (1.91) | 0.80 ± 0.36 (1.84) |
| 3 (Huber-L1-RMSprop) | 0.93 ± 0.40 (2.08) | 1.00 ± 0.38 (2.14) |
| 4 (Huber-L2-RMSprop) | 0.81 ± 0.35 (1.85) | 0.92 ± 0.39 (2.01) |
| 5 (RMSE-L1-Adam) | 0.87 ± 0.38 (2.05) | 0.86 ± 0.35 (1.87) |
| 6 (RMSE-L2-Adam) | 1.01 ± 0.43 (2.08) | 1.03 ± 0.42 (2.07) |
| 7 (Huber-L1-Adam) | X | X |
| 8 (Huber-L2-Adam) | X | X |
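The per-model errors above are 3D Euclidean distances in millimeters between predicted and ground-truth points, summarized as mean ± SD (max). A minimal sketch of that computation, using hypothetical coordinates:

```python
import numpy as np

def localization_errors(pred, truth):
    """Per-scan 3D Euclidean error (mm) between predicted and ground-truth points.

    pred, truth -- arrays of shape (n_scans, 3) in millimeter coordinates.
    Returns (mean, std, max), matching the table's "mean ± SD (max)" form.
    """
    err = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return err.mean(), err.std(), err.max()

# Toy example: two scans with invented AC coordinates.
truth = np.array([[0.0, 1.9, -4.5], [0.1, 2.0, -4.4]])
pred = truth + np.array([[0.5, 0.0, 0.0], [0.0, 0.0, 1.0]])
mean, sd, worst = localization_errors(pred, truth)  # errors: 0.5 mm and 1.0 mm
```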
FIGURE 5 Boxplots of localization errors, measured in millimeters, achieved by the baseline and DeepNavNet models 1-6. Statistical analysis of the baseline and Models 1-5 was conducted using all of the test data (N = 311), whereas Model 6 was evaluated using a subset (N = 91). Box whiskers span 1.5 times the interquartile range (IQR) in both directions. The 95% confidence interval spans 1.57 × IQR/√n around the median notch for each box. Non-overlapping notches indicate significant differences between models, and asterisks indicate p-values < 0.001 relative to the baseline error. (A) Boxplots for AC localization errors. (B) Boxplots for PC localization errors.
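The notch interval quoted in the caption (median ± 1.57 × IQR/√n, the conventional boxplot-notch definition) can be computed directly:

```python
import numpy as np

def notch_interval(errors):
    """Boxplot notch bounds around the median: median +/- 1.57 * IQR / sqrt(n).

    Non-overlapping notches between two boxes suggest a significant
    difference in medians, as used in Figure 5.
    """
    errors = np.asarray(errors)
    q1, med, q3 = np.percentile(errors, [25, 50, 75])
    half = 1.57 * (q3 - q1) / np.sqrt(errors.size)
    return med - half, med + half
```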
Number of outliers for each model, and the standard deviation and maximum error (in millimeters) when those outliers are included, for the baseline and our DeepNavNet models.
| Model (loss-regularization-optimizer) | AC: outliers, SD (max) | PC: outliers, SD (max) |
| 0 (Baseline) | 11, 5.01 (62.59) | 10, 3.45 (49.76) |
| 1 (RMSE-L1-RMSprop) | | |
| 2 (RMSE-L2-RMSprop) | 2, 0.38 (2.94) | 1, 0.36 (1.91) |
| 3 (Huber-L1-RMSprop) | 4, 0.42 (2.38) | 0, 0.38 (2.14) |
| 4 (Huber-L2-RMSprop) | 4, 0.39 (2.31) | 3, 0.41 (2.74) |
| 5 (RMSE-L1-Adam) | 1, 0.39 (2.49) | 4, 0.37 (2.30) |
| 6 (RMSE-L2-Adam) | 6, 11.39 (79.40) | 4, 8.09 (72.65) |
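The table does not state the outlier rule; a standard choice consistent with the boxplot whiskers in Figure 5 is the Tukey fence at 1.5 × IQR beyond the quartiles, sketched here with invented error values:

```python
import numpy as np

def split_outliers(errors):
    """Flag errors beyond the Tukey fences (1.5 * IQR past Q1/Q3).

    Returns (inliers, outliers); the table reports the outlier count
    alongside the SD and maximum error with the outliers included.
    """
    errors = np.asarray(errors)
    q1, q3 = np.percentile(errors, [25, 75])
    iqr = q3 - q1
    mask = (errors < q1 - 1.5 * iqr) | (errors > q3 + 1.5 * iqr)
    return errors[~mask], errors[mask]

# Hypothetical per-scan errors in mm; one gross mislocalization.
errors = np.array([0.7, 0.8, 0.9, 1.0, 1.1, 62.6])
inliers, outliers = split_outliers(errors)  # outliers: [62.6]
```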
FIGURE 6 Training and validation learning curves for DeepNavNet Model 1.
FIGURE 7 TensorBoard training learning curves for DeepNavNet Models 1-8 using the hyperparameters described in Table 1.