Literature DB >> 30258685

Real-time corneal segmentation and 3D needle tracking in intrasurgical OCT.

Brenton Keller1, Mark Draelos1, Gao Tang2, Sina Farsiu1,3, Anthony N Kuo1,3, Kris Hauser4, Joseph A Izatt3.   

Abstract

Ophthalmic procedures demand precise surgical instrument control in depth, yet standard operating microscopes supply limited depth perception. Current commercial microscope-integrated optical coherence tomography partially meets this need with manually-positioned cross-sectional images that offer qualitative estimates of depth. In this work, we present methods for automatic quantitative depth measurement using real-time, two-surface corneal segmentation and needle tracking in OCT volumes. We then demonstrate these methods for guidance of ex vivo deep anterior lamellar keratoplasty (DALK) needle insertions. Surgeons using the output of these methods improved their ability to reach a target depth, and decreased their incidence of corneal perforations, both with statistical significance. We believe these methods could increase the success rate of DALK and thereby improve patient outcomes.

Keywords:  (100.0100) Image processing; (110.4500) Optical coherence tomography; (170.1610) Clinical applications

Year:  2018        PMID: 30258685      PMCID: PMC6154196          DOI: 10.1364/BOE.9.002716

Source DB:  PubMed          Journal:  Biomed Opt Express        ISSN: 2156-7085            Impact factor:   3.732


1. Introduction

Optical coherence tomography (OCT) produces micrometer-scale tomographic images of both the anterior and posterior segments of the human eye [1]. Earlier studies showed the impact of OCT imaging during surgery, or for examination of patients under anesthesia, using handheld systems separate from the operating microscope [2, 3]. Recently, research groups and commercial entities have integrated OCT systems into surgical operating microscopes to provide direct visualization of ophthalmic surgery in real time [4–16]. Early versions of microscope-integrated OCT (MIOCT) systems were restricted to real-time two-dimensional imaging [4, 6, 7, 9, 14–16], but research prototype systems can now acquire, process, and render three-dimensional OCT data in real time [12, 17–23]. Surgeons can view the OCT images in their operating microscopes via heads-up displays (HUDs) [10, 11, 14–16, 24]. MIOCT has demonstrated its utility both for visualizing ophthalmic surgical procedures and for enhancing surgeon performance in ex vivo depth-based tasks [4, 12, 25–32].

Deep anterior lamellar keratoplasty (DALK) is a type of corneal transplant in which the corneal stroma and epithelium are replaced while the host Descemet’s membrane (DM) and endothelium remain. This procedure differs from penetrating keratoplasty (PKP), in which the entire cornea, including the DM and endothelium, is replaced. Compared to PKP, DALK has a lower chance of graft rejection (the antigenic endothelium is not transplanted), increased predicted graft survival [33], and fewer post-operative complications [34], with no significant difference in visual acuity [33, 34]. One of the more common techniques for performing DALK is the “big bubble” technique [35]. In this method, the surgeon inserts a hypodermic needle into the corneal stroma and advances the needle, following the curvature of the cornea, to the apex. At the apex, the surgeon injects an air bubble, which ideally separates the DM from the stroma.
The surgeon can then easily resect the epithelium and stroma without harming the endothelium. The main drawback of big bubble DALK is that positioning the needle and injecting the air bubble are both extremely difficult. Inserting the needle too superficially prevents the bubble from separating the proper deeper layers. Inserting the needle too deeply increases the risk of perforating the DM, in which case the surgeon usually must revert to PKP. As such, failure rates for separating the stroma from the DM range from 44% to 54% [33, 36, 37]. In a prior study, MIOCT was used to monitor the penetration depth of the needle in ex vivo big bubble DALK trials. That study determined that the success of big bubble formation was highly dependent on the final needle penetration depth as a percentage of corneal thickness [38]. However, in that study needle depth was determined via manual segmentation of MIOCT data after the procedure. The purpose of this work is to automate needle depth estimation and provide real-time information about needle progress to the surgeon during the procedure. We present methods for segmenting two corneal surfaces and tracking a needle’s location in real time for use in guiding big bubble DALK.

1.1. Related work

Many methods for both corneal and retinal segmentation exist in the literature [39–54], and some are capable of processing images in real time (i.e., keeping up with a data acquisition rate of at least several frames per second) [49, 50]. The segmentation method presented in this work is most closely related to the graph theory approach used by LaRocca et al. [43], but differs in that we acquire images over a larger field of view and use a swept-source laser based MIOCT system. Furthermore, we integrate the segmentation code into our acquisition software, impose a real-time processing constraint, and address the problem of the needle and its OCT “shadow” corrupting the segmentation. One benefit of intrasurgical OCT is that the surgeon can observe dynamic tissue-tool interactions. However, this is only possible if the tool is visible in the OCT image/volume. For two-dimensional OCT, this requires the surgeon or an operator to manually track the tool with the OCT B-scan location [55]. In three-dimensional OCT, the surgeon or operator needs to determine which B-scan(s) within the volume contain the tool. To address this problem, several automated tool tracking solutions have been proposed in the literature. El-Haddad et al. tracked the lateral position of the tool by placing fiducial markers on the base of the tool and using a stereo camera pair to track the tool tip [56]. Viehland et al. performed tool tracking in intrasurgical OCT with software using multiple volumetric renderings, but assumed little to no contact between the tissue and tool [57]. Zhou et al. developed two methods, a morphological approach and a 2D deep-learning approach, to segment a needle above the cornea of pig eyes. Gesser et al. developed a 3D convolutional neural network to determine the 6D pose of a custom-designed marker, which could be attached to a tool [58].
A hardware-based approach, specifically designed for DALK, involved placing the OCT fiber in the needle to provide the surgeon with an M-mode scan, which conveyed depth information about the needle as an insertion was performed [59]. The needle tracking approach described here estimates the needle tip position in three dimensions by fitting a 3D model to automatically detected needle voxels, which provides five-degree-of-freedom tracking. Since our intended use case, big bubble DALK, requires the needle to puncture the cornea, our tracking approach must work even when the needle is physically altering the tissue and interfering with the OCT imaging light.

2. Methods

2.1. System description

The optical setup of our MIOCT system was described in detail in a previous publication [12]. Briefly, we used a 100 kHz swept frequency laser centered at 1060 nm. The optical signal detection chain included a balanced photoreceiver (Thorlabs; Newton, NJ) and a 1.8 GS/s digitizer (Alazar; Quebec, Canada). The system had a peak sensitivity of 102 dB and a −6 dB falloff distance of 4.9 mm. We used a raster scan acquisition pattern with volume dimensions of 5.47 mm × 12.0 mm × 8.0 mm, consisting of 2205 spectral samples/A-scan, 688 A-scans/B-scan (500 usable A-scans/B-scan), and 96 B-scans/volume. All computation was performed on a 64-bit Windows 10 machine with an Intel i7-6850K processor and an NVIDIA GeForce GTX Titan X graphics card. The segmentation and tracking algorithms ran on the CPU whereas OCT processing and display code ran on the GPU.

2.2. Requirements and assumptions

We designed the segmentation and tracking algorithms to minimize perceived image latency for the system operator, given the acquisition rate constraints of our system. In our setup, OCT volumes (each comprising 96 B-scans) were processed on the GPU in parallel, and the partial volume display was updated in groups of 32 sequential B-scans while the next group was being acquired. B-scan groups of 32 (acquired every 688 × 32 ÷ 100,000 s ≈ 220 ms at our 100 kHz A-scan rate) were chosen to exploit the parallel processing capabilities of the GPU while limiting the partial volume rendering latency (i.e., the maximum time between data acquisition and display) to less than 1/3 second (220 ms acquisition time plus approximately 50 ms processing/display time). This real-time constraint required processing 32 B-scans of raw interferograms into images, segmenting the images, and tracking the needle (once per volume) before the next group of 32 B-scans was acquired. This left 170 ms to segment B-scans and track the needle. Because of this aggressive time constraint, we segmented B-scans in parallel but were able to segment only every other B-scan in the volume (16 of the 32 B-scans in each group). Additionally, we assumed that the entire cornea would be visible and roughly laterally centered in the volume and that the needle would be hyper-reflective.
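The latency budget above follows directly from the stated acquisition parameters (100 kHz A-scan rate, 688 A-scans/B-scan, groups of 32 B-scans). As a check, the arithmetic can be restated in a few lines; this is only a worked version of the numbers in the text, not project code:

```python
# Timing budget for real-time segmentation, from the acquisition
# parameters stated in the text.
A_SCAN_RATE_HZ = 100_000     # 100 kHz swept-source laser
A_SCANS_PER_BSCAN = 688
GROUP_SIZE = 32              # B-scans per display update

# Time to acquire one group of 32 B-scans, in milliseconds.
group_acquisition_ms = A_SCANS_PER_BSCAN * GROUP_SIZE / A_SCAN_RATE_HZ * 1000

# Approximate GPU processing + display time quoted in the text.
processing_display_ms = 50

# Remaining budget for segmentation and needle tracking.
segmentation_budget_ms = group_acquisition_ms - processing_display_ms

print(round(group_acquisition_ms))    # ≈ 220 ms per group
print(round(segmentation_budget_ms))  # ≈ 170 ms for segmentation/tracking
```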

2.3. Algorithmic description

A flow chart of the entire segmentation and needle tracking process is shown in Fig. 1 . The following subsections describe the methods used for segmentation and tracking in the order in which they were performed.
Fig. 1

Flow chart of segmentation and tracking to find needle penetration depth. The acquisition software acquired volumes of 96 B-scans in three groups of 32 B-scans. B-scan segmentation of each group occurred during the acquisition of the next group.


2.3.1. Real-time segmentation

We performed segmentation in parallel by processing B-scans independently on separate CPU hardware threads using the OpenMP parallel computing library [60]. This section describes the method used to segment each individual B-scan. We began the segmentation process by low-pass filtering the B-scan to reduce noise and prevent aliasing. Our filter was a 3×11 Gaussian filter with a standard deviation of 1.0 in both x and y. We then downsampled the image by a factor of two in the lateral dimension and a factor of five in the axial dimension. We determined the minimum amount of downsampling required to segment 16 B-scans in under 170 ms, while still leaving time to perform needle tracking. We downsampled the axial dimension more than the lateral dimension because our system’s axial resolution was greater than its lateral resolution. After blurring and downsampling, we subtracted the average A-scan from each A-scan in the image to remove horizontal line artifacts [43]. Next, we obtained four gradient images, two horizontal images and two vertical images, by convolving the image with [−1, −1, −1, −1, −1, 0, 1, 1, 1, 1, 1], [−1, −1, −1, −1, −1, 0, 1, 1, 1, 1, 1] ⊤, [1, 1, 1, 1, 1, 0, −1, −1, −1, −1, −1], and [1, 1, 1, 1, 1, 0, −1, −1, −1, −1, −1]⊤ filters and setting negative values to zero. We then normalized the gradient images between zero and one. These gradient images were used to detect black to white vertical (top down) transitions, white to black vertical transitions, black to white horizontal (left to right) transitions, and white to black horizontal transitions. We constructed a graph from the gradient images by following the procedures outlined in previous publications [42–44, 52]. Pixels in the image represented nodes, or vertices, in the graph and weighted edges connected neighboring pixels. In our graph, each pixel was connected to the pixels above, above and right, right, below and right, and below it. 
A starting node was connected to all nodes in the image’s leftmost column, and an ending node was connected to all the nodes in the image’s rightmost column. Weights between starting/ending nodes and other nodes were set to the minimum possible weight. Equation (1) [42] was used to compute weights between neighboring nodes,

w_ij = [2 − (G_i + G_j) + w_min] · d_ij,   (1)

where w_ij was the edge weight connecting nodes i and j, G_i and G_j were the gradient values at nodes i and j, d_ij was the physical distance between nodes i and j, and w_min was a small positive constant that kept edge weights positive. When searching for the epithelial surface, we used the black to white horizontal gradient image when node i was below node j, the black to white vertical gradient image when nodes i and j were at the same height, and the white to black horizontal gradient image when node i was above node j. Searching for the endothelial surface required that we change the gradients used to create the graph. As such, we used the white to black horizontal gradient image when node i was below node j, the white to black vertical gradient image when nodes i and j were at the same height, and the black to white horizontal gradient image when node i was above node j. To constrain the search space, we found a rough estimate of the epithelial surface by using the black to white vertical gradient image and fitting a second order polynomial to the maximum gradient in each column using iteratively re-weighted least squares [61, 62]. Shrinking and expanding this estimate in the direction normal to the estimate by 0.4 and 0.8 times the mean central corneal thickness of 537 µm [63] provided us with minimum and maximum height search area constraints for the epithelial surface (magenta lines in Fig. 2(B)). We removed nodes in the graph above/below the maximum/minimum height restrictions and used the minimum height estimate to constrain the starting and ending point of our graph search.
The minimum and maximum height constraints ensured we segmented only the epithelial surface, while also decreasing the search duration. Because the constraints were only estimates, we removed edges between the starting/ending node and leftmost/rightmost column outside a 100 µm window around the intersection of the minimum height estimate with the leftmost and rightmost columns. We found the epithelial surface by searching for the shortest path from the starting node to the ending node using Dijkstra’s algorithm [64].
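The graph search above can be sketched compactly. The following is a simplified, illustrative implementation, not the authors' code: it uses a single normalized gradient image, a reduced three-way connectivity (rightward moves only, instead of the five-way connectivity in the text), unit pixel spacing, and the [42]-style edge weight 2 − (G_i + G_j) + w_min, with Dijkstra's algorithm implemented via Python's heapq:

```python
import heapq
import itertools
import numpy as np

def segment_surface(grad, w_min=1e-5):
    """Trace one surface through a normalized gradient image (values in
    [0, 1]) as the shortest path from a virtual start node left of the
    image to a virtual end node right of the image. Edges are cheap
    where the gradient is high, so the cheapest path follows the edge."""
    rows, cols = grad.shape
    START, END = 'S', 'E'
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    dist, prev = {START: 0.0}, {}
    pq = [(0.0, next(tie), START)]

    def neighbors(node):
        if node == START:            # zero-weight edges into the leftmost column
            for r in range(rows):
                yield (r, 0), 0.0
            return
        r, c = node
        if c == cols - 1:            # zero-weight edge out of the rightmost column
            yield END, 0.0
            return
        for dr in (-1, 0, 1):        # up-right, right, down-right
            if 0 <= r + dr < rows:
                j = (r + dr, c + 1)
                yield j, 2.0 - (grad[r, c] + grad[j]) + w_min

    while pq:                        # Dijkstra's algorithm
        d, _, node = heapq.heappop(pq)
        if node == END:
            break
        if d > dist.get(node, np.inf):
            continue
        for j, w in neighbors(node):
            nd = d + w
            if nd < dist.get(j, np.inf):
                dist[j], prev[j] = nd, node
                heapq.heappush(pq, (nd, next(tie), j))

    # Walk back from the end node to recover the surface height per column.
    path, node = [], prev[END]
    while node != START:
        path.append(node)
        node = prev[node]
    return [r for r, _ in reversed(path)]
```

On a toy gradient image with one bright row, the recovered path follows that row; the real implementation additionally applies the height constraints and direction-specific gradient images described above.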
Fig. 2

Illustration of corneal segmentation. (A) Original image obtained from human cadaver corneal sample. (B) Epithelial segmentation (orange) with epithelial constraints (magenta). (C) Epithelial and endothelial segmentation (orange) with endothelial constraints (magenta).

After segmenting the epithelial surface, we fit a smoothing spline [65] to the result. We offset the epithelial surface spline downward by 0.8 and 2.3 times the mean central corneal thickness [63] and removed vertices above/below these lines to establish a search area constraint for the endothelial surface. The search was allowed to start/finish within 100 µm of the intersection of the minimum height constraint with the leftmost/rightmost columns or the bottom row. We found the endothelial surface by searching for the shortest path from the starting node to the ending node using Dijkstra’s algorithm (Fig. 2(C)).

2.3.2. Needle tracking

Once the entire volume was acquired and segmented, we searched for the needle. Because we assumed the needle to be hyper-reflective, we reduced our search space to include only bright voxels within the volume. Thus, we created a maximum intensity projection (MIP) of a DC-subtracted volume, in which each B-scan had its average A-scan subtracted. This DC subtraction helped to suppress bright horizontal line artifacts introduced by the needle. We applied a threshold of 210 (pixel values ranged from 0 to 255) to the MIP and recorded the depth of points in the MIP above the threshold. Then, we performed connected component (CC) labeling [66] on the thresholded depth map. We considered pixels connected if they were within 100 µm of each other laterally and within 50 µm in depth. Choosing this distance-based connectivity definition helped separate the needle from other bright points in the image, such as the corneal apex, and accommodated small gaps in the needle image. We then filtered the CCs using their physical dimensions. The needle is much longer than it is wide, so we considered only CCs with an appropriate second principal component. Using the equation for the variance of a uniform distribution, w = √(12σ²), where σ² was the variance of the current CC along its second principal component and w was the resulting width estimate, we checked whether the width of the CC was within an acceptable range of the known needle width (410 µm for the 27-gauge needle we used). For our system and scan dimensions, we assumed that any CC between 0.65 and 1.25 times the known needle width could be the needle. It was necessary to allow for a range of widths for two reasons: (1) the MIP was acquired from a rolling shutter, and needle movement could increase or decrease the apparent width of the needle; and (2) thresholding could eliminate actual needle pixels and thereby reduce the width.
If any of the CCs matched the needle width criterion, we extrapolated the first principal component to the MIP borders to estimate the lateral position of the needle at the edge of the image (base) and of the needle tip. The base was estimated as the point in the CC closest to a border, and the tip was estimated as the point in the CC furthest from the base estimate. CCs with a base more than 300 µm away from a border were discarded, as it was highly unlikely that the needle’s base was not near the edge of the volume. Figure 3 shows an example of the process used to determine the lateral position of the base and tip of a needle in a volume. After determining the lateral position of the base and tip of the needle, we obtained depth estimates for the needle base and tip to seed a 3D model fit. To accomplish this, we looked at each pixel in the MIP identified as the needle, found the closest point on the line connecting base to tip, and recorded the depth and percent distance from the base. We fit a robust first order polynomial mapping percent distance from the base to depth. The depths at the base and tip were calculated by evaluating this polynomial at 0% and 100%.
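The width criterion can be made concrete. The sketch below (illustrative, and the function name is ours, not the authors') recovers a CC's apparent width from the variance along its second principal component, using the uniform-distribution relation var = w²/12 so that w = √(12 · var):

```python
import numpy as np

def needle_width_check(pixels_xy, needle_width_um=410.0, lo=0.65, hi=1.25):
    """Return True if a connected component of bright MIP pixels has an
    apparent width consistent with the needle. pixels_xy is an (N, 2)
    array of pixel coordinates in micrometers."""
    centered = pixels_xy - pixels_xy.mean(axis=0)
    # PCA via eigendecomposition of the 2x2 covariance matrix;
    # np.linalg.eigh returns eigenvalues in ascending order, so the
    # first eigenvalue is the variance along the second principal
    # component (across the needle).
    cov = np.cov(centered.T)
    evals, _ = np.linalg.eigh(cov)
    second_pc_var = evals[0]
    # Uniform distribution of width w has variance w**2 / 12.
    width = np.sqrt(12.0 * second_pc_var)
    return lo * needle_width_um <= width <= hi * needle_width_um
```

For a long strip of points roughly 410 µm across, the recovered width falls inside the 0.65–1.25× acceptance band; a strip 1000 µm across is rejected.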
Fig. 3

Representation of the process used to determine an estimate of the needle base and tip. (A) DC-subtracted maximum intensity projection (MIP). (B) Thresholded depth map. (C) Six largest connected components from the depth map. Only the green connected component fit the width criterion. (D) Needle base estimate (green circle) and needle tip estimate (red circle) based on the intersection of the line formed by the first principal component with the borders of the image (blue line). Pixels identified as the needle are orange. Best viewed in color.

We fit a 3D model to the tool instead of using our tip estimate because the 3D model utilized all needle information available in the volume and therefore was more robust to noise. The additional step of fitting a 3D model did not require significant processing time (Section 3.2). We used needle voxels identified in the volume to fit the 3D model. The needle pixels found from the MIP provided us with the lateral locations of potential needle voxels. At each needle pixel location, we searched the A-scan for voxels brighter than 180 (voxel values ranged from 0 to 255). However, this allowed any bright voxel in the A-scan to be added to the set of needle voxels, which interfered with our model fitting. To prevent erroneous voxels from being added, we used the polynomial that mapped percent distance from the base to depth, computed in our previous step. We required the distance in depth between any potential needle voxel and the calculated depth at that needle pixel location to be less than the known needle radius. Once we identified all the needle voxels, we used the Iterative Closest Point (ICP) algorithm [67] to fit a 3D model of the needle. Our three-dimensional model was a hollow semicylinder with outer diameter equal to that of the needle, inner diameter equal to 3/4 of the needle diameter, and length equal to the needle length.
ICP determined the rotation and translation from needle model to needle voxels that minimized the sum of distances between all needle voxels and their corresponding closest point on the model. The closest point on the model to a needle voxel was computed using the procedure outlined by Barbier and Galin [68]. We used the needle base and tip location estimates computed from the depth map to seed ICP with an initial transform. The output of ICP was a transform that provided the needle’s yaw, pitch, and tip position. We did not find a meaningful roll angle for the needle due to its rotational symmetry. Although yaw and pitch were not used in our ex vivo study, yaw could be used to align the scan’s fast axis with the needle to provide a higher resolution cross section to the surgeon.
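One way the base/tip estimates could seed ICP is by converting the base-to-tip vector into yaw and pitch angles plus a tip translation, with roll left undefined by the needle's rotational symmetry. The helper below is hypothetical (the name and angle conventions are ours); it is shown only to illustrate the five recoverable degrees of freedom:

```python
import numpy as np

def seed_pose(base, tip):
    """Derive an initial pose (yaw, pitch, tip translation) from the
    base and tip points recovered from the depth map. Coordinates are
    (x, y, z) with x/y lateral and z depth; roll is unobservable for a
    rotationally symmetric needle, leaving five degrees of freedom."""
    base = np.asarray(base, float)
    tip = np.asarray(tip, float)
    d = tip - base
    yaw = np.degrees(np.arctan2(d[1], d[0]))                     # rotation in the en face plane
    pitch = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))   # dive angle into the tissue
    return yaw, pitch, tip
```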

2.3.3. Needle shadow segmentation correction

Correctly segmenting the two corneal surfaces required additional techniques for those B-scans that also contained the needle. Our segmentation method used image gradients to delineate surfaces, but the presence of a hyper-reflective needle created stronger gradients than did the corneal surfaces. This caused the segmentation to include the needle when tracing out the corneal surfaces (Fig. 4(A)). Furthermore, the needle cast a shadow, obscuring anything below it. To address these problems, after locating the needle we corrected the segmentation of B-scans where it was present. For each segmented surface, we created a two-dimensional height map of the segmentation. Then, using the output of our needle tracking in the en face image, we marked pixels to correct by inflating the needle 1.0 mm along its principal axis and by 0.75 mm in the orthogonal direction. This inflation was necessary because our graph-based segmentation failed at points around the needle (white arrow, Fig. 4(A)). Next, we performed a trial inpainting [69, 70] of marked pixels in the height map (green pixels, Fig. 4(D)). Inpainting uses information from surrounding pixels to compute the value of pixels marked to be inpainted. If marked pixels did not change significantly in the trial inpainting, we concluded they contained valid information that could be used to more accurately update the value of pixels which did change significantly. Therefore, only marked pixels that changed in height by more than 150 µm (epithelial surface) or 50 µm (endothelial surface) were inpainted in the final inpainting (green pixels, Fig. 4(E)). From the final inpainted height map (Fig. 4(F)), we reconstructed the segmentation for each B-scan. This reconstruction preserved the connectivity and smoothness constraints imposed by the graph search within single B-scans, and inpainting enforced smoothness and connectivity constraints across B-scans.
We found inpainting performed best when the needle was aligned with the scan’s fast axis. Because our tracking method determined needle yaw, our system was capable of adjusting the scan’s rotation to align the scan’s fast axis with the needle.
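The two-pass correction logic can be illustrated as follows. This sketch substitutes a simple iterative neighbor-averaging (diffusion) fill for the inpainting method of [69, 70], so it is a stand-in rather than the authors' implementation; the thresholded second pass mirrors the trial/final inpainting described above:

```python
import numpy as np

def diffusion_inpaint(height, mask, iters=500):
    """Fill masked height-map pixels by repeated neighbor averaging, a
    simple diffusion stand-in for a real inpainting method. Unmasked
    pixels are held fixed and act as boundary conditions."""
    h = height.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
               np.roll(h, 1, 1) + np.roll(h, -1, 1)) / 4.0
        h[mask] = avg[mask]
    return h

def correct_surface(height, needle_mask, change_threshold_um):
    """Two-pass correction: trial-inpaint all marked pixels, keep the
    original value wherever the trial changed it by less than the
    threshold (it likely held valid surface information), and
    re-inpaint only the pixels that changed substantially."""
    trial = diffusion_inpaint(height, needle_mask)
    bad = needle_mask & (np.abs(trial - height) > change_threshold_um)
    return diffusion_inpaint(height, bad)
```

On a synthetic height map that is a tilted plane with a needle-shadow blob zeroed out, the corrected map recovers the plane inside the blob.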
Fig. 4

Example needle shadow segmentation correction. (A) B-scan with uncorrected segmentation. Shadows from the needle interfere with the endothelial surface segmentation. Inflating the needle allows for the area by the white arrow to be corrected. (B) Corrected segmentation taken from height map in (F). (C) Height map of the endothelial surface of the original segmentation. Black arrow denotes corrupted segmentation caused by the needle. (D) Height map of the endothelial surface of the original segmentation with the inflated needle pixels marked in green and the location of B-scan (A) and (B) denoted by the blue line. (E) Height map of the endothelial surface of the original segmentation with pixels that changed after the trial inpainting marked in green. (F) Corrected height map after inpainting green pixels in (E). Black arrow denotes original location of corrupted segmentation.


2.3.4. Percent depth calculation

After correcting the segmentation, we computed the needle penetration depth in the cornea. We used the method from Pasricha et al. [38] because they showed this particular depth measurement was strongly indicative of bubble formation, and computing it did not take a significant amount of time (Section 3.2). We created a segmented OCT cross section along the axis of the needle by linearly interpolating the segmented epithelial and endothelial surfaces. The segmented endothelial surface and tracked tool tip were refraction corrected using 2D refraction correction [71], assuming a corneal refractive index of 1.376 [72]. Using this cross section along the needle axis, we determined the point on the epithelial surface whose normal vector passed closest to the computed needle tip. Then, we located the point on the endothelial surface that was closest to the line formed by the epithelial surface point and the needle tip. The penetration depth was the distance between the epithelial surface point and the needle tip as a percentage of the total distance between the epithelial and endothelial surface points. A sample refraction corrected cross section along the needle axis, illustrating the penetration depth calculation, is shown in Fig. 5.
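Once the epithelial point, endothelial point, and needle tip are known, the percent depth itself is a simple ratio of distances. A minimal sketch (the helper name is ours; points are assumed to be given in refraction-corrected physical coordinates):

```python
import numpy as np

def percent_depth(epi_point, endo_point, tip):
    """Needle penetration depth as a percentage of local corneal
    thickness: distance from the epithelial surface point to the needle
    tip, divided by the distance from the epithelial point to the
    endothelial point, following the measurement of Pasricha et al. [38]."""
    epi, endo, tip = (np.asarray(p, float) for p in (epi_point, endo_point, tip))
    return 100.0 * np.linalg.norm(tip - epi) / np.linalg.norm(endo - epi)
```

For example, a tip 450 µm below the epithelial point in a 1000 µm thick cornea yields a 45% depth.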
Fig. 5

Refraction corrected cross section along the axis of the needle. Green dots denote the epithelial surface point, needle tip, and endothelial surface point used to compute the depth along the magenta line.


2.4. Experiments

We quantitatively validated our image segmentation, needle tracking, and overall system efficacy for DALK needle insertions in three separate experiments. Corneal surgery fellows performed the needle insertion step of DALK in human donor corneas, and we measured the efficacy of the system by recording their perforation rate, perforation-free final depth, and the variance of their perforation-free final depth with and without segmentation/tracking. The validation experiments for our segmentation and system efficacy used nine human cadaver corneas provided by Miracles in Sight (Winston-Salem, NC). Donor corneas were mounted on a Barron artificial anterior chamber (Katena; Denville, NJ), and a syringe was used to pressurize the corneas with saline. The corneas were imaged under an MIOCT system, which included a stereoscopic microscope (Fig. 6). We used a 27-gauge needle in all experiments. Needle tracking validation was performed without the operating microscope. This study was approved by the Duke University Health System Institutional Review Board and was conducted in accordance with Health Insurance Portability and Accountability Act regulations and the standards of the 1964 Declaration of Helsinki.
Fig. 6

Experimental setup for validation experiments. In the experiment where corneal fellows inserted needles into the cornea, a tracked cross section was displayed on the monitor next to the microscope.


2.4.1. Real-time segmentation

To test our segmentation algorithm, we imaged seven of the nine donor corneas and segmented the center 25 B-scans of the volume (175 B-scans total). We compared the results of our automatic segmentation to a single manual grader’s segmentation. Because our automatic segmentation downsampled the images, we upsampled and linearly interpolated our automatic segmentation before comparing it to the manual segmentation. We also corrected for manual grader bias [42] by randomly selecting one manually segmented B-scan from each of the seven corneas and adding the average bias to our automatic segmentation. To test our segmentation correction via inpainting, we imaged the same seven corneas while a corneal surgery fellow positioned a 27-gauge needle in the volume above the cornea. We then compared the result of the corrected automatic segmentation to the manual segmentation with no needle in the cornea. To account for bulk cornea motion between the volume acquisitions with and without the needle, we registered B-scans [73] before comparing the segmentation. We computed the vertical difference between manually and automatically segmented layers at each A-scan. Our error measurement was the mean absolute vertical difference between methods. We obtained the average error using the center 80% of A-scans in each B-scan to remove the influence of large errors in the clinically less relevant periphery.

2.4.2. Needle tracking

To evaluate needle tracking performance, we mounted a 27-gauge needle on a calibrated 3-axis micrometer stage and roughly aligned the axes of translation to those of the OCT volume using three rotation stages. By aligning the translation and volume axes, we were able to approximate the error along the insertion direction, the direction orthogonal to insertion, and in depth. The needle was inserted along the fast axis of our scan. We translated the needle by hand in a 64-point grid pattern inside the volume and recorded the output of our tracking. The pattern consisted of four points spaced over 7.5 mm along the direction of insertion, four points spaced over 6.0 mm along the direction perpendicular to insertion, and four points spaced over 3.0 mm in depth. To compute the tracking error, we took the pattern as ground truth and computed the difference in incremental displacement between successive pattern points and our tracked positions. To test the accuracy of our yaw/pitch rotation estimates, we mounted the needle on a calibrated rotation stage and rotated the needle in increments of 5° over 360° (yaw) and 1° over ±10° (pitch). We used the rotation stage as ground truth and error was measured as the difference in angle between our tracking and the rotation stage.
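The incremental-displacement error metric described above amounts to differencing successive positions in both series and comparing them per axis. A minimal sketch (the helper name is ours):

```python
import numpy as np

def incremental_errors(pattern, tracked):
    """Per-axis tracking error: the absolute difference between
    successive displacement increments of the ground-truth pattern and
    of the tracked positions. pattern and tracked are (N, 3) arrays of
    positions along insertion, orthogonal-to-insertion, and depth axes."""
    return np.abs(np.diff(pattern, axis=0) - np.diff(tracked, axis=0))
```

For instance, a pattern stepping 1 mm per move along the insertion axis, tracked with small offsets, yields the per-step errors directly.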

2.4.3. Entire system

We evaluated the efficacy of our segmentation and tracking by having surgeons perform the needle insertion step of DALK in ex vivo human donor corneas. We compared the performance of corneal surgical fellows using a stereoscopic operating microscope alone (i.e., the clinical standard of care) to their performance when they used the microscope and the output of our tracking/segmentation. The surgeons were provided a tracked cross section along the axis of their needle (as in Fig. 5), labeled with the needle’s calculated percent depth in the cornea, on a monitor next to the microscope (Fig. 6). A total of three corneal surgical fellows performed needle insertions. Each surgeon completed 24 consecutive trials on three different corneas, eight per cornea. Prior to performing trials, we showed each surgeon example OCT cross sections of various needle penetration depths (Fig. 7). In 12 insertions, the surgeon viewed the procedure through the surgical microscope only; in the other 12, the surgeon viewed the procedure both through the microscope and on a nearby monitor displaying the output of our segmentation/tracking algorithm. Surgeons roughly aligned their needle with the scan’s fast axis to ensure a high-resolution cross section (~500 pixels wide vs. ~96). We used the first four insertions (two microscope-only and two with segmentation/tracking) to familiarize the surgeons with our setup; these were not included in the statistical analysis of surgeon performance. In each trial, surgeons attempted to insert a needle into the donor cornea to 80%–90% depth and indicated when they would have injected the air bubble to end the trial. The order of all trials was randomized.
Fig. 7

Series of images depicting different needle penetration depths, as shown to all surgeons prior to performing the experiment. Needle percent depths are displayed at the bottom of each image.
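The percent depth displayed to surgeons can be understood as the needle tip's axial position relative to the two segmented corneal surfaces at the tip's transverse location; a minimal sketch of this calculation follows. The function name is ours, and the paper's exact computation may include corrections (e.g., for scan geometry) that we omit here.

```python
def percent_depth(tip_z, epi_z, endo_z):
    """Needle tip depth as a percent of local corneal thickness.

    tip_z, epi_z, endo_z: axial positions (increasing with depth) of the
    needle tip and of the segmented epithelial and endothelial surfaces
    at the tip's transverse location, in any common unit.
    """
    return 100.0 * (tip_z - epi_z) / (endo_z - epi_z)
```

For example, a tip at 0.85 mm with the epithelium at 0.10 mm and the endothelium at 1.10 mm sits at 75% depth, below the 80%–90% target range of the insertion task.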

We recorded volume and segmentation/tracking time series for each insertion. From the volume time series, we manually determined the final needle depth at the end of each insertion and whether the surgeon punctured through the endothelial layer at any point. We compared the number of punctures, the final percent depth of non-puncture insertions, and the variance of the final percent depth of non-puncture insertions between the two types of visualizations (microscope only and microscope with segmentation/tracking). We modeled the likelihood of a puncture for the two visualization methods using a generalized linear mixed model to account for effects from surgeons and corneas. We used Levene’s test to check if there was a significant difference in the variance of the final percent depth between the two visualizations. Because there was a significant difference in the variance between the two visualization methods, we modeled the estimated final percent depth surgeons would achieve with a Gaussian location-scale linear mixed model [74] using data from the non-puncture trials. We also calculated the absolute error between our estimate of the final needle depth and the manually determined final needle depth.
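The variance check described above can be illustrated with Levene's test as implemented in SciPy; the final-depth samples below are fabricated for illustration only and are not the study's data.

```python
import numpy as np
from scipy.stats import levene

# Fabricated final-depth samples (percent of corneal thickness) for the two
# visualization groups; illustrative only, not the study data.
microscope_only = np.array([45.0, 55.0, 62.0, 70.0, 80.0, 38.0, 66.0, 74.0])
with_tracking = np.array([78.0, 82.0, 75.0, 80.0, 84.0, 77.0, 81.0, 79.0])

# Levene's test for a difference in variance between the two groups; a small
# p-value indicates the group variances differ significantly.
stat, p = levene(microscope_only, with_tracking)
print(f"W = {stat:.2f}, p = {p:.4f}")
```

A significant result here, as in the study, motivates a location-scale model (which lets both the mean and the variance differ by group) rather than an ordinary linear mixed model.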

3. Results

3.1. Real-time segmentation

The mean absolute error ± standard deviation between automatic and manual segmentation with no needle was 17 µm ± 13 µm for the epithelial surface and 25 µm ± 23 µm for the endothelial surface. With a needle present, the error between the corrected automatic segmentation and the manual segmentation was 24 µm ± 26 µm for the epithelial surface and 30 µm ± 32 µm for the endothelial surface. A qualitative comparison of manual versus automatic segmentation with and without a needle present is shown in Fig. 8. Segmentation error statistics are shown in Table 1. The mean time required to segment 16 B-scans during all trials was 79.6 ms ± 6.5 ms.
Fig. 8

Comparison of manual and automatic segmentation for a B-scan with and without a needle. (A) Original B-scan, with no needle. (B) Segmented B-scan. Green denotes the manual segmentation and purple denotes the automatic segmentation. Where the green is not visible, the two methods segmented the same point. (C) Original B-scan with a needle. (D) Uncorrected automatic segmentation. (E) Corrected automatic segmentation (purple) and manual segmentation (green). Best viewed in color.

Table 1

Mean Absolute A-scan Segmentation Error

                      No Needle             Needle
                      Mean (mm)  SD (mm)    Mean (mm)  SD (mm)
Epithelial Surface    0.017      0.013      0.024      0.026
Endothelial Surface   0.025      0.023      0.030      0.032

3.2. Needle tracking

The approximate RMS tracking errors between our algorithm and the calibrated micrometer stage along the insertion direction, orthogonal to the insertion direction, and in depth were 15 µm, 16 µm, and 7 µm, respectively. The overall RMS position tracking error in 3D was 12 µm. The RMS errors for yaw and pitch were 0.300° and 0.099°, respectively. Needle tracking RMS errors are shown in Table 2. The mean time required for the software to find the needle and correct the segmentation was 16.8 ms ± 6.4 ms. The mean total time needed to segment the last group of B-scans in the volume, track the needle, and correct the segmentation was 96.8 ms ± 8.9 ms, which was below the 170 ms real-time deadline.
Table 2

Needle Tracking Position and Rotation Error

           Position (mm)                                 Rotation (°)
           Insertion  Orth. to Insertion  Depth  Total   Yaw    Pitch
RMS Error  0.015      0.016               0.007  0.012   0.300  0.099

3.3. Entire system

Surgeons punctured through the endothelium in 2 of 30 trials using segmentation/tracking and in 15 of 30 trials using only the microscope. This reduction in puncture rate with segmentation/tracking was statistically significant (p < 0.001). A scatter plot of the final percent depth for trials in which the surgeon did not puncture the endothelium is shown in Fig. 9(A), and the performance of our automatic needle depth calculation compared against manual segmentation is shown in Fig. 9(B). The standard deviation of the final percent depth was 6.68% for segmentation/tracking trials (N = 28) and 17.16% for microscope-only trials (N = 15); this reduction in variance with segmentation/tracking was statistically significant (p = 0.009). Our model estimated that surgeons would achieve a mean percent depth of 79.3% when viewing the output of our segmentation/tracking and 62.2% when given only the microscope; this increase in estimated final needle depth was statistically significant (p < 0.001). Over 70 trials, the mean absolute error between our automatic needle percent depth calculation and the manual calculation was 6.83% (49 µm) ± 4.45% (34 µm). Our method failed to identify the needle in the final volume in two trials.
Fig. 9

(A) Plot of the final needle depth expressed as a percent of corneal thickness for all trials in which the surgeon did not puncture the endothelium. A blue X indicates the mean of the group and error bars denote one standard deviation. (B) Plot illustrating performance of the automatic needle percent depth calculation compared to the manual calculation.


4. Discussion

This work demonstrated how automatic, real-time quantitative metrics obtained from OCT can drastically improve surgeon performance in an ex vivo setting when compared to using only the stereo microscope, the current clinical standard. We were able to segment images despite tissue deformation, shadowing, and artifacts introduced by the needle. Our computation-based approach to needle tracking (as opposed to a hardware-based approach [56]) allowed us to track the needle without performing calibration between the OCT and needle coordinate frames. Although calibration is not computationally expensive, it must be performed any time either imaging system (OCT or tracking cameras) is moved. Our approach to tracking and segmentation also required no modifications to the needle, in contrast to prior work [10, 56, 58, 59]. By correcting the segmentation, we were able to visualize the needle and important boundaries simultaneously. However, the principal disadvantage of our approach was the tracking update rate, which was limited by our 100 kHz A-scan rate. The delay surgeons experienced between performing an action and seeing it displayed depended on the location of their needle in the volume. If their needle was in the first B-scans of the volume, the worst case, the approximate delay was 688 × 96 ÷ 10⁵ s + 96.8 ms ≈ 758 ms. If their needle was in the last B-scans of the volume, the best case, the approximate delay was 96.8 ms. For most insertions, the surgeon’s needle was near the middle of the volume and the approximate delay was 688 × 48 ÷ 10⁵ s + 96.8 ms ≈ 427 ms. This latency could explain why surgeons still punctured the endothelium twice even with OCT guidance. We and others have demonstrated faster OCT systems [19, 23, 75–77], which would reduce this latency. The need for segmentation and tracking to run in real-time was a major constraint when designing these methods.
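The latency figures above follow directly from the scan geometry and the fixed processing time (small rounding differences from the published figures aside); a minimal sketch reproducing them, with constant and function names of our own choosing:

```python
A_SCAN_RATE_HZ = 100_000  # 100 kHz A-scan rate
A_PER_B = 688             # A-scans per B-scan (from the latency formula in the text)
B_PER_VOL = 96            # B-scans per volume
PROC_MS = 96.8            # segmentation + tracking + correction time (ms)

def display_latency_ms(b_scans_until_rescanned):
    """Approximate latency between a needle motion and its display: the
    acquisition time for the B-scans remaining before the needle region is
    re-imaged, plus the fixed processing time."""
    acq_ms = 1000.0 * A_PER_B * b_scans_until_rescanned / A_SCAN_RATE_HZ
    return acq_ms + PROC_MS

print(f"worst case:   {display_latency_ms(B_PER_VOL):.0f} ms")       # needle in first B-scans
print(f"typical case: {display_latency_ms(B_PER_VOL // 2):.0f} ms")  # needle mid-volume
print(f"best case:    {display_latency_ms(0):.1f} ms")               # needle in last B-scans
```

Doubling the A-scan rate halves the acquisition term, which is why faster OCT engines directly reduce this latency.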
Any additional delay in providing feedback to surgeons decreases its usefulness because of the dynamic nature of these procedures. To meet this constraint, we segmented B-scans in parallel, greatly reduced the search space when segmenting the epithelial and endothelial surfaces (Fig. 2), and downsampled the data in all three dimensions prior to segmentation. By downsampling, we took advantage of the fact that corneal boundaries are naturally smooth (having been fit by low-order polynomials in prior work [43]), while still showing fully sampled and rendered images to the surgeon for real-time guidance. It is possible to design significantly more accurate segmentation methods [78].

The artifacts, shadowing, and disruption of the segmentation introduced by the needle necessitated the use of empirically chosen parameters. Although we relied on these parameters, they were validated on a data set completely independent from the one used to select them. Additionally, because the segmentation and tracking methods operate in real-time, these parameters could easily be adjusted during live imaging to achieve the desired result.

While our results from the ex vivo needle insertions were encouraging, we recognize some limitations in our experimental design. We did not directly compare the performance of surgeons against a stereoscopic operating microscope with manually tracked OCT; this additional experiment might have provided insight into the reason for surgeons’ improved performance. However, we chose not to perform it for two reasons. First, while intrasurgical OCT systems have recently become commercially available [14–16] and are promising for this application, they are not yet in widespread use, and the majority of surgeons in practice still perform the DALK procedure without OCT guidance. We believe a comparison of our method against the current clinical standard is most appropriate.
Second, surgeon performance with manual needle tracking critically depends on the ability of the operator to accurately position the displayed B-scan at the needle tip. We would have had difficulty distinguishing whether performance differences between automatic and manual tracking were due to operator performance or manual B-scan tracking error. Additionally, to decrease the number of donor corneas required to perform the experiment, we did not have the surgeon inject an air bubble. Although needle penetration depth is strongly correlated with success of big bubble formation [38], injecting air would have provided a more definitive result. Finally, the simulated procedure did not capture all the complexity of live human surgery where patient motion, additional tools, and diseased corneas present additional barriers to successful segmentation and tracking.

5. Conclusion

In this work, we developed methods for real-time segmentation and needle tracking of volumetric corneal OCT data for use in an intrasurgical setting. Tracking and segmentation were used to provide surgeons with live feedback of their needle penetration depth in ex vivo DALK needle insertions. With this feedback, surgeons perforated less frequently, achieved greater penetration depths, and were more consistent than when using only the surgical microscope. Automatically providing ophthalmic surgeons with the anatomically relevant location of their tool during surgery has the potential to increase the success rate of technically difficult procedures such as DALK.