Eung-Joo Lee, William Plishker, Xinyang Liu, Shuvra S Bhattacharyya, Raj Shekhar.
Abstract
Surgical tool tracking has a variety of applications in different surgical scenarios. Electromagnetic (EM) tracking can be utilised for tool tracking, but the accuracy is often limited by magnetic interference. Vision-based methods have also been suggested; however, tracking robustness is limited by specular reflection, occlusions, and blurriness observed in the endoscopic image. Recently, deep learning-based methods have shown competitive performance on segmentation and tracking of surgical tools. The main bottleneck of these methods lies in acquiring a sufficient amount of pixel-wise, annotated training data, which demands substantial labour costs. To tackle this issue, the authors propose a weakly supervised method for surgical tool segmentation and tracking based on hybrid sensor systems. They first generate semantic labellings using EM tracking and laparoscopic image processing concurrently. They then train a light-weight deep segmentation network to obtain a binary segmentation mask that enables tool tracking. To the authors' knowledge, the proposed method is the first to integrate EM tracking and laparoscopic image processing for generation of training labels. They demonstrate that their framework achieves accurate, automatic tool segmentation (i.e. without any manual labelling of the surgical tool to be tracked) and robust tool tracking in laparoscopic image sequences.
Keywords: annotated training data; automatic tool segmentation; binary segmentation mask; computer vision; deep learning-based methods; electromagnetic tracking; endoscopes; image segmentation; image sequences; laparoscopic image processing; learning (artificial intelligence); light-weight deep segmentation network; medical image processing; medical robotics; neural nets; pixel-wise training data; real-time surgical tool tracking; robust tool tracking; supervised segmentation; surgery; surgical scenarios; surgical tool segmentation; tracking; tracking robustness; vision-based methods
Year: 2019 PMID: 32038863 PMCID: PMC6952260 DOI: 10.1049/htl.2019.0083
Source DB: PubMed Journal: Healthc Technol Lett ISSN: 2053-3713
Fig. 1 Overall structure of our proposed framework. Semantic labels generated in the Semantic Labelling subsystem are used to train a DCNN for the Weakly Supervised Segmentation subsystem
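As a reading aid, a minimal runnable sketch of this two-subsystem flow is given below. The threshold-based "labeller" and the single-weight logistic "network" are toy stand-ins chosen only to make the loop executable; they are not the authors' implementation (which uses EM tracking, Random Walks, and LinkNet-style DCNNs).

```python
import numpy as np

# Toy sketch of the Fig. 1 flow: an automatic labelling subsystem feeds
# training targets to a learned segmenter. Both components below are
# illustrative stand-ins, not the paper's actual pipeline.

def semantic_labelling(frame):
    # Stand-in for the Semantic Labelling subsystem (Figs. 2-4):
    # returns a binary label map for the tool region.
    return (frame > frame.mean()).astype(np.float64)

def train_step(w, frame, label, lr=0.1):
    # Stand-in for one training step of the segmentation DCNN:
    # a per-pixel logistic model with a single shared weight.
    prob = 1.0 / (1.0 + np.exp(-w * frame))
    grad = np.mean((prob - label) * frame)   # gradient of pixel-wise BCE
    return w - lr * grad

frames = [np.random.rand(64, 64) for _ in range(8)]  # fake video frames
w = 0.0
for frame in frames:
    label = semantic_labelling(frame)   # no manual annotation involved
    w = train_step(w, frame, label)
```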
Fig. 2 Coarse seed generation using EM tracking
a Hybrid sensor system setup
b Coarse seed cues derived from EM tracking: tip point (red point) and intersection point (green point)
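The coarse seeds in Fig. 2b come from projecting the EM-tracked tool geometry into the image. A hedged sketch of that projection step, assuming a calibrated pinhole camera and a known EM-to-camera (hand-eye) transform, is below; all matrix and coordinate values are made-up placeholders.

```python
import numpy as np

# Project the EM-tracked tool tip into pixel coordinates with a pinhole
# model. K and T_cam_em would come from camera and hand-eye calibration;
# the numbers below are placeholders for illustration.

K = np.array([[800.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 800.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
T_cam_em = np.eye(4)                    # EM frame -> camera frame

tip_em = np.array([10.0, -5.0, 120.0, 1.0])  # tool tip, EM coordinates
tip_cam = T_cam_em @ tip_em                  # into camera coordinates
u, v, w = K @ tip_cam[:3]
tip_px = (u / w, v / w)                      # coarse tip seed (red point)
print(tip_px)
```

The intersection point (green) can presumably be obtained analogously, by projecting the tracked tool axis into the image and intersecting the resulting line with the frame border.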
Fig. 3 Fine seed derivation
a ROI generation
b Line detection using the Probabilistic Hough Transform
c Orientation-based line filtering
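For Figs. 3b and 3c, a hedged sketch using OpenCV's Probabilistic Hough Transform is shown below; the synthetic ROI, the parameter values, and the ±10° tolerance around the EM-derived orientation are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

# Fig. 3b-c sketch: detect line segments in the ROI with the Probabilistic
# Hough Transform, then keep only those roughly parallel to the tool
# orientation implied by EM tracking.

roi = np.zeros((240, 320), dtype=np.uint8)
cv2.line(roi, (20, 200), (300, 40), 255, 3)   # synthetic tool edge
em_angle_deg = -30.0                          # orientation from EM pose

edges = cv2.Canny(roi, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=60, maxLineGap=10)

kept = []
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    # compare orientations modulo 180 degrees
    diff = abs((angle - em_angle_deg + 90.0) % 180.0 - 90.0)
    if diff < 10.0:                           # orientation-based filter
        kept.append((x1, y1, x2, y2))
```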
Fig. 4 Semantic labelling using the Random Walks framework
a Feature map overlaid onto the input image
b Seed refinement
c Binary semantic labelling
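Fig. 4's labelling step maps directly onto the classic Random Walks segmenter. A small self-contained sketch with scikit-image follows; the synthetic feature map and the seed thresholds (0.8 for foreground, 0.1 for background) are illustrative choices, though thresholds in exactly this range are swept in the first table below.

```python
import numpy as np
from skimage.segmentation import random_walker

# Fig. 4 sketch: threshold a feature map into foreground/background seeds,
# then let Random Walks label the remaining pixels. The feature map here
# is synthetic; in the paper it is built from the image and the
# EM-derived seed cues.

feature = 0.2 * np.random.rand(100, 100)
feature[40:60, 10:90] += 1.0                      # bright tool-like region

seeds = np.zeros(feature.shape, dtype=np.int32)   # 0 = unlabelled
seeds[feature > 0.8] = 1                          # foreground seeds
seeds[feature < 0.1] = 2                          # background seeds

labels = random_walker(feature, seeds, beta=130)
binary_labelling = labels == 1                    # binary semantic labelling
```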
Fig. 5 Illustration of results obtained from
a Semantic labelling from the training dataset
b Binary segmentation on the testing dataset. In the last column of the testing illustration, the red trajectory indicates the pose of the tool
Performance comparison of semantic labelling with different thresholds used for the seed refinement procedures
| bg \ fg | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|
| 0.1 | 64.1 | 70.3 | 73.8 | 68.4 |
| 0.2 | 72.2 | 75.2 | 82.1 | 75.9 |
| 0.3 | 76.3 | 82.6 | 86.0 | 79.5 |
| 0.4 | — | 82.9 | 88.1 | 85.3 |
Here, the column headers (0.6–0.9) and the row headers (0.1–0.4) are the thresholds for the foreground and background seeds, respectively. The table reports mean DSC values (%) with respect to manually segmented labellings.
Quantitative results of the proposed framework for the semantic labelling and segmentation tasks, based on 200 manually segmented labellings
| Segmentation DCNN model | Semantic labelling (training) DSC | Semantic labelling (training) Jaccard index | Binary segmentation (phantom) DSC | Binary segmentation (phantom) Jaccard index | Binary segmentation (in vivo) DSC | Binary segmentation (in vivo) Jaccard index |
|---|---|---|---|---|---|---|
| LinkNet-34 | 89.28% (3.37) | 85.62% (4.62) | 89.53% (4.20) | 86.36% (5.72) | 87.45% (5.02) | 84.32% (5.75) |
| LinkNet-152 | 90.14% (2.21) | 88.35% (5.17) | 91.46% (3.67) | 89.76% (4.83) | 88.86% (6.22) | 85.37% (6.52) |
| TernausNet-11 | 87.05% (3.82) | 86.15% (4.92) | 87.31% (3.90) | 85.61% (4.43) | 85.45% (6.21) | 83.67% (5.12) |
| U-Net | 75.28% (6.37) | 73.47% (8.62) | 72.67% (7.21) | 69.61% (9.16) | 71.45% (8.02) | 68.67% (9.30) |
The table reports mean values with standard deviations shown in parentheses.
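For reference, the two metrics reported in this table have the standard definitions sketched below, shown on a synthetic pair of masks (the masks themselves are arbitrary examples).

```python
import numpy as np

# Dice similarity coefficient (DSC) and Jaccard index between a predicted
# binary mask and a manually segmented ground-truth mask.

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 12:42] = True
print(f"DSC={dice(pred, gt):.3f}  Jaccard={jaccard(pred, gt):.3f}")
```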
Tracking accuracy in degrees acquired from the validation datasets for the segmentation task
| EM tracking | Proposed tracking | | | |
|---|---|---|---|---|
| baseline | LinkNet-34 | LinkNet-152 | TernausNet-11 | U-Net |
The table reports mean values with standard deviations shown in parentheses.
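Tracking accuracy in degrees implies comparing tool-axis orientations. One common way to read an orientation angle out of a binary segmentation mask is PCA over the foreground pixels, sketched below on a synthetic mask; the paper's exact pose-estimation step may differ.

```python
import numpy as np

# Estimate the tool-axis angle (degrees) from a binary mask via PCA of
# the foreground pixel coordinates. The mask is synthetic: a thick line
# with slope 0.5 (about 26.6 degrees).

mask = np.zeros((240, 320), dtype=bool)
rr, cc = np.indices(mask.shape)
mask[np.abs((rr - 120) - 0.5 * (cc - 160)) < 6] = True

ys, xs = np.nonzero(mask)
pts = np.stack([xs - xs.mean(), ys - ys.mean()])  # centred coordinates
cov = pts @ pts.T / pts.shape[1]
evals, evecs = np.linalg.eigh(cov)
axis = evecs[:, np.argmax(evals)]                 # dominant direction
angle_deg = np.degrees(np.arctan2(axis[1], axis[0]))
print(angle_deg)                                  # ~26.6 (up to sign)
```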