Daniel Paysan, Luis Haug, Michael Bajka, Markus Oelhafen, Joachim M Buhmann.
Abstract
PURPOSE: Virtual reality-based simulators have the potential to become an essential part of surgical education. To make full use of this potential, they must be able to automatically recognize the activities performed by users and assess them. Since annotations of trajectories by human experts are expensive, there is a need for methods that can learn to recognize surgical activities in a data-efficient way.
Keywords: Deep Learning; Probabilistic modeling; Representation Learning; Self-supervised Learning; Surgical Activity Recognition; Unsupervised Learning
Year: 2021 PMID: 34542839 PMCID: PMC8589823 DOI: 10.1007/s11548-021-02493-z
Source DB: PubMed Journal: Int J Comput Assist Radiol Surg ISSN: 1861-6410 Impact factor: 2.924
Fig. 1: Summary of the proposed unsupervised activity recognition approach using self-supervised representation learning to allow for the modeling of different data modalities, i.e., the use of video and sensor data.
Fig. 2: Plot of the activation of update gate 14, whose first increase encodes the end of the diagnosis, shown here for two randomly chosen trajectories.
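The caption above describes reading a segment boundary off a gate-activation trace. A minimal sketch of one simple interpretation of "first increase", namely the first time step at which the activation crosses a threshold; the trace, threshold, and function name are illustrative assumptions, not from the paper:

```python
import numpy as np

def first_increase(activation, threshold=0.5):
    """Index of the first time step where the gate activation
    rises to or above the threshold; -1 if it never does.
    The threshold value is a hypothetical choice."""
    activation = np.asarray(activation)
    above = np.flatnonzero(activation >= threshold)
    return int(above[0]) if above.size else -1

# Toy gate-activation trace (illustrative only)
trace = [0.05, 0.08, 0.07, 0.62, 0.90, 0.30]
print(first_increase(trace))  # -> 3
```

In the paper's setting, such a detected index would mark the frame where the diagnosis phase ends; the actual boundary-extraction rule used by the authors may differ.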
Table: Mean IoU (mIoU) scores (higher is better), evaluated on the 18 trajectories for which ground-truth annotations for two key activities were obtained from the domain expert.

| 0.7409 (0.2809) | 0.1183 (0.2270) |
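The mIoU metric reported above can be computed frame-wise between a predicted and a ground-truth label sequence. A minimal sketch under that assumption; the label sequences and the function name `mean_iou` are illustrative, not taken from the paper's evaluation code:

```python
import numpy as np

def mean_iou(pred, gt):
    """Mean intersection-over-union over the activity labels present in gt,
    with one label per time step in each sequence."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    ious = []
    for label in np.unique(gt):
        inter = np.sum((pred == label) & (gt == label))
        union = np.sum((pred == label) | (gt == label))
        ious.append(inter / union if union else 0.0)
    return float(np.mean(ious))

# Toy trajectories (illustrative): two activities, labels 0 and 1
gt   = [0, 0, 0, 1, 1, 1, 1, 1]
pred = [0, 0, 1, 1, 1, 1, 1, 1]
print(mean_iou(pred, gt))  # label 0: 2/3, label 1: 5/6 -> 0.75
```

Averaging this score over the 18 annotated trajectories would give the kind of mean (standard deviation) figures shown in the table.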
Fig. 3: Visualization of the segmentation of the 18 trajectories of medium length using the SensorHSMM and the UpdateGateHSMM; only the latter makes use of features derived from self-supervised representation learning.
Fig. 4: Comparison of the segmentations for the annotated trajectory of the surgery.