PURPOSE: Automatically segmenting and classifying surgical activities is an important prerequisite to providing automated, targeted assessment and feedback during surgical training. Prior work has focused almost exclusively on recognizing gestures, or short, atomic units of activity such as pushing a needle through tissue, whereas we also focus on recognizing higher-level maneuvers, such as a suture throw. Maneuvers exhibit more complexity and variability than the gestures from which they are composed; however, working at this granularity has the benefit of being consistent with existing training curricula. METHODS: Prior work has focused on hidden Markov model and conditional-random-field-based methods, which typically leverage unary terms that are local in time and linear in model parameters. Because maneuvers are governed by long-term, nonlinear dynamics, we argue that the more expressive unary terms offered by recurrent neural networks (RNNs) are better suited for this task. Four RNN architectures are compared for recognizing activities from kinematics: simple RNNs, long short-term memory, gated recurrent units, and mixed history RNNs. We report performance in terms of error rate and edit distance, and we use a functional analysis-of-variance framework to assess hyperparameter sensitivity for each architecture. RESULTS: We obtain state-of-the-art performance for both maneuver recognition from kinematics (4 maneuvers; error rate of [Formula: see text]; normalized edit distance of [Formula: see text]) and gesture recognition from kinematics (10 gestures; error rate of [Formula: see text]; normalized edit distance of [Formula: see text]). CONCLUSIONS: Automated maneuver recognition is feasible with RNNs, an exciting result that offers the opportunity to provide targeted assessment and feedback at a higher level of granularity.
In addition, we show that multiple hyperparameters are important for achieving good performance, and our hyperparameter analysis can guide future work in RNN-based activity recognition.
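To make the approach concrete, the sketch below shows frame-wise activity labeling with a gated recurrent unit (GRU), one of the four architectures compared. All dimensions, names, and the random (untrained) parameters are illustrative assumptions, not details from the paper; a real system would learn the weights from labeled kinematic trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 14-D kinematic features per frame, 8 hidden
# units, 4 maneuver classes, and a 100-frame surgical trial.
N_FEAT, N_HID, N_CLS, N_FRAMES = 14, 8, 4, 100

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init(rows, cols):
    # Small random weights; training is omitted in this sketch.
    return rng.normal(scale=0.1, size=(rows, cols))

Wz, Uz, bz = init(N_HID, N_FEAT), init(N_HID, N_HID), np.zeros(N_HID)
Wr, Ur, br = init(N_HID, N_FEAT), init(N_HID, N_HID), np.zeros(N_HID)
Wh, Uh, bh = init(N_HID, N_FEAT), init(N_HID, N_HID), np.zeros(N_HID)
Wo, bo = init(N_CLS, N_HID), np.zeros(N_CLS)

def gru_label(frames):
    """Assign one activity label to each kinematic frame."""
    h = np.zeros(N_HID)
    labels = []
    for x in frames:
        z = sigmoid(Wz @ x + Uz @ h + bz)              # update gate
        r = sigmoid(Wr @ x + Ur @ h + br)              # reset gate
        h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate state
        h = (1.0 - z) * h + z * h_tilde                # new hidden state
        labels.append(int(np.argmax(Wo @ h + bo)))     # per-frame class
    return labels

frames = rng.normal(size=(N_FRAMES, N_FEAT))  # stand-in kinematic data
labels = gru_label(frames)
```

Because the hidden state `h` is carried across all frames and updated through nonlinear gates, the per-frame unary term can depend on long-range history, which is the property the abstract argues makes RNNs better suited to maneuvers than local, linear unary terms.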