Task-Level vs. Segment-Level Quantitative Metrics for Surgical Skill Assessment.

S Swaroop Vedula, Anand Malpani, Narges Ahmidi, Sanjeev Khudanpur, Gregory Hager, Chi Chiung Grace Chen.

Abstract

OBJECTIVE: Task-level metrics of time and motion efficiency are valid measures of surgical technical skill. After hierarchical task decomposition, such metrics may also be computed for segments (maneuvers and gestures) within a task. Our objective was to compare task-level and segment-level (maneuver- and gesture-level) metrics for surgical technical skill assessment.
DESIGN: Our analyses include predictive modeling using data from a prospective cohort study. We used a hierarchical semantic vocabulary to segment a simple surgical task of passing a needle across an incision and tying a surgeon's knot into maneuvers and gestures. We computed time, path length, and movements for the task, maneuvers, and gestures using tool motion data. We fit logistic regression models to predict experience-based skill using the quantitative metrics. We compared the area under a receiver operating characteristic curve (AUC) for task-level, maneuver-level, and gesture-level models.
SETTING: Robotic surgical skills training laboratory.
PARTICIPANTS: In total, 4 faculty surgeons with experience in robotic surgery and 14 trainee surgeons with no or minimal experience in robotic surgery.
RESULTS: Experts performed the task in shorter time (49.74 s; 95% CI = 43.27-56.21 vs. 81.97 s; 95% CI = 69.71-94.22), with shorter path length (1.63 m; 95% CI = 1.49-1.76 vs. 2.23 m; 95% CI = 1.91-2.56), and with fewer movements (429.25; 95% CI = 383.80-474.70 vs. 728.69; 95% CI = 631.84-825.54) than novices. Experts also differed from novices on metrics for individual maneuvers and gestures. The AUCs were 0.79 (95% CI = 0.62-0.97) for task-level models, 0.78 (95% CI = 0.60-0.96) for maneuver-level models, and 0.70 (95% CI = 0.44-0.97) for gesture-level models. There was no statistically significant difference in AUC between task-level and maneuver-level (p = 0.7) or gesture-level models (p = 0.17).
CONCLUSIONS: Maneuver-level and gesture-level metrics are discriminative of surgical skill and can be used to provide targeted feedback to surgical trainees.
Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
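The modeling approach described in the abstract (logistic regression on time, path-length, and movement-count metrics to classify experts vs. novices, scored by ROC AUC) can be sketched as follows. This is a minimal illustration, not the authors' code; all numbers below are synthetic, only loosely inspired by the reported group means, and the cross-validation scheme is an assumption.

```python
# Sketch: fit a logistic regression on task-level motion metrics
# (time, path length, movement count) to predict expert vs. novice,
# and score it with the area under the ROC curve (AUC).
# All data here are synthetic illustrations, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic cohort: 4 experts, 14 novices (time in s, path length in m,
# number of movements), loosely matching the reported group means.
experts = np.column_stack([
    rng.normal(50, 6, 4), rng.normal(1.6, 0.1, 4), rng.normal(430, 45, 4)])
novices = np.column_stack([
    rng.normal(82, 12, 14), rng.normal(2.2, 0.3, 14), rng.normal(730, 95, 14)])
X = np.vstack([experts, novices])
y = np.array([1] * 4 + [0] * 14)  # 1 = expert, 0 = novice

# Standardize features (they live on very different scales) and use
# cross-validated predicted probabilities so the AUC is not computed
# on training data -- important with only 18 participants.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
probs = cross_val_predict(model, X, y, cv=3, method="predict_proba")[:, 1]
auc = roc_auc_score(y, probs)
print(f"task-level AUC (synthetic data): {auc:.2f}")
```

The same pipeline would be refit per maneuver or per gesture on the segment-level metrics to reproduce the maneuver-level and gesture-level comparisons.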

Keywords:  Medical Knowledge; Patient Care; Practice-Based Learning and Improvement; objective skill assessment; robotic surgical skills; segment-level skill metrics; task decomposition; task-level skill metrics

Year:  2016        PMID: 26896147     DOI: 10.1016/j.jsurg.2015.11.009

Source DB:  PubMed          Journal:  J Surg Educ        ISSN: 1878-7452            Impact factor:   2.891


Related articles: 9 in total

Review 1.  Deep learning with convolutional neural network for objective skill evaluation in robot-assisted surgery.

Authors:  Ziheng Wang; Ann Majewicz Fey
Journal:  Int J Comput Assist Radiol Surg       Date:  2018-09-25       Impact factor: 2.924

2.  Query-by-example surgical activity detection.

Authors:  Yixin Gao; S Swaroop Vedula; Gyusung I Lee; Mija R Lee; Sanjeev Khudanpur; Gregory D Hager
Journal:  Int J Comput Assist Radiol Surg       Date:  2016-04-12       Impact factor: 2.924

3.  Anatomical Region Segmentation for Objective Surgical Skill Assessment with Operating Room Motion Data.

Authors:  Yangming Li; Randall A Bly; R Alex Harbison; Ian M Humphreys; Mark E Whipple; Blake Hannaford; Kris S Moe
Journal:  J Neurol Surg B Skull Base       Date:  2017-07-31

Review 4.  Objective Assessment of Surgical Technical Skill and Competency in the Operating Room.

Authors:  S Swaroop Vedula; Masaru Ishii; Gregory D Hager
Journal:  Annu Rev Biomed Eng       Date:  2017-03-27       Impact factor: 9.590

5.  Frame-wise detection of surgeon stress levels during laparoscopic training using kinematic data.

Authors:  Yi Zheng; Grey Leonard; Herbert Zeh; Ann Majewicz Fey
Journal:  Int J Comput Assist Radiol Surg       Date:  2022-02-12       Impact factor: 2.924

6.  Technical Skill Impacts the Success of Sequential Robotic Suturing Substeps.

Authors:  Daniel I Sanford; Balint Der; Taseen F Haque; Runzhuo Ma; Ryan Hakim; Jessica H Nguyen; Steven Cen; Andrew J Hung
Journal:  J Endourol       Date:  2022-02       Impact factor: 2.942

7.  Temporal clustering of surgical activities in robot-assisted surgery.

Authors:  Aneeq Zia; Chi Zhang; Xiaobin Xiong; Anthony M Jarc
Journal:  Int J Comput Assist Radiol Surg       Date:  2017-05-05       Impact factor: 2.924

8.  Human-centric predictive model of task difficulty for human-in-the-loop control tasks.

Authors:  Ziheng Wang; Ann Majewicz Fey
Journal:  PLoS One       Date:  2018-04-05       Impact factor: 3.240

9.  Movement-level process modeling of microsurgical bimanual and unimanual tasks.

Authors:  Jani Koskinen; Antti Huotarinen; Antti-Pekka Elomaa; Bin Zheng; Roman Bednarik
Journal:  Int J Comput Assist Radiol Surg       Date:  2021-12-15       Impact factor: 2.924


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.