Crowd-Sourced Assessment of Technical Skills for Validation of Basic Laparoscopic Urologic Skills Tasks.

Timothy M Kowalewski, Bryan Comstock, Robert Sweet, Cory Schaffhausen, Ashleigh Menhadji, Timothy Averch, Geoffrey Box, Timothy Brand, Michael Ferrandino, Jihad Kaouk, Bodo Knudsen, Jaime Landman, Benjamin Lee, Bradley F Schwartz, Elspeth McDougall, Thomas S Lendvay.

Abstract

PURPOSE: The BLUS (Basic Laparoscopic Urologic Skills) consortium sought to address the construct validity of BLUS tasks, and the wider problem of accurate, scalable and affordable skill evaluation, by investigating the concordance of 2 novel candidate methods, automated motion metrics and crowdsourcing, with faculty panel scores.
MATERIALS AND METHODS: A faculty panel of 5 surgeons and anonymous crowdworkers blindly reviewed a randomized sequence of 24 videos (12 pegboard and 12 suturing) sampled as representative of the 454 videos from the BLUS validation study, using the GOALS (Global Objective Assessment of Laparoscopic Skills) survey tool with appended pass-fail anchors via the same web-based user interface. Pre-recorded motion metrics (tool path length, jerk cost, etc.) were available for each video. Cronbach's alpha, Pearson's r and ROC AUC statistics were used to evaluate concordance between continuous scores, and between pass-fail decisions, among the 3 groups: faculty, crowds and motion metrics.
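A minimal sketch of how the named concordance statistics can be computed follows; this is not the authors' analysis code. The scores are synthetic, and the pass threshold of 15 and the use of faculty pass-fail calls as ground truth are illustrative assumptions.

```python
# Minimal sketch of the concordance statistics named above (Cronbach's alpha,
# Pearson's r, ROC AUC). All data here are synthetic; the study's actual
# ratings, thresholds and analysis pipeline are not reproduced.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

def cronbach_alpha(scores):
    """Cronbach's alpha for a (videos x rater-groups) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of rater columns
    item_vars = scores.var(axis=0, ddof=1)   # per-column sample variance
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical mean GOALS scores (5-25 scale) for 24 videos.
rng = np.random.default_rng(0)
faculty = rng.uniform(5, 25, size=24)            # faculty panel means
crowd = faculty + rng.normal(0, 1.5, size=24)    # crowd means, tracking faculty

alpha = cronbach_alpha(np.column_stack([faculty, crowd]))
r, _ = stats.pearsonr(faculty, crowd)

# ROC analysis: treat the faculty pass-fail call (assumed threshold: 15)
# as ground truth and the crowd score as the classifier score.
faculty_pass = (faculty >= 15).astype(int)
auc = roc_auc_score(faculty_pass, crowd)
print(f"alpha={alpha:.3f}  r={r:.3f}  AUC={auc:.3f}")
```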
RESULTS: Crowdworkers provided 1,840 ratings in approximately 48 hours, about 60 times faster than the faculty panel. The inter-rater reliability between mean expert and crowd ratings was good (Cronbach's α=0.826). Pass-fail decisions derived from crowd scores achieved an AUC of 96.9% (95% CI 90.3-100; positive predictive value 100%, negative predictive value 89%). Motion metrics and crowd scores provided similar or nearly identical concordance with faculty panel ratings and pass-fail decisions.
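As an arithmetic illustration of the predictive values above, one hypothetical 2x2 pass-fail table consistent with the reported figures is shown below; the counts are inferred for illustration only and are not taken from the paper.

```python
# Hypothetical 2x2 pass-fail table for 24 videos, chosen so that the derived
# values match the reported PPV (100%) and NPV (89%); the study's actual
# counts are not given here.
tp, fp = 15, 0    # crowd "pass" calls: true and false positives (assumed)
tn, fn = 8, 1     # crowd "fail" calls: true and false negatives (assumed)

ppv = tp / (tp + fp)   # positive predictive value = 15/15 = 1.00
npv = tn / (tn + fn)   # negative predictive value = 8/9  ~ 0.89
print(f"PPV={ppv:.0%}  NPV={npv:.0%}")
```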
CONCLUSIONS: The concordance of crowdsourced ratings with faculty panel scores, and the speed of crowd review, are sufficiently high to merit further investigation of crowdsourcing alongside automated motion metrics. The overall agreement among faculty, motion metrics and crowdworkers provides evidence supporting the construct validity of 2 of the 4 BLUS tasks.
Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

Keywords:  clinical competence; crowdsourcing; laparoscopy; urologic surgical procedures; validation studies

Year:  2016        PMID: 26778711     DOI: 10.1016/j.juro.2016.01.005

Source DB:  PubMed          Journal:  J Urol        ISSN: 0022-5347            Impact factor:   7.450


Related Articles (16 in total; first 10 shown below)

1.  C-SATS: Assessing Surgical Skills Among Urology Residency Applicants.

Authors:  Simone L Vernez; Victor Huynh; Kathryn Osann; Zhamshid Okhunov; Jaime Landman; Ralph V Clayman
Journal:  J Endourol       Date:  2016-10-11       Impact factor: 2.942

2.  Machine learning methods for automated technical skills assessment with instructional feedback in ultrasound-guided interventions.

Authors:  Matthew S Holden; Sean Xia; Hillary Lia; Zsuzsanna Keri; Colin Bell; Lindsey Patterson; Tamas Ungi; Gabor Fichtinger
Journal:  Int J Comput Assist Radiol Surg       Date:  2019-04-20       Impact factor: 2.924

3.  How Do Thresholds of Principle and Preference Influence Surgeon Assessments of Learner Performance?

Authors:  Tavis Apramian; Sayra Cristancho; Alp Sener; Lorelei Lingard
Journal:  Ann Surg       Date:  2018-08       Impact factor: 12.969

4.  Predicting surgical skill from the first N seconds of a task: value over task time using the isogony principle.

Authors:  Anna French; Thomas S Lendvay; Robert M Sweet; Timothy M Kowalewski
Journal:  Int J Comput Assist Radiol Surg       Date:  2017-05-17       Impact factor: 2.924

5.  Bidirectional long short-term memory for surgical skill classification of temporally segmented tasks.

Authors:  Jason D Kelly; Ashley Petersen; Thomas S Lendvay; Timothy M Kowalewski
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-09-30       Impact factor: 2.924

6.  Meaningful Assessment of Robotic Surgical Style using the Wisdom of Crowds.

Authors:  M Ershad; R Rege; A Majewicz Fey
Journal:  Int J Comput Assist Radiol Surg       Date:  2018-03-24       Impact factor: 2.924

7.  Feasibility of expert and crowd-sourced review of intraoperative video for quality improvement of intracorporeal urinary diversion during robotic radical cystectomy.

Authors:  Mitchell G Goldenberg; Jamal Nabhani; Christopher J D Wallis; Sameer Chopra; Andrew J Hung; Anne Schuckman; Hooman Djaladat; Siamak Daneshmand; Mihir M Desai; Monish Aron; Inderbir S Gill; Raj Satkunasivam
Journal:  Can Urol Assoc J       Date:  2017-10       Impact factor: 1.862

8.  Video replay in surgery: Can we make the "right call" in predicting outcomes?

Authors:  Edward D Matsumoto
Journal:  Can Urol Assoc J       Date:  2017-10       Impact factor: 1.862

9.  The effect of video playback speed on surgeon technical skill perception.

Authors:  Jason D Kelly; Ashley Petersen; Thomas S Lendvay; Timothy M Kowalewski
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-04-15       Impact factor: 2.924

10.  A Vision for Using Simulation & Virtual Coaching to Improve the Community Practice of Orthopedic Trauma Surgery.

Authors:  Geb W Thomas; Steven Long; Marcus Tatum; Timothy Kowalewski; Dominik Mattioli; J Lawrence Marsh; Heather R Kowalski; Matthew D Karam; Joan E Bechtold; Donald D Anderson
Journal:  Iowa Orthop J       Date:  2020
