
Crowd-Sourced Assessment of Technical Skills: a novel method to evaluate surgical performance.

Carolyn Chen1, Lee White2, Timothy Kowalewski3, Rajesh Aggarwal4, Chris Lintott5, Bryan Comstock6, Katie Kuksenok7, Cecilia Aragon7, Daniel Holst8, Thomas Lendvay9.   

Abstract

BACKGROUND: Validated methods for the objective assessment of surgical skill are resource intensive. We sought to test a web-based grading tool that uses crowdsourcing, called Crowd-Sourced Assessment of Technical Skill.
MATERIALS AND METHODS: Institutional Review Board approval was granted to test the accuracy of Amazon.com's Mechanical Turk and Facebook crowdworkers, compared with experienced surgical faculty, in grading a recorded dry-laboratory robotic surgical suturing performance on three performance domains from a validated assessment tool. Assessors' free-text comments describing their rating rationale were used to explore the relationship between the language used by the crowd and grading accuracy.
RESULTS: On a global performance scale ranging from 3 to 15, 10 experienced surgeons graded the suturing video at a mean score of 12.11 (95% confidence interval [CI], 11.11-13.11). Mechanical Turk and Facebook graders rated the video at mean scores of 12.21 (95% CI, 11.98-12.43) and 12.06 (95% CI, 11.57-12.55), respectively. It took 24 h to obtain responses from 501 Mechanical Turk subjects, whereas it took 24 d for 10 faculty surgeons to complete the 3-min survey. The 110 Facebook subjects responded within 25 d. Language analysis indicated that crowdworkers who used contrast words (e.g., "but," "although") scored the performance more equivalently to experienced surgeons than crowdworkers who did not (P < 0.00001).
CONCLUSIONS: For a robotic suturing performance, we have shown that surgery-naive crowdworkers using Crowd-Sourced Assessment of Technical Skill can rapidly assess skill at a level equivalent to that of experienced faculty surgeons. It remains to be seen whether crowds can discriminate between different levels of skill and can accurately assess live human surgical performances.
Copyright © 2014 Elsevier Inc. All rights reserved.

Keywords:  Crowdsourcing; Education; GEARS; OSATS; Robotic surgery; Training

Year:  2013        PMID: 24555877     DOI: 10.1016/j.jss.2013.09.024

Source DB:  PubMed          Journal:  J Surg Res        ISSN: 0022-4804            Impact factor:   2.192


Related articles (30 in total):

1.  C-SATS: Assessing Surgical Skills Among Urology Residency Applicants.

Authors:  Simone L Vernez; Victor Huynh; Kathryn Osann; Zhamshid Okhunov; Jaime Landman; Ralph V Clayman
Journal:  J Endourol       Date:  2016-10-11       Impact factor: 2.942

2.  Crowdtruth validation: a new paradigm for validating algorithms that rely on image correspondences.

Authors:  Lena Maier-Hein; Daniel Kondermann; Tobias Roß; Sven Mersmann; Eric Heim; Sebastian Bodenstedt; Hannes Götz Kenngott; Alexandro Sanchez; Martin Wagner; Anas Preukschas; Anna-Laura Wekerle; Stefanie Helfert; Keno März; Arianeb Mehrabi; Stefanie Speidel; Christian Stock
Journal:  Int J Comput Assist Radiol Surg       Date:  2015-04-18       Impact factor: 2.924

3.  A study of crowdsourced segment-level surgical skill assessment using pairwise rankings.

Authors:  Anand Malpani; S Swaroop Vedula; Chi Chiung Grace Chen; Gregory D Hager
Journal:  Int J Comput Assist Radiol Surg       Date:  2015-06-30       Impact factor: 2.924

4.  Evaluation of crowd-sourced assessment of the critical view of safety in laparoscopic cholecystectomy.

Authors:  Shanley B Deal; Dimitrios Stefanidis; Dana Telem; Robert D Fanelli; Marian McDonald; Michael Ujiki; L Michael Brunt; Adnan A Alseidi
Journal:  Surg Endosc       Date:  2017-04-25       Impact factor: 4.584

5.  Assessment of Robotic Console Skills (ARCS): construct validity of a novel global rating scale for technical skills in robotically assisted surgery.

Authors:  May Liu; Shreya Purohit; Joshua Mazanetz; Whitney Allen; Usha S Kreaden; Myriam Curet
Journal:  Surg Endosc       Date:  2017-07-01       Impact factor: 4.584

6.  The minimally acceptable classification criterion for surgical skill: intent vectors and separability of raw motion data.

Authors:  Rodney L Dockter; Thomas S Lendvay; Robert M Sweet; Timothy M Kowalewski
Journal:  Int J Comput Assist Radiol Surg       Date:  2017-05-18       Impact factor: 2.924

7.  Video assessment of laparoscopic skills by novices and experts: implications for surgical education.

Authors:  Celine Yeung; Brian Carrillo; Victor Pope; Shahob Hosseinpour; J Ted Gerstle; Georges Azzie
Journal:  Surg Endosc       Date:  2017-02-15       Impact factor: 4.584

8.  A computer vision technique for automated assessment of surgical performance using surgeons' console-feed videos.

Authors:  Amir Baghdadi; Ahmed A Hussein; Youssef Ahmed; Lora A Cavuoto; Khurshid A Guru
Journal:  Int J Comput Assist Radiol Surg       Date:  2018-11-20       Impact factor: 2.924

9.  The effect of video playback speed on surgeon technical skill perception.

Authors:  Jason D Kelly; Ashley Petersen; Thomas S Lendvay; Timothy M Kowalewski
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-04-15       Impact factor: 2.924

10.  A Vision for Using Simulation & Virtual Coaching to Improve the Community Practice of Orthopedic Trauma Surgery.

Authors:  Geb W Thomas; Steven Long; Marcus Tatum; Timothy Kowalewski; Dominik Mattioli; J Lawrence Marsh; Heather R Kowalski; Matthew D Karam; Joan E Bechtold; Donald D Anderson
Journal:  Iowa Orthop J       Date:  2020
