Shanley B Deal 1, Thomas S Lendvay 2, Mohamad I Haque 3, Timothy Brand 3, Bryan Comstock 2, Justin Warren 2, Adnan Alseidi 4

1. Virginia Mason Medical Center, Department of General Surgery, Mailstop H8-GME, 1100 9th Ave., Seattle, WA, 98101, USA. Electronic address: Shanley.Deal@virginiamason.org.
2. Department of Urology, University of Washington, Seattle, WA, USA.
3. Madigan Army Medical Center, Department of General Surgery, Tacoma, WA, USA.
4. Virginia Mason Medical Center, Department of General Surgery, Mailstop H8-GME, 1100 9th Ave., Seattle, WA, 98101, USA.
Abstract
BACKGROUND: Objective, unbiased assessment of surgical skills remains a challenge in surgical education. We sought to evaluate the feasibility and reliability of Crowd-Sourced Assessment of Technical Skills.
METHODS: Seven volunteer general surgery interns were given time to train on, and were then tested on, laparoscopic peg transfer, precision cutting, and intracorporeal knot-tying. Six faculty experts (FEs) and 203 Amazon.com Mechanical Turk crowd workers (CWs) evaluated 21 deidentified video clips using the validated Global Objective Assessment of Laparoscopic Skills (GOALS) rating instrument.
RESULTS: We received 662 eligible ratings from 203 CWs within 19 hours and 15 minutes, and 126 ratings from 6 FEs over 10 days. FE video ratings showed borderline internal consistency (Krippendorff's alpha = .55). FE ratings were highly correlated with CW ratings (Pearson's correlation coefficient = .78, P < .001).
CONCLUSION: We propose Crowd-Sourced Assessment of Technical Skills as a reliable, basic tool to standardize the evaluation of technical skills in general surgery.
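The two statistics reported in the RESULTS, inter-rater reliability among the faculty experts (Krippendorff's alpha) and the agreement between FE and CW ratings (Pearson's r), are standard to compute. Below is a minimal Python sketch on hypothetical per-video GOALS scores; the rating values, the interval-level form of alpha, and all variable names are illustrative assumptions, since the abstract does not state the exact statistical procedure used.

```python
"""Sketch of the two statistics reported in the abstract, on made-up data."""
from scipy.stats import pearsonr


def krippendorff_alpha_interval(units):
    """Krippendorff's alpha for interval-scale data.

    `units` is a list of lists: each inner list holds all ratings one
    video clip received. Units with fewer than two ratings are dropped
    because they contribute no pairable values.
    """
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)  # total number of pairable values
    # Observed disagreement: squared differences within each unit,
    # each unit weighted by 1 / (m_u - 1).
    d_o = sum(
        sum((a - b) ** 2 for i, a in enumerate(u)
            for j, b in enumerate(u) if i != j) / (len(u) - 1)
        for u in units
    ) / n
    # Expected disagreement: squared differences over all pooled values.
    pooled = [v for u in units for v in u]
    d_e = sum(
        (a - b) ** 2 for i, a in enumerate(pooled)
        for j, b in enumerate(pooled) if i != j
    ) / (n * (n - 1))
    return 1.0 - d_o / d_e


# Hypothetical total GOALS scores (3 of the 21 clips shown).
fe_ratings = [[18, 20, 17], [12, 11, 14], [22, 21, 23]]
cw_ratings = [[19, 17, 18, 20], [13, 12, 11, 15], [21, 22, 24, 23]]

# Inter-rater reliability among faculty experts.
alpha_fe = krippendorff_alpha_interval(fe_ratings)

# Correlation between mean FE and mean CW score per video.
fe_means = [sum(u) / len(u) for u in fe_ratings]
cw_means = [sum(u) / len(u) for u in cw_ratings]
r, p = pearsonr(fe_means, cw_means)

print(f"Krippendorff's alpha (FEs): {alpha_fe:.2f}")
print(f"Pearson r (FE vs CW means): {r:.2f}, p = {p:.3f}")
```

In practice a library implementation (e.g., the `krippendorff` package on PyPI) would likely be preferred, and GOALS scores, being ordinal, may call for the ordinal rather than interval distance function; the hand-rolled interval version above is kept only to make the alpha = 1 - D_o/D_e structure explicit.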