Lee W White1, Timothy M Kowalewski2, Rodney Lee Dockter2, Bryan Comstock3, Blake Hannaford4, Thomas S Lendvay5. 1. School of Medicine, Stanford University, Palo Alto, California (at time of data collection and analysis: Department of Bioengineering, University of Washington, Seattle, Washington). 2. Department of Mechanical Engineering, University of Minnesota, Minneapolis, Minnesota. 3. Department of Biostatistics, University of Washington, Seattle, Washington. 4. Department of Electrical Engineering, University of Washington, Seattle, Washington. 5. Department of Urology, Seattle Children's Hospital, University of Washington Medical Center, University of Washington, Seattle, Washington.
Abstract
BACKGROUND: A surgeon's skill in the operating room has been shown to correlate with patients' clinical outcomes. Prompt, accurate assessment of surgical skill remains a challenge, in part because expert faculty reviewers are often unavailable. By harnessing large, readily available crowds through the Internet, rapid, accurate, and low-cost assessments may be achieved. We hypothesized that assessments provided by crowd workers would correlate highly with expert surgeons' assessments. MATERIALS AND METHODS: A group of 49 surgeons from two hospitals performed two dry-laboratory robotic surgical skill assessment tasks. Performances were video recorded and posted online for evaluation via Amazon Mechanical Turk. Each video was graded by crowd workers (n=30 per video) and by expert surgeons (n=3) using a modified Global Evaluative Assessment of Robotic Skills (GEARS) grading tool, and the mean scores were compared using the Cronbach's alpha statistic. RESULTS: GEARS evaluations from the crowd were obtained for each video and task and compared with the GEARS ratings from the expert surgeons. The crowd-based performance scores agreed with the experts' assessments, with Cronbach's alpha values of 0.84 and 0.92 for the two tasks, respectively. CONCLUSION: Crowd workers' assessments of basic robotic surgical dry-laboratory tasks agreed closely with expert surgeons' scores, and crowd responses cost less and were much faster to acquire. This study provides evidence that crowds may offer an adjunctive method for rapidly providing skills feedback to training and practicing surgeons.
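For readers unfamiliar with the agreement statistic reported above, Cronbach's alpha for k ratings of the same set of performances is computed from the variance of each rating column and the variance of the row sums. The sketch below is illustrative only (the function name and the toy data are our own, not from the study); it shows how an alpha value could be computed for paired crowd and expert mean GEARS scores.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (n_performances, k_raters) score matrix.

    alpha = k/(k-1) * (1 - sum(per-rater variance) / variance(row sums))
    """
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    rater_vars = ratings.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the summed scores
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Hypothetical example: columns are crowd-mean and expert-mean GEARS
# scores for five videos; values are invented for illustration.
scores = [[12.1, 12.5],
          [18.4, 17.9],
          [15.0, 15.6],
          [20.2, 19.8],
          [10.3, 11.0]]
alpha = cronbach_alpha(scores)
```

With two rating columns, alpha approaches 1 as the columns track each other closely, which is the sense in which the study's values of 0.84 and 0.92 indicate strong crowd-expert agreement.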