May Liu1, Shreya Purohit2, Joshua Mazanetz3, Whitney Allen4, Usha S Kreaden5, Myriam Curet6. 1. Medical Research, Intuitive Surgical, Inc., Sunnyvale, CA, USA. may.liu@intusurg.com. 2. Product Marketing, Intuitive Surgical, Inc., Sunnyvale, CA, USA. 3. Clinical Training, Torax Medical Inc., St Paul, MN, USA. 4. Product Education, Intuitive Surgical, Inc., Sunnyvale, CA, USA. 5. Clinical Affairs, Intuitive Surgical, Inc., Sunnyvale, CA, USA. 6. Medical Research, Intuitive Surgical, Inc., Sunnyvale, CA, USA.
Abstract
BACKGROUND: Skill assessment during robotically assisted surgery remains challenging. While the popularity of the Global Evaluative Assessment of Robotics Skills (GEARS) has grown, its lack of discrimination between independent console skills limits its usefulness. The purpose of this study was to evaluate construct validity and interrater reliability of a novel assessment designed to overcome this limitation. METHODS: We created the Assessment of Robotic Console Skills (ARCS), a global rating scale with six console skill domains. Fifteen volunteers who were console surgeons for 0 ("novice"), 1-100 ("intermediate"), or >100 ("experienced") robotically assisted procedures performed three standardized tasks. Three blinded raters scored the task videos using ARCS, with a 5-point Likert scale for each skill domain. Scores were analyzed for evidence of construct validity and interrater reliability. RESULTS: Group demographics were indistinguishable except for the number of robotically assisted procedures performed (p = 0.001). The mean scores of experienced subjects exceeded those of novices in dexterity (3.8 > 1.4, p < 0.001), field of view (4.1 > 1.8, p < 0.001), instrument visualization (3.9 > 2.2, p < 0.001), manipulator workspace (3.6 > 1.9, p = 0.001), and force sensitivity (4.3 > 2.6, p < 0.001). The mean scores of intermediate subjects exceeded those of novices in dexterity (2.8 > 1.4, p = 0.002), field of view (2.8 > 1.8, p = 0.021), instrument visualization (3.2 > 2.2, p = 0.045), manipulator workspace (3.1 > 1.9, p = 0.004), and force sensitivity (3.7 > 2.6, p = 0.033). The mean scores of experienced subjects exceeded those of intermediates in dexterity (3.8 > 2.8, p = 0.003), field of view (4.1 > 2.8, p < 0.001), and instrument visualization (3.9 > 3.2, p = 0.044). Rater agreement in each domain demonstrated statistically significant concordance (p < 0.05). CONCLUSIONS: We present strong evidence for construct validity and interrater reliability of ARCS. 
Our study shows that learning curves for some console skills plateau faster than others. Therefore, ARCS may be more useful than GEARS for evaluating distinct console skills. Future studies will examine why some domains did not adequately differentiate between subject groups, as well as applications for intraoperative use.
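The abstract reports statistically significant rater concordance in every skill domain. For ordinal Likert-scale ratings from multiple blinded raters, a coefficient of concordance such as Kendall's W (with a tie correction, since Likert scores produce many ties) is one common way to quantify this kind of agreement. The sketch below is illustrative only and is not the authors' actual analysis; the function name and the example data are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings):
    """Kendall's coefficient of concordance with tie correction.

    ratings: array of shape (n_subjects, n_raters); higher = better.
    Returns W in [0, 1], where 1 means perfect rater agreement.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, m = ratings.shape
    # Rank each rater's scores across subjects (ties get average ranks).
    ranks = np.column_stack([rankdata(ratings[:, j]) for j in range(m)])
    # Sum of ranks per subject, and squared deviations from the mean sum.
    R = ranks.sum(axis=1)
    S = ((R - R.mean()) ** 2).sum()
    # Tie correction term, summed over raters.
    T = 0.0
    for j in range(m):
        _, counts = np.unique(ratings[:, j], return_counts=True)
        T += (counts**3 - counts).sum()
    return 12.0 * S / (m**2 * (n**3 - n) - m * T)

# Hypothetical scores: 5 subjects rated 1-5 by 3 raters in one domain.
scores = np.array([
    [1, 2, 1],
    [2, 2, 3],
    [3, 3, 3],
    [4, 5, 4],
    [5, 4, 5],
])
print(round(kendalls_w(scores), 3))
```

A W near 1 with a significant p-value (e.g., from a chi-square approximation on m(n-1)W) would support the interrater-reliability claim; the study's actual concordance statistic is not specified in the abstract.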