PURPOSE: To evaluate the EyeSi™ simulator with regard to assessing competence in cataract surgery. The primary objective was to explore all simulator metrics to establish a proficiency-based test with solid evidence. The secondary objective was to evaluate whether the skill assessment was specific to cataract surgery. METHODS: We included 26 ophthalmic trainees (no cataract surgery experience), 11 experienced cataract surgeons (>4000 cataract procedures) and five vitreoretinal surgeons. All subjects completed 13 different modules twice. Simulator metrics were used for the assessments. RESULTS: The total module score on seven of the 13 modules discriminated significantly between novices and experienced cataract surgeons. The intermodule reliability coefficient was 0.76 (p < 0.001). A pass/fail level was defined from the total score on these seven modules using the contrasting-groups method. The test discriminated between novices and experienced cataract surgeons overall: 21 of 26 novices (81%) versus one of 11 experienced surgeons (9%) failed the test. The vitreoretinal surgeons scored significantly higher than the novices (p = 0.006), but not significantly lower than the experienced cataract surgeons (p = 0.32). CONCLUSION: We have established a performance test, consisting of seven modules on the EyeSi™ simulator, which possesses evidence of validity. The test is a useful and reliable tool for assessment of both cataract surgical and general microsurgical skills in vitro.
Authors: Morten la Cour; Ann Sofia Skou Thomsen; Mark Alberti; Lars Konge Journal: Graefes Arch Clin Exp Ophthalmol Date: 2019-01-15 Impact factor: 3.117
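The abstract names the contrasting-groups method for setting the pass/fail level but does not describe its computation. One common implementation fits a normal density to each group's total scores and places the standard where the two densities intersect; the sketch below follows that assumption (the function name and the fitted-normal approach are illustrative, not taken from the paper):

```python
import numpy as np

def contrasting_groups_cutoff(novice_scores, expert_scores):
    """Sketch of a contrasting-groups standard: fit a normal density to
    each group's total scores and set the pass/fail point where the two
    densities intersect (assumed implementation, not the paper's code)."""
    m1, s1 = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    m2, s2 = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    # Setting N(x; m1, s1) = N(x; m2, s2) and taking logs gives a
    # quadratic a*x^2 + b*x + c = 0 in x:
    a = 1.0 / (2 * s2**2) - 1.0 / (2 * s1**2)
    b = m1 / s1**2 - m2 / s2**2
    c = m2**2 / (2 * s2**2) - m1**2 / (2 * s1**2) - np.log(s1 / s2)
    roots = np.roots([a, b, c])  # handles a == 0 (equal SDs) as linear
    # Keep the root lying between the two group means (the cut point).
    lo, hi = sorted([m1, m2])
    for r in np.real(roots[np.isreal(roots)]):
        if lo <= r <= hi:
            return float(r)
    return float((m1 + m2) / 2)  # fallback if densities barely overlap
```

With equal group standard deviations the quadratic degenerates to a line and the cutoff is simply the midpoint of the means, e.g. `contrasting_groups_cutoff([1, 2, 3], [7, 8, 9])` returns `5.0`.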