Hugo Layard Horsfall1,2, Zeqian Mao1, Chan Hee Koh1,2, Danyal Z Khan1,2, William Muirhead1,2, Danail Stoyanov2, Hani J Marcus1,2.
Abstract
Background: The exoscope heralds a new era of optics in surgery. However, there is limited quantitative evidence describing and comparing its learning curve with that of the operating microscope.
Keywords: education; exoscope; innovation; learning curve; microscope; neurosurgery; surgery
Year: 2022 PMID: 35903256 PMCID: PMC9316615 DOI: 10.3389/fsurg.2022.920252
Source DB: PubMed Journal: Front Surg ISSN: 2296-875X
Figure 1. Microsurgical grape dissection task “Star’s the limit” setup. (A) Microscope trial. (B) “Star’s the limit.” Note that the grape has a homogeneous shape; the star is drawn on by stencil, and the grape is secured with needles to ensure a constant position across trials.
Microsurgical grape dissection task “Star’s the limit” grading rubric.
| Items | Descriptions |
|---|---|
| Time to complete | The time to completion (seconds) is recorded, up to 5 min for each repetition; beyond that, participants are told to stop |
| Completeness of the dissected star | Defined as whether a star-shaped piece of grape skin is obtained: 0 for failure; 1 for success |
| Clean star with no flesh – “flesh score” | The dissected star needs to be “clean skin” with no flesh attached: 0 points for a lot of flesh or no star obtained; 1 point for some flesh; 2 points for no flesh |
| Edge within limit – “edge score” | Incisions need to be made within the drawn line. Both the dissected star and the remaining grape are examined; 1 point for the presence of the drawn (blackish) line on each edge. If no star is obtained, up to 10 points, since only the main grape can be assessed; if a star is obtained, up to 20 points |
| Perforation | The number of perforations made is recorded; 1 point is deducted for every perforation into the deep grape flesh |
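The rubric components above can be captured in a short scoring helper. This is a minimal sketch assuming straightforward bookkeeping of the rubric; the function name and the dictionary-of-components return shape are illustrative, and the study's exact weighting into the 0–100 composite score is not reproduced here:

```python
def rubric_scores(time_s, star_obtained, flesh_level, edges_within_line, perforations):
    """Score one repetition of the "Star's the limit" task per the grading rubric.

    time_s            -- time to completion in seconds (capped at 300 s = 5 min)
    star_obtained     -- True if a star-shaped piece of grape skin was obtained
    flesh_level       -- "lot", "some", or "none" (flesh attached to the star)
    edges_within_line -- number of edges showing the drawn (blackish) line
    perforations      -- perforations into the deep grape flesh
    """
    # Normalised time term, as used in the selective models: (1 - total time / 300)
    time_score = max(0.0, 1 - time_s / 300)
    completeness = 1 if star_obtained else 0
    # Flesh score: 0 for a lot of flesh or no star, 1 for some, 2 for none
    flesh = 0 if not star_obtained else {"lot": 0, "some": 1, "none": 2}[flesh_level]
    # Edge score: 1 point per edge within the drawn line; max 20 if the star
    # was obtained (both pieces examined), else max 10 (main grape only)
    edge = min(edges_within_line, 20 if star_obtained else 10)
    return {"time": time_score, "completeness": completeness,
            "flesh": flesh, "edge": edge, "perforation_penalty": -perforations}
```

For example, a successful dissection in 150 s with no flesh, 18 clean edges, and one perforation yields a time term of 0.5, flesh score 2, edge score 18, and a 1-point perforation deduction.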
Summary of best selective model function for discrimination between novice and expert performances.
| Model | AIC | logLik |
|---|---|---|
| (1−Total time/300) | 13.14999 | −3.574996 |
| (1−Total time/300)+Edge | 15.01496 | −3.507481 |
| (1−Total time/300)+Accuracy score | 15.05006 | −3.525028 |
| (1−Total time/300)+Clean star | 15.14999 | −3.574994 |
| (1−Total time/300)+Clean star + Edge | 16.79633 | −3.398165 |
| (1−Total time/300)+Edge + Perforations | 17.01496 | −3.507480 |
AIC, Akaike information criterion; logLik, log-likelihood.
Figure 2. Composite performance score of novice surgeons completing the microsurgical grape dissection task, with a threshold for expert performance (gray dashed line at 70; scale 0–100). Each graph represents a separate novice. In total, participants completed 20 repetitions of the task on each device consecutively. (A) Novice surgeons’ performance scores plotted against the number of trials performed, for groups starting with either the ORBEYE exoscope or the microscope. The first colored curve represents the first device, and the second colored curve represents the crossover to the second device. (B) Modeled learning curves using the modified Weibull function.
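A common parameterisation of a Weibull-type learning curve rises from a starting score toward a plateau, which matches the modeled curves described in Figure 2. The specific functional form below is an assumption for illustration (the source does not reproduce its equation), and all parameter names are hypothetical:

```python
import math

def weibull_learning(trial, plateau, gain, rate, shape):
    """Weibull-type learning curve (an assumed form, for illustration):
        score(t) = plateau - gain * exp(-(t / rate) ** shape)
    score rises from roughly (plateau - gain) at early trials toward
    'plateau' as the trial number t grows; 'rate' and 'shape' control
    how quickly and how abruptly the plateau is approached."""
    return plateau - gain * math.exp(-(trial / rate) ** shape)
```

In practice such a curve would be fitted per novice to the 20 repetitions per device (e.g. with `scipy.optimize.curve_fit`), and the fitted plateau compared against the expert threshold of 70.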
Figure 3. (A) Plateau performance of the novices starting the “Star’s the limit” task on the exoscope or the microscope. The performance score (0–100) was generated as outlined in the Methods section, with an expert threshold score of 70. (B) Significant noninferiority of novice performance on the microscope and exoscope compared against expert performance. (C) Learning rate of novices on the exoscope or the microscope. (D) Learning rate and statistical noninferiority of the exoscope compared to the microscope.
Figure 4. Subjective impression of the optical device. (A) NASA Raw Task Load Index (NASA R-TLX) score for each dimension and total workload, compared between novices across instruments. (B) Subjective questionnaire (Supplementary Material Table 2): Q1: better visualization; Q2: greater freedom of movement; Q3: more comfortable; Q4: easier to perform a task with; and Q5: prefer to use in the future.