Catherine Eley1, Neil D Hawkes2, Richard J Egan3,4, David B Robinson1, Chris Brown1,3, Sam Murray5, Keith Siau6, Wyn Lewis1.
Abstract
Background and study aims: Virtual reality endoscopic simulation training has the potential to expedite competency development in novice trainees. However, simulation platforms must be realistic and confer face validity. This study aimed to determine the face validity of high-fidelity virtual reality simulation (EndoSim, Surgical Science, Gothenburg) and to establish benchmark metrics to guide the development of a Simulation Pathway to Improve Competency in Endoscopy (SPICE).

Methods: A pilot cohort of four experts rated simulated exercises (Likert scale score 1-5) and, following iterative development, 10 experts completed 13 simulator-based endoscopy exercises, amounting to 859 total metric values.

Results: Expert metric performance demonstrated equivalence (P = 0.992). In contrast, face validity of each exercise varied among experts (median 4 [interquartile range (IQR) 3-5], P < 0.003), with Mucosal Examination receiving the highest scores (median 5 [IQR 4.5-5], P = 1.000) and Loop Management and Intubation exercises receiving the lowest scores (median 3 [IQR 1-3], P < 0.001 and P = 0.004, respectively). The provisional validated SPICE comprised 13 exercises, with pass marks and allowance buffers defined by median and IQR expert performance.

Conclusions: EndoSim face validity was very good for early scope-handling skills, but more advanced competencies and the translation of acquired skills to clinical practice require further research within an established training program. The existing training deficit, with the superadded adverse effects of the COVID-19 pandemic, makes this initiative an urgent priority.

© The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon.
(https://creativecommons.org/licenses/by-nc-nd/4.0/)

Year: 2022 | PMID: 36118643 | PMCID: PMC9473829 | DOI: 10.1055/a-1882-4246
Source DB: PubMed | Journal: Endosc Int Open | ISSN: 2196-9736
Fig. 1 Simcart: table-mounted, height-adjustable EndoSim Virtual Reality (VR) endoscopy simulator with integrated haptic technology. (EndoSim: Surgical Science Sweden AB).
Variation in expert Likert scores related to pilot exercises.

| Exercise | Median [IQR] | *P* value |
|---|---|---|
| Mucosal examination | 5 [4.5–5] | 1.000 |
| Examination | 4.5 [4–5] | 0.686 |
| Knob handling | 4.5 [4–5] | 0.686 |
| Visualize colon 1 | 4 [4–4.5] | 0.343 |
| Scope handling | 4 [4–4.5] | 0.343 |
| Navigation skill | 4 [3.75–4] | 0.057 |
| Retroflexion | 4 [3.5–4] | 0.057 |
| Photo and Probing | 3.5 [2–5] | 0.486 |
| Navigation tip/torque | 3.5 [2.5–4.5] | 0.200 |
| ESGE photo | | |
| Loop management 1 | | |
| Loop management 2 | | |
ESGE, European Society of Gastrointestinal Endoscopy.
P values were generated using Mann-Whitney U test to compare Likert score per exercise against the highest rated (Mucosal Examination).
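The per-exercise comparison described in the footnote can be sketched as follows. This is a minimal illustration of a two-sided Mann-Whitney U test between two sets of Likert ratings, using SciPy; the rating values below are hypothetical, not the study's raw data.

```python
# Hedged sketch: comparing the Likert ratings of one exercise against the
# highest-rated exercise (Mucosal Examination) with a two-sided
# Mann-Whitney U test, as done per exercise in the tables here.
# The score lists are illustrative assumptions, not the study's raw data.
from scipy.stats import mannwhitneyu

mucosal_exam = [5, 5, 4, 5]      # hypothetical expert Likert scores
loop_management = [1, 3, 2, 3]   # hypothetical expert Likert scores

u_stat, p_value = mannwhitneyu(
    mucosal_exam, loop_management, alternative="two-sided"
)
print(u_stat, round(p_value, 3))
```

With every rating in the first group exceeding every rating in the second, the U statistic takes its maximum (16 for two groups of four) and the p-value falls below 0.05, mirroring how the lowest-rated exercises separated from Mucosal Examination in the table above.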
Variation in expert Likert scores across validation study exercises.

| Exercise | Median [IQR] | *P* value |
|---|---|---|
| Visualize colon 1 | 4.5 [4–5] | 1.00 |
| Visualize colon 2 | 4.5 [4–5] | 1.00 |
| Scope handling | 4.5 [3–5] | 0.796 |
| Examination | 4 [4–5] | 0.796 |
| Navigation skill | 4 [4–5] | 0.853 |
| Mucosal examination | 4 [4–5] | 0.739 |
| Knob handling | 4 [4–5] | 0.529 |
| Photo and probing | 4 [3.5–5] | 0.579 |
| Retroflexion | 4 [2–5] | 0.218 |
| Navigation tip/torque | 3.75 [3–4] | 0.105 |
| ESGE photo | 3.75 [3–4] | 0.105 |
| Intubation case 3 | | |
| Loop management | | |
ESGE, European Society of Gastrointestinal Endoscopy.
P values were generated using Mann-Whitney U test to compare Likert score per exercise against the highest rated (Visualize Colon 1).
Fig. 2 Evaluation of each pilot exercise by 4 expert endoscopists (Likert scores: 1 = very poor to 5 = very good).
Variation in metric values related to performance of 10 experts.

| DOPS category | Metric | Unit | Median [IQR] | *P* value |
|---|---|---|---|---|
| Scope handling | Colonoscope rotation | Degrees | 2758 [1540–4142] | 0.912 |
| | Slot collisions | Number | 3 [2–5] | 0.437 |
| | Insertion path length | mm | 1114 [883–1664] | 0.434 |
| | Targets photographed | % | 100 [100–100] | 1.000 |
| | All photo targets complete | Yes/no | 1 [1–1] | 0.437 |
| | Deviations from 45 degrees | Number | 3 [3–12] | 0.437 |
| Angulation tip control | Missed target | Number | 0 [0–1] | 0.437 |
| | Knob rotation left/right | Degrees | 240 [63–964] | 0.026 |
| | Knob rotation up/down | Degrees | 1622 [846–3655] | 0.268 |
| | Probed outside of target | Number | 3 [2–6] | 0.437 |
| | Targets probed | % | 100 [100–100] | 1.000 |
| | Into trachea | Yes/no | 0 [0–0] | 1.000 |
| | Collisions against mucosa | Number | 5 [4–9] | 0.038 |
| | Average photo quality | % | 100 [95–100] | 0.437 |
| | Tip path length | mm | 3102 [2383–6266] | 0.955 |
| | Targets aligned | % | 100 [100–100] | 1.000 |
| | Red out | Number | 0 [0–1] | 0.437 |
| | Time in red out | Seconds | 0 [0–1.25] | 0.437 |
| Pace and Progress | Total time | Seconds | 163 [101–227] | 0.069 |
| | Time to papilla | Seconds | 62 [44–74] | 0.187 |
| Visualisation | Targets seen | % | 100 [100–100] | 0.437 |
| | Targets inspected | % | 95 [90–100] | 0.126 |
| | Lumen seen | % | 100 [100–100] | 0.037 |
| | Lumen inspected | % | 99 [98–99] | 0.109 |
| | Stomach visualized | % | 97 [93–99] | 0.259 |
| | Duodenum visualized | % | 46 [42–49] | 0.365 |
| | Papilla reached | Yes/no | 1 [1–1] | 1.000 |
| Patient comfort | Max torque | Newton | 0.3 [–0.1–3.4] | 0.437 |
| | Max insertion force | Newton | 7.5 [2.9–19.3] | 0.437 |
| Miscellaneous | Tool unprotected | mm | 1212 [277–3602] | 0.849 |
| | Side view assistance | Seconds | 0 [0–11] | 0.027 |
| | Net insufflation | | 0 [0–0] | 1.000 |
| | Time in excess insufflation | Seconds | 0 [0–0] | 0.423 |
| | Percentage of time insufflation | % | 1.5 [0–7] | 0.075 |
| | Excess insufflations | Number | 0 [0–0] | 0.423 |
DOPS, direct observation of procedural skills; IQR, interquartile range.
Fig. 3 Evaluation of each exercise by 10 expert endoscopists (Likert scores: 1 = very poor to 5 = very good).