Madoka Higuchi, Takashige Abe, Kiyohiko Hotta, Ken Morita, Haruka Miyata, Jun Furumido, Naoya Iwahara, Masafumi Kon, Takahiro Osawa, Ryuji Matsumoto, Hiroshi Kikuchi, Yo Kurashima, Sachiyo Murai, Abdullatif Aydin, Nicholas Raison, Kamran Ahmed, Muhammad Shamim Khan, Prokar Dasgupta, Nobuo Shinohara.
Abstract
OBJECTIVES: To develop a wet laboratory training model for learning core laparoscopic surgical skills and to evaluate learners' competency levels outside the operating room.
Keywords: animal organs; laparoscopic surgery; simulation; surgical education; wet lab training
Mesh:
Year: 2020 PMID: 32743896 PMCID: PMC7589398 DOI: 10.1111/iju.14315
Source DB: PubMed Journal: Int J Urol ISSN: 0919-8172 Impact factor: 3.369
Fig. 1 Photographs of the simulation training. (a) Training view. Candidates were informed by email of training slots prepared on the Doodle website (a calendar tool for time management and meeting coordination). Participants then voluntarily booked convenient slots according to their schedules and attended the training. (b,c) Task 1 (tissue dissection around the aorta). (d) Task 2 (tissue dissection and division of the renal artery). In tasks 1 and 2, laparoscopic scissors (Scissors Metzenbaum; Olympus, Tokyo, Japan), laparoscopic grasping forceps (CLICKline CROCE‐OLMI Grasping Forceps; Karl Storz, Tokyo, Japan) and a laparoscopic clip applier (Hem‐o‐lok Endoscopic Appliers Large; Teleflex, Tokyo, Japan) were used. (e) Task 3 (renal parenchymal closure). In task 3, laparoscopic needle holders were used (KOH Macro Needle Holder, ratchet position right, jaws curved to left, and KOH Macro Needle Holder, ratchet position left, jaws curved to right; Karl Storz). (f) Box trainer. (g) Setting of the aorta in task 1. (h) Setting of the kidney in task 2. (i) Setting of the kidney in task 3.
Assessment sheet of ALL
| Tasks 1 and 2 | | | | | |
|---|---|---|---|---|---|
| Domain | 1 | 2 | 3 | 4 | 5 |
| Traction | Usually not performed | | Performed half of the time | | Tissue is always stretched out under appropriate tension to visualize connective tissue surrounding the vessels |
| Blunt dissection | Usually not performed | | Performed half of the time | | Tissue is always dissected (blunt dissection) in a safe manner under direct visualization |
| Sharp dissection | Usually not performed | | Performed half of the time | | Tissue is always dissected (sharp dissection) in a safe manner under direct visualization |
| Skeletonization of vascular structure | Usually not performed | | Performed half of the time | | Vascular structure is always dissected sufficiently for subsequent ligation by Hem‐o‐lok (Weck‐lok) |
| Applying Hem‐o‐lok (Weck‐lok) | Usually not performed | | Performed half of the time | | Hem‐o‐lok (Weck‐lok) is always placed perpendicular to the vessel, and closed safely under direct visualization |
Summary of participants’ backgrounds
| | |
|---|---|
| Age (years) | Median 29 (range 20–52) |
| Sex (male/female) | 40/14 |
| Background | |
| Urologists | |
| Medical students | |
| Junior residents | |
| Experience of laparoscopic surgery | |
| Experts (≥50 surgeries) | |
| Intermediates (10–49) | |
| Novices (0–9) | |
| Endoscopic surgical skill qualification (yes/no) | 10/44 |
| Experience of simulation training (yes/no) | 36/18 |
Summary of interrater reliability of GOALS and ALL scores, and intrarater reliability of ALL scores
| | Task 1 | Task 2 | Task 3 |
|---|---|---|---|
| Interrater reliability | | | |
| ICC (2, 1), GOALS score | 0.745 | 0.718 | 0.857 |
| ICC (2, 1), ALL score | 0.692 | 0.693 | 0.844 |
Interrater reliability is the degree of agreement among raters.
Intrarater reliability is the degree of agreement among assessments carried out by a single rater.
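The reliability coefficients above are ICC(2,1) values: a two-way random-effects, absolute-agreement, single-rater intraclass correlation (Shrout and Fleiss). As a minimal sketch of how such values are computed (not the authors' analysis code; the rater-score matrix here is hypothetical), the coefficient can be derived from the two-way ANOVA mean squares:

```python
import numpy as np

def icc2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    scores: array of shape (n_subjects, k_raters), one score per cell.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between raters
    sse = np.sum((scores - row_means[:, None]
                  - col_means[None, :] + grand) ** 2)      # residual
    mse = sse / ((n - 1) * (k - 1))
    # Shrout & Fleiss ICC(2,1)
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: 3 performances scored by 2 raters
print(icc2_1([[3, 3], [4, 4], [5, 5]]))  # perfect agreement → 1.0
```

Values near 0.7 or above, as in the table, are conventionally read as good agreement for single raters.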
Fig. 2GOALS scores at the time of participants’ first training session divided by previous experience of laparoscopic surgery. (a) Task 1. (b) Task 2. (c) Task 3. There were significant differences in GOALS scores among the three groups in all three tasks.
Fig. 3ALL scores at the time of participants’ first training session divided by previous experience of laparoscopic surgery. (a) Task 1. (b) Task 2. (c) Task 3. There were significant differences in ALL scores among the three groups in all three tasks.
Fig. 4NASA‐TLX scores at the time of participants’ first training session divided by previous experience of laparoscopic surgery. (a) Task 1. (b) Task 2. (c) Task 3. Higher NASA‐TLX scores were observed in novices, and there were significant differences in tasks 1 (P = 0.0004) and 2 (P = 0.0002), and marginal differences in task 3 (P = 0.0745) among the three groups.
Fig. 5ROC curves of tasks 1, 2 and 3 for classifying the ESSQ qualification status based on GOALS score. All three tasks showed good separability.
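The "separability" in Fig. 5 corresponds to the area under the ROC curve when a single continuous score (here, GOALS) is used to classify a binary status (ESSQ qualified or not). As a generic sketch of that computation (not the authors' code; the example scores and labels are hypothetical), the AUC equals the normalized Mann–Whitney U statistic:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC of a continuous score for a binary label, via the
    Mann-Whitney U statistic (ties counted as half-correct)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Fraction of (positive, negative) pairs ranked correctly
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical example: qualified participants (label 1) score higher
print(auc_from_scores([18, 21, 23, 12, 14], [1, 1, 1, 0, 0]))  # → 1.0
```

An AUC near 1.0 means the score cleanly separates the two groups; 0.5 means no better than chance.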
Fig. 6Tissue similarity and effectiveness of each task evaluated by the experts and intermediates. Most of the aspects were rated as above average, except for fat tissue. Both the intermediates and experts rated all three tasks as being effective for training.
Fig. 7Learning curves of task 1 in the 15 participants who underwent the training multiple times. (a) GOALS score. (b) ALL score. (c) NASA‐TLX score. No constant trend was observed among the participants, although an increasing tendency in GOALS and ALL scores, and a decreasing tendency in NASA‐TLX scores were observed in several participants.