| Literature DB >> 34629352 |
Yasuo Murai, Shun Sato, Atsushi Tsukiyama, Asami Kubota, Akio Morita.
Abstract
The increase in minimally invasive surgery has led to a decrease in surgical experience. To date, there is only limited research examining whether skills are evaluated objectively and equally in simulation training, especially in microsurgery. The purpose of this study was to analyze the objectivity and equality of simulation evaluation results conducted in a contest format. A nationwide recruitment process was conducted to select study participants, who were drawn from a pool of qualified physicians with less than 10 years of experience. The simulation procedure consisted of incising a 1 mm thick blood vessel and suturing it with a 10-0 thread under a microscope. Initially, we planned to have the neurosurgical supervisors score the simulation procedure by direct observation. However, due to COVID-19, some study participants were unable to attend, requiring some simulation procedures to be scored by video review. A total of 14 trainees participated in the study. The Cronbach's alpha coefficient among the scorers was 0.99, indicating a strong correlation. There was no statistically significant difference between the scores from the video review and direct observation judgments. There was a statistically significant difference (p <0.001) between the scores for some criteria. For the eight criteria, individual scorers assigned scores in a consistent pattern; however, this pattern differed between scorers, indicating that some scorers were more lenient than others. The results indicate that both video review and direct observation are highly objective techniques for evaluating simulation procedures.
Keywords: microsurgery; objective assessment; skills; techniques
Mesh:
Year: 2021 PMID: 34629352 PMCID: PMC8666297 DOI: 10.2176/nmc.oa.2021-0191
Source DB: PubMed Journal: Neurol Med Chir (Tokyo) ISSN: 0470-8105 Impact factor: 1.742
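The record reports a Cronbach's alpha of 0.99 among the scorers. How the study laid out its data for this calculation is not stated here, but alpha is conventionally computed from a subjects-by-raters score matrix using the standard formula. A minimal sketch with illustrative data (not the study's raw scores):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_raters) score matrix.

    alpha = k/(k-1) * (1 - sum of per-rater variances / variance of totals)
    """
    n, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of per-subject totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 4 trainees scored by 3 judges (not the study's data)
scores = np.array([
    [4, 5, 4],
    [3, 3, 3],
    [5, 5, 4],
    [2, 2, 3],
])
print(round(cronbach_alpha(scores), 3))  # → 0.916
```

The ratio of variances is unchanged whether sample (`ddof=1`) or population (`ddof=0`) variances are used, as long as the choice is consistent.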
Fig. 1 The review video consisted of a video of the microscope screen and a video of the surgeon's entire body taken from the side and back of the surgeon.
Fig. 2 A view of the venue at the actual contest.
Results of the video review (upper) and in-person (lower) contests
| Final Rank | Judge A | Rank | Judge B | Rank | Judge C | Rank | Judge D | Rank | Judge E | Rank | Final Total score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 39 | 1 | 32 | 1 | 27 | 3 | 39 | 2 | 30 | 3 | 167 |
| 2 | 37 | 2 | 30 | 4 | 26 | 4 | 37 | 3 | 32 | 1 | 162 |
| 3 | 27 | 7 | 32 | 1 | 33 | 1 | 40 | 1 | 25 | 6 | 157 |
| 4 | 31 | 4 | 25 | 6 | 29 | 2 | 35 | 6 | 29 | 4 | 149 |
| 5 | 29 | 6 | 25 | 6 | 25 | 5 | 32 | 7 | 32 | 1 | 143 |
| 5 | 31 | 4 | 28 | 5 | 21 | 6 | 36 | 5 | 27 | 5 | 143 |
| 7 | 37 | 2 | 24 | 8 | 18 | 8 | 37 | 3 | 23 | 7 | 139 |
| 8 | 22 | 8 | 31 | 3 | 21 | 6 | 32 | 7 | 20 | 8 | 126 |

| Final Rank | Judge F | Rank | Judge G | Rank | Judge H | Rank | Judge I | Rank | Judge J | Rank | Final Total score |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 33 | 1 | 32 | 1 | 33 | 1 | 33 | 1 | 39 | 1 | 170 |
| 2 | 31 | 2 | 27 | 2 | 22 | 3 | 26 | 4 | 36 | 3 | 142 |
| 3 | 24 | 6 | 27 | 2 | 21 | 5 | 27 | 2 | 37 | 2 | 136 |
| 4 | 26 | 3 | 21 | 4 | 23 | 2 | 27 | 2 | 33 | 5 | 130 |
| 5 | 25 | 4 | 20 | 5 | 18 | 6 | 24 | 5 | 35 | 4 | 122 |
| 6 | 26 | 3 | 20 | 5 | 22 | 3 | 21 | 6 | 33 | 5 | 122 |
Results of the video review broken down by evaluation criteria (Judges A–E), and results of the in-person contest broken down by evaluation criteria (Judges F–J)
Values are mean (SE).

| Judge | Posture | Microscope | Tremor | Cut | Needle | Suture | Stitch | Knot | Total |
|---|---|---|---|---|---|---|---|---|---|
| A | 4.38 (0.18) | 4.13 (0.35) | 3.38 (0.53) | 4.00 (0.33) | 4.25 (0.37) | 4.13 (0.35) | 3.38 (0.26)** | 4.00 (0.33) | 31.63 (2.04) |
| B | 3.88 (0.13) | 3.38 (0.26) | 3.75 (0.16) | 3.00 (0.42) | 3.75 (0.16) | 3.75 (0.16)* | 3.50 (0.19)* | 3.38 (0.26) | 28.38 (1.18) |
| C | 3.38 (0.18)** | 3.25 (0.25)* | 2.75 (0.25)* | 3.13 (0.30) | 3.25 (0.31) | 3.13 (0.30)** | 3.00 (0.27)** | 3.13 (0.23) | 25.00 (1.72)** |
| D | 4.13 (0.23) | 4.50 (0.19) | 4.62 (0.18) | 4.38 (0.18) | 3.88 (0.23) | 4.88 (0.13) | 5.00 (0.00) | 4.63 (0.26) | 36.00 (1.04) |
| E | 3.88 (0.13) | 3.75 (0.16) | 3.13 (0.30)** | 3.00 (0.38) | 3.25 (0.31) | 3.50 (0.27)** | 3.38 (0.32)** | 3.38 (0.26) | 27.25 (1.53)* |
| F | 3.67 (0.21)* | 3.50 (0.22)* | 3.00 (0.37) | 3.17 (0.40) | 2.67 (0.33) | 4.17 (0.31) | 4.17 (0.17) | 3.17 (0.31) | 27.50 (1.48) |
| G | 3.57 (0.20)* | 3.29 (0.18)** | 2.86 (0.26) | 3.29 (0.29) | 3.00 (0.38) | 2.71 (0.29)** | 3.00 (0.38) | 3.14 (0.26) | 24.86 (1.61)* |
| H | 3.29 (0.29)** | 3.29 (0.36)** | 3.14 (0.40) | 2.71 (0.29)* | 2.43 (0.30)** | 2.71 (0.36)** | 3.00 (0.38) | 3.00 (0.53) | 23.57 (1.81)** |
| I | 3.43 (0.30)** | 3.57 (0.30)* | 3.14 (0.26) | 3.14 (0.26) | 3.00 (0.22) | 3.43 (0.30) | 3.43 (0.30) | 3.43 (0.20) | 26.57 (1.39) |
| J | 5.00 (0.00) | 5.00 (0.00) | 4.00 (0.38) | 4.00 (0.22) | 4.29 (0.29) | 4.71 (0.18) | 4.43 (0.30) | 4.43 (0.30) | 35.86 (0.88) |
*p <0.05; **p <0.01.
SE: standard error
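Assuming the SE values above are the usual standard error of the mean (sample standard deviation divided by √n), they can be reproduced from each judge's per-trainee criterion scores. A minimal sketch:

```python
import numpy as np

def mean_and_se(x) -> tuple[float, float]:
    """Mean and standard error of the mean (sample SD / sqrt(n))."""
    x = np.asarray(x, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

# Hypothetical per-trainee scores (1–5 scale) for one judge on one criterion
mean, se = mean_and_se([4, 5, 3, 4, 4, 5, 4, 5])
```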
Partial correlation coefficients between the individual judges (A–J) for the video review (upper) and in-person contest (lower)
| | B | C | D | E |
|---|---|---|---|---|
| A | 0.9579 | 0.9433 | 0.9791 | 0.9754 |
| B | (–) | 0.9731 | 0.9877 | 0.9656 |
| C | | (–) | 0.9757 | 0.97 |
| D | | | (–) | 0.9757 |
| E | | | | (–) |

| | G | H | I | J |
|---|---|---|---|---|
| F | 0.9812 | 0.9799 | 0.9851 | 0.988 |
| G | (–) | 0.9808 | 0.9906 | 0.9848 |
| H | | (–) | 0.9847 | 0.9753 |
| I | | | (–) | 0.9907 |
| J | | | | (–) |
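The record does not describe how these partial correlation coefficients were computed. One standard route, sketched below with hypothetical data, is the precision-matrix identity: invert the covariance matrix of the judges' scores, then normalize its off-diagonal entries.

```python
import numpy as np

def partial_correlations(scores: np.ndarray) -> np.ndarray:
    """Pairwise partial correlations between columns (judges),
    each controlling for all remaining columns.

    Uses the precision-matrix identity:
        r_ij·rest = -P_ij / sqrt(P_ii * P_jj),  where P = inv(cov)
    """
    precision = np.linalg.inv(np.cov(scores, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Hypothetical example: 8 trainees scored by 3 judges (not the study's data)
rng = np.random.default_rng(42)
base = rng.normal(size=(8, 1))
X = base + 0.3 * rng.normal(size=(8, 3))  # judges agree up to noise
print(np.round(partial_correlations(X), 3))
```

The result is invariant to rescaling the columns, so inverting the covariance matrix gives the same partial correlations as inverting the correlation matrix.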