
Impact of Simulation Training on Diagnostic Arthroscopy Performance: A Randomized Controlled Trial.

Kevin C Wang1, Eamon D Bernardoni2, Eric J Cotter2, Brian J Cole3, Nikhil N Verma3, Anthony A Romeo3, Charles A Bush-Joseph3, Bernard R Bach3, Rachel M Frank4.   

Abstract

PURPOSE: To determine the impact of training on a virtual reality arthroscopy simulator on both simulator and cadaveric performance in novice trainees.
METHODS: A randomized controlled trial of 28 participants without prior arthroscopic experience was conducted. All participants received a demonstration of how to use the ArthroVision Virtual Reality Simulator and were then randomized to receive either no training (control group, n = 14) or a fixed protocol of simulation training (n = 14). All participants took a pretest on the simulator, completing 9 tasks ranging from camera-steadying tasks to probing structures. The training group then trained on the simulator (1 time per week for 3 weeks). At week 4, all participants completed a 2-part post-test, including (1) performing all tasks on the simulator and (2) performing a diagnostic arthroscopy on a cadaveric knee and shoulder. An independent, blinded observer assessed the performance on diagnostic arthroscopy using the Arthroscopic Surgical Skill Evaluation Tool scale. To compare differences between non-normally distributed groups, the Mann-Whitney U test was used. An independent-samples t test was used for normally distributed groups. The Friedman test with pair-wise comparisons using Bonferroni correction was used to compare scores within groups at multiple time points. Bonferroni adjustment was applied as a multiplier to the P value; thus, the α level remained consistent. Significance was defined as P < .05.
RESULTS: In both groups, all tasks except task 5 (in which completion time was relatively fixed) showed a significant degree of correlation between task completion time and other task-specific metrics. A significant difference between the trained and control groups was found for post-test task completion time scores for all tasks. Qualitative analysis of box plots showed minimal change after 3 trials for most tasks in the training group. There was no statistical correlation between the performance on diagnostic arthroscopy on either the knee or shoulder and simulation training, with no difference in Arthroscopic Surgical Skill Evaluation Tool scores in the training group compared with controls.
CONCLUSIONS: Our study suggests that an early ceiling effect is shown on the evaluated arthroscopic simulator model and that additional training past the point of proficiency on modern arthroscopic simulator models does not provide additional transferable benefits on a cadaveric model. LEVEL OF EVIDENCE: Level I, randomized controlled trial.
© 2019 Published by Elsevier on behalf of the Arthroscopy Association of North America.

Year:  2019        PMID: 32266340      PMCID: PMC7120830          DOI: 10.1016/j.asmr.2019.07.002

Source DB:  PubMed          Journal:  Arthrosc Sports Med Rehabil        ISSN: 2666-061X


Graduate medical education, specifically surgical education, has traditionally been rooted in an apprenticeship model. The “see one, do one, teach one” approach has guided education in the operating room and has led to the development of surgical education as it stands today. However, there are rising concerns regarding burnout—a condition correlated with longer work hours, a skewed work-life balance, and potentially negative patient outcomes—among orthopaedic surgery trainees.1, 2, 3 Given these concerns, the traditional method of technical training—emphasizing repetition and exposure—has come under scrutiny as educators attempt to improve on this framework. These changing attitudes have been reflected in recent changes in work-hour restrictions and formal surgical skill training programs put forth by the Accreditation Council for Graduate Medical Education.

Changing attitudes toward surgical education have developed in parallel with new educational tools. The development of modern computers and virtual reality systems offers exciting opportunities to augment surgical training. Specifically, these technologies enable development of technical skills in a controlled, low-risk environment that allows trainees to break down complex techniques into digestible components that can be practiced to the point of competency. New training models enable mastery of these component skills to allow trainees to perform complex tasks more effectively, and these models allow measurement of stepwise procedural proficiency.
Investigations showing the efficacy of proficiency-based progression in the attainment of scope-based skills (arthroscopy and laparoscopy) support this theory by showing improved performance of trainees in proficiency-based progression programs compared with controls.5, 6 Previous investigations have generally supported some degree of skill transfer from practice on arthroscopic simulator models to performance on either cadaveric models or live arthroscopy.7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 However, some investigations have challenged the idea that simulator training provides generalizable skill acquisition.19, 20 In addition, the questions of how much simulator training is necessary and if there is an upper limit to the benefits of simulator training hold particular importance as new simulator technologies are being integrated into residency training programs. The purpose of this study was to determine the impact of training on a virtual reality arthroscopy simulator on both simulator and cadaveric performance in novice trainees. We hypothesized that training on a simulator would result in improvements in arthroscopy performance on a cadaveric specimen and secondarily hypothesized that training on the simulator would show a ceiling effect.

Methods

Participants

After institutional review board approval was received (No. 16082301), a total of 28 novice trainees (preclinical medical and premedical students) at a single institution were invited to participate. All participants gave informed consent and were recruited on a voluntary basis without compensation. Participation status did not influence academic standing. Subjects were enrolled between November and December 2016. Initial testing and training were conducted from December 2016 to January 2017, and post-test data collection occurred in February 2017. Subjects were excluded if they had any previous arthroscopy experience or previous formal arthroscopy simulator training. Demographic information, including subject age, sex, video game use, and handedness, was collected via subject-reported surveys prior to testing; video game use was rated on a 5-point Likert scale. There were no changes to the inclusion criteria after trial commencement.

Study Design

The study was a single-blinded, prospective, randomized controlled trial with a parallel-group design. All subjects underwent an initial simulator pretest on an ArthroVision Virtual Reality Simulator (Swemac, Linköping, Sweden). Subjects were then allocated to either a simulator training group (n = 14) or a no-training group (control group, n = 14) by the primary author (K.C.W.) using a computer-based random number generator set to create 2 equal-sized groups. Sample size was limited by the number of volunteers and is comparable with previously published trials, but no power analysis was conducted.6, 16 Allocation was concealed by the computer from participants and investigators until after completion of the pretest. After completion of the training curriculum, all subjects from both groups participated in an additional, identical simulator session (simulator post-test). After this post-test, all subjects were shown two 5-minute videos of the appropriate technique for diagnostic arthroscopy of each joint and given a 5-minute overview of relevant anatomy prior to performing a 5-minute diagnostic arthroscopy on a cadaveric knee and shoulder. Simulation training and post-test data collection were conducted at a central location with no access to the simulator between sessions.
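As a rough illustration of the allocation step described above: the study specifies only a "computer-based random number generator" creating 2 equal-sized groups, so the shuffle-based 1:1 scheme below is an assumption, not the authors' actual procedure.

```python
import random

def allocate(participant_ids, seed=None):
    """Shuffle participants and split them into two equal-sized groups
    (simulator training vs. no-training control), as in a 1:1
    parallel-group design. Hypothetical sketch, not the study's code."""
    rng = random.Random(seed)
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"training": shuffled[:half], "control": shuffled[half:]}

groups = allocate(range(28), seed=42)
assert len(groups["training"]) == len(groups["control"]) == 14
```

Seeding is shown only to make the sketch reproducible; concealment in the study was handled by revealing the assignment only after the pretest.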

Simulator Training

The ArthroVision Virtual Reality Simulator was used for training and testing (Fig 1). Each identical simulator session consisted of 9 tasks. A degree of construct validity has previously been established for this simulator through its relations to other variables: it has shown the ability to discriminate between users at different levels of surgical training. Session scores were obtained from the default metrics provided by the simulator. The tasks and measurement metrics collected automatically by the simulator program are shown in Table 1. Completion time was used to compare between trials and subjects because it was the only metric reported consistently across tasks.
Fig 1

ArthroVision simulator setup. All simulations used the default settings and manufacturer-determined computerized scoring metrics.

Table 1

Simulator Tasks and Metrics

Task | Description | Metrics Recorded by Simulator
1 | Steadying camera and telescoping on a target | Time, path, path in focus
2 | Performing periscopic movements around targets in a circle | Time, path
3 | Tracking a moving target | Time, time out of focus, distance deviation, centering deviation
4 | Achieving deliberate linear scope motion with the camera | Time, path
5 | Tracking and probing a moving target | Time out of focus, manipulating time out of focus, time touching track, distance and centering deviation
6 | Performing periscopic movements around a target | Time, path, telescoping path, XY path, view direction deviation
7 | Measuring with a probe | Time, path, size deviation
8 | Steadying camera, telescoping, and probing | Time, time out of focus, scope and probe path, manipulating time out of focus
9 | Steadying camera, telescoping, and probing a different position | Time, time out of focus, scope and probe path, manipulating time out of focus
The training group performed a simulator session once weekly for 3 consecutive weeks. Each simulator session consisted of a single run through each of the 9 tasks. The control group had no access to the simulator. Both groups completed a post-test simulator session after the training period was completed.

Cadaveric Post-Test

After the training period, all subjects completed a cadaveric post-test on both a fresh-frozen human knee and shoulder. Subjects were shown instructional videos depicting the steps of diagnostic arthroscopy and were provided a diagram of anatomic landmarks. Basic information on the assessment criteria and a checklist of tasks were provided during the instructional period and while subjects were performing the procedure (Appendix 1). All videos were prerecorded and narrated by the senior author (R.M.F.). The primary outcome collected from the cadaveric post-test was the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score, a global rating scale for arthroscopic skill. All procedures were recorded using the scope camera, blinded by removing any subject-identifying features, and graded using the ASSET by a fellowship-trained orthopaedic sports surgeon who was blinded to group allocation (R.M.F.). There were no changes in trial outcomes after the trial was commenced. After cadaveric post-test collection, the trial was completed.

Statistical Analysis

All statistical analysis was performed using SPSS software (IBM, Chicago, IL). The Shapiro-Wilk test for normality was used to assess the normality of the distribution of variables of interest. For non–normally distributed variables, the Mann-Whitney U test was applied, and for normally distributed points, an independent-samples t test was used. To compare nominal variables, the Fisher exact test was used. The Wilcoxon signed rank test was used to compare pretest scores with post-test scores within groups. The Friedman test was run to determine if there were differences in scores during the pretest, post-test, and training period in the simulator-trained group. Pair-wise comparisons were performed with Bonferroni correction for multiple comparisons. In the SPSS statistical package, this is performed by multiplying the raw P value by the appropriate Bonferroni adjustment rather than adjusting the α level. To assess the relation between test scores and subject demographic traits, a Spearman rank order correlation was run. An independent-samples t test was used to compare the performance on final cadaveric testing between the simulator-trained and control groups.
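As a minimal pure-Python illustration of two of the computations described above (the authors used SPSS; this sketch is not their code): the Mann-Whitney U statistic for two independent samples, and the multiply-the-P-value form of the Bonferroni correction.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: count the pairs where a_i > b_j,
    with half-credit for ties. Statistical software then converts
    U into a P value."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

def bonferroni_adjust(p_values):
    """SPSS-style Bonferroni adjustment: multiply each raw P value
    by the number of comparisons (capped at 1.0), leaving the
    alpha level itself unchanged at .05."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# When every value in one group exceeds every value in the other,
# U equals len(a) * len(b).
assert mann_whitney_u([5, 6, 7], [1, 2, 3]) == 9.0
```

The adjusted values from `bonferroni_adjust` are compared against the unchanged alpha of .05, matching the description in the Methods.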

Results

Twenty-eight participants completed the study, with 14 participants in each group. Except for 1 subject in the control group who did not complete the cadaveric post-test because of scheduling conflicts, all subjects completed all portions of the trial (Fig 2). There was no significant difference in age (mean ± standard deviation, 24.86 ± 2.38 years in control group vs 24.36 ± 2.59 years in simulator group; P = .599), sex (71.4% male subjects in both groups, P = .999), right-hand dominance (92.9% in control group vs 85.7% in simulator group, P = .999), or video game use (mean ± standard deviation, 2.61 ± 0.88 in control group vs 2.13 ± 0.82 in simulator group; P = .146) between the groups. In both groups, all tasks except task 5 (in which completion time was fixed) showed a significant degree of correlation between task completion time and other computer-collected, task-specific metrics (Table 2).
Fig 2

CONSORT (Consolidated Standards of Reporting Trials) flow diagram showing patient recruitment, follow-up, and analysis.

Table 2

Spearman ρ Correlation for Task Completion and All Other Simulator Scoring Metrics

Entries are correlation coefficient (CE)/P value; each metric is listed with its values for the tasks (among Tasks 1-4 and 6-9) to which it applies:

Path: 0.749/.002, 0.974/<.001, 0.851/<.001, 0.807/<.001, 0.811/<.001, 0.780/.001, 0.895/<.001
Path in focus: 0.798/.001
Telescoping path: 0.820/<.001
XY path: 0.833/<.001
View direction deviation: (no value reported)
Total distance deviation: 0.877/<.001
Total centering deviation: 0.736/<.001
Size error*: 0.059/.84
Probe path: 0.851/<.001, 0.846/<.001
Time out of focus: 0.925/<.001, 0.429/.126, 0.912/<.001
Manipulation time out of focus: 0.53/.051, 0.389/.169, 0.714/.004
Total track touch time: 0.262/.366

CE, correlation coefficient.

*Representative of 5 size errors (1 for each object measured during the task).

A significant difference in pretest completion time was found for task 1 (P = .044) between the simulator and control groups, favoring the simulator group; otherwise, no significant differences in completion time were found between the 2 groups (Table 3). Pretest completion time was not significantly correlated with video game experience (P = .152).
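The rank correlations reported in Table 2 and the video-game comparison above are Spearman ρ values. A minimal sketch (not the SPSS routine the authors used) of the classic formula ρ = 1 - 6Σd²/(n(n² - 1)), which is exact when there are no tied values:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula.
    Note: tied values are not averaged into mid-ranks here;
    real statistical software handles ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Perfectly monotone data gives rho = 1; a reversed ordering gives -1.
assert spearman_rho([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0
```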
Table 3

Median Task Completion Time on Simulator Pretest

Median time (IQR), s | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9
Simulator group | 123.7 (68.7) | 130.4 (102.0) | 138.7 (83.4) | 26.3 (5.9) | 73.6 (0.0) | 212.9 (121.8) | 321.4 (128.7) | 173.6 (102.6) | 133.5 (121.6)
Control group | 169.7 (128.0) | 104.9 (104.4) | 159.3 (183.2) | 25.9 (7.0) | 73.6 (0.0) | 252.9 (125.6) | 301.4 (224.6) | 198.2 (140.4) | 141.5 (235.2)
P value | .044 | .427 | .511 | .867 | .329 | .265 | .839 | .946 | .280

IQR, interquartile range.

A significant difference between the simulator and control groups was noted for post-test completion time for all measured tasks except task 5 (Table 4). Post-test completion times for both groups were significantly different from the respective pretests. Qualitative analysis of box plots showed minimal change after 3 trials for most tasks in the training group (Fig 3). The Friedman test also showed no significant change between the final 3 trials (2 training sessions and the post-test) for tasks 3, 4, 6, and 8. The Bonferroni-adjusted P value for each of these tasks was .199, .292, .058, and .232, respectively, with an α level of .05. For all other tasks, pair-wise comparisons found no significant difference in completion time between trials 4 and 5 (last training session and final post-test session).
Table 4

Median Task Completion Time on Simulator Post-Test

Median time (IQR), s | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9
Simulator group | 60.5 (34.3) | 53.1 (30.7) | 64.3 (24.8) | 20.4 (5.1) | 73.6 (0.0) | 117.6 (77.4) | 139.7 (31.9) | 74.2 (25.3) | 49.8 (16.2)
Control group | 92.4 (58.9) | 77.9 (32.5) | 107.1 (85.0) | 25.3 (7.6) | 73.6 (0.0) | 196.3 (97.5) | 255.4 (55.7) | 116.1 (80.8) | 85.3 (75.7)
P value | .01 | .039 | .014 | .002 | .650 | .044 | <.001 | .001 | <.001

IQR, interquartile range.

Fig 3

Task completion time for simulator-trained group graphed by trial number. Trial 1 represents the initial pretest; trials 2, 3, and 4 are the additional training sessions; and trial 5 represents the simulator post-test.

A negative correlation was found between completion time and post-test cadaveric ASSET score (Spearman ρ = –0.084 for pretest and Spearman ρ = –0.094 for post-test); however, this did not reach significance (P = .666 for pretest and P = .649 for post-test). The primary outcome measurement, cadaveric post-test ASSET score, was 19.25 ± 2.46 in the simulator group and 18.00 ± 7.43 in the control group. These scores were not significantly different between groups (P = .555). This finding showed a Cohen d effect size of 0.23 (95% confidence interval, –0.528 to 0.987), showing a small effect of simulator training on cadaveric post-test performance. The Levene test for equality of variances did not show any significant difference in variance between groups (F = 2.023, P = .167). No harm or unintended consequences were reported for either group.
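The reported effect size can be reproduced from the summary statistics above. This sketch assumes the pooled-SD form of Cohen's d and a normal-approximation confidence interval; the paper does not state its exact formulas, but these choices reproduce the reported 0.23 with 95% CI of -0.528 to 0.987 (control n = 13 after the single dropout).

```python
import math

def cohens_d_with_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Cohen's d with pooled SD, plus an approximate 95% CI
    (assumed formulation; not stated explicitly in the paper)."""
    pooled_var = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    d = (m1 - m2) / math.sqrt(pooled_var)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# ASSET scores: simulator 19.25 +/- 2.46 (n = 14);
# control 18.00 +/- 7.43 (n = 13).
d, (low, high) = cohens_d_with_ci(19.25, 2.46, 14, 18.00, 7.43, 13)
assert round(d, 2) == 0.23
```

The wide interval crossing zero mirrors the non-significant between-group difference (P = .555).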

Discussion

The principal findings of this study suggest that (1) there is a significant improvement from pretest to post-test completion time on the simulator model in both groups, (2) a ceiling effect exists in simulator training that can be rapidly achieved, (3) additional training after reaching the point of the ceiling effect does not have a significant impact on performance on diagnostic arthroscopy, and (4) improved performance on simulator testing did not correlate significantly with the ASSET score on cadaveric arthroscopy. The training group showed significantly more improvement than the control group from simulator pretest to simulator post-test. However, the training group showed a ceiling effect after 3 sessions with the simulator for most tasks, with no significant differences found between trials 4 and 5 for any task; most of the gains in performance occurred between the initial exposure to the simulator (pretest) and the second exposure, possibly because of the low complexity of the simulated tasks. It was also noted that a single session on the simulator enabled a rapid acquisition of skills that persisted for at least 1 month (control group). In summary, we have shown that simulator training results in improved simulator performance but does not significantly improve performance in diagnostic arthroscopy.
Previous investigations of simulator training have shown transfer validity of skills between simulator models,7, 17 cadaveric models,6, 8, 11, 12, 14, 15, 17 and live arthroscopy,10, 13, 16 whereas 2 studies found no direct transfer validity between simulator models.19, 20 A recent systematic review and meta-analysis of studies investigating the impact of simulator training on performance showed a strong effect on simulator performance and a moderate effect on human-model (live or cadaveric) performance. Camp et al. showed significantly greater improvement in final, cadaveric post-test ASSET scores for cadaveric training compared with a similar amount of simulator training; however, simulator training was superior to no training. Butler et al. reported diminishing marginal returns (less improvement with each subsequent trial) on a benchtop knee simulator. In their investigation, subjects continued to train until they no longer showed improvement on the simulator; proficiency-based simulator training combined with cadaveric training resulted in fewer trials to achieve proficiency on a cadaver than cadaveric training alone. Together, these investigations support the assertion that arthroscopy simulator training is transferable to arthroscopy on a human model by targeting the component skills required in arthroscopic surgery, such as bimanual dexterity and spatial awareness. However, other investigations have challenged the generalizability of current simulators and the notion of skill transfer.19, 20 They support the argument that current simulators cannot yet reproduce the higher-level integration of many cognitive domains, including anatomic knowledge, instrument handling, and anticipation of next steps, required in real-life surgery. Ferguson et al. showed diminishing marginal returns with additional training and examined the effects of over-training (continued practice after no observable improvement is shown) on 2 checklist-guided benchtop simulators, showing no transfer of skills between different simulator models based on completion time and score on a global rating scale. However, in that investigation, novice trainees may have improved their understanding of joint-specific anatomy and movement patterns with these simulators without developing more basic component skills. Ström et al. used multiple, focused simulator models but only allowed subjects to train for a short duration on each model. These investigations raise the concern that transfer validity should not be assumed on simulator models: device design, task applicability, context of training, and trainee integration of knowledge are all important contributors to skill transference, and current simulator models need further development. The difference in novice trainee skill acquisition on the simulator (as determined by simulator completion time) versus the cadaver (as determined by ASSET score) in this investigation is likely attributable to incomplete skill transference owing to a combination of these factors.

Our study shows that simulator training to the point of diminishing marginal returns does not significantly improve performance on cadaveric arthroscopy. This finding supports the argument that the skills gained on these simulator models are not directly transferable to an operative setting. Simulator proficiency is rapidly acquired (in just a few hours), to the point at which the opportunity cost of additional training on these simulators (less time spent with cadavers or in the operating room) may not justify additional practice time.
Previous literature has suggested that a proficiency-based simulation curriculum can provide improved performance over a traditional curriculum6, 8; however, our data suggest that the time required to reach proficiency on a simulator is not extensive, and the return on further training on the simulator after this point may not translate into significant gains on human models. This should be explored in future research and considered when designing simulation training curricula.

Limitations

Our investigation was initially intended to provide insight into the impact of simulator training versus no training on cadaveric arthroscopy performance in a simulator-naive population. However, it faces several limitations. Both the sample size and the power of this study were limited by the number of volunteers. This is a limitation faced by much of the previous literature in this area, and our sample size is comparable to that of other investigations. The lack of a true control group (simulator-naive subjects with no exposure to the simulator during the study) limits our interpretation of the impact of our simulator training program on subject performance on cadaveric arthroscopy. In this study, the differences in pretest and post-test simulator scores show that our pretest study design allowed for significant skill acquisition on the simulator; thus, the simple act of performing a pretest on the simulator could be considered “training.” In fact, some studies have used a total simulator exposure time of 1 hour as a simulator training curriculum.11, 16, 19 Thus, although it was not our intent in the study design, this study may be better classified as a trial of “trained” versus “over-trained” subjects as opposed to “trained” versus “untrained.” Because of the lack of a true, simulator-naive control group, the interpretation of our results must be made with caution. Our study also lacked a cadaveric pretest session because the additional use of cadavers was cost prohibitive in this investigation. Another limitation is that only a single senior author evaluated the ASSET scores of participants. The ASSET score has been previously validated to show high inter-rater reliability, but inclusion of an additional rater would have allowed for score averaging. In addition, the properties of the simulator itself limited our analyses; there was only 1 consistent measurement metric between tasks.
Future simulator models should incorporate more consistent measurement metrics across tasks. Although our research was internally consistent, its external validity cannot be assumed. It was conducted on only a single simulator model, and our study population is not fully representative of the target population of novice orthopaedic trainees. Medical students were enrolled as novice trainees in lieu of orthopaedic trainees to increase the number of potential volunteers. However, the limited baseline anatomic knowledge and technical training in our study population may limit the ability to show improvement; more advanced novices who have personally observed arthroscopy and have a solid understanding of fundamental anatomy may benefit more from simulator skill development.

Conclusions

Our study suggests that an early ceiling effect is shown on the evaluated arthroscopic simulator model and that additional training past the point of proficiency on modern arthroscopic simulator models does not provide additional transferable benefits on a cadaveric model.
References (22 in total)

1.  Training in tasks with different visual-spatial components does not improve virtual arthroscopy performance.

Authors:  P Ström; A Kjellin; L Hedman; T Wredmark; L Felländer-Tsai
Journal:  Surg Endosc       Date:  2003-11-21       Impact factor: 4.584

2.  Overlearning, fluency, and automaticity.

Authors:  K M Dougherty; J M Johnston
Journal:  Behav Anal       Date:  1996

3.  Assessing Diagnostic Arthroscopy Performance in the Operating Room Using the Arthroscopic Surgery Skill Evaluation Tool (ASSET).

Authors:  Ryan J Koehler; John P Goldblatt; Michael D Maloney; Ilya Voloshin; Gregg T Nicandri
Journal:  Arthroscopy       Date:  2015-08-28       Impact factor: 4.772

4.  Orthopaedic Surgeon Burnout: Diagnosis, Treatment, and Prevention. (Review)

Authors:  Alan H Daniels; J Mason DePasse; Robin N Kamal
Journal:  J Am Acad Orthop Surg       Date:  2016-04       Impact factor: 3.020

5.  Improving Resident Performance in Knee Arthroscopy: A Prospective Value Assessment of Simulators and Cadaveric Skills Laboratories.

Authors:  Christopher L Camp; Aaron J Krych; Michael J Stuart; Terry D Regnier; Karen M Mills; Norman S Turner
Journal:  J Bone Joint Surg Am       Date:  2016-02-03       Impact factor: 5.284

6.  Do the skills acquired by novice surgeons using anatomic dry models transfer effectively to the task of diagnostic knee arthroscopy performed on cadaveric specimens?

Authors:  Aaron Butler; Tyson Olson; Ryan Koehler; Gregg Nicandri
Journal:  J Bone Joint Surg Am       Date:  2013-02-06       Impact factor: 5.284

7.  Utility of Modern Arthroscopic Simulator Training Models: A Meta-analysis and Updated Systematic Review.

Authors:  Rachel M Frank; Kevin C Wang; Annabelle Davey; Eric J Cotter; Brian J Cole; Anthony A Romeo; Charles A Bush-Joseph; Bernard R Bach; Nikhil N Verma
Journal:  Arthroscopy       Date:  2018-01-20       Impact factor: 4.772

8.  Ankle Arthroscopy Simulation Improves Basic Skills, Anatomic Recognition, and Proficiency During Diagnostic Examination of Residents in Training.

Authors:  Kevin D Martin; David Patterson; Phinit Phisitkul; Kenneth L Cameron; John Femino; Annunziato Amendola
Journal:  Foot Ankle Int       Date:  2015-03-11       Impact factor: 2.827

9.  Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model.

Authors:  R Frank Henn; Neel Shah; Jon J P Warner; Andreas H Gomoll
Journal:  Arthroscopy       Date:  2013-04-13       Impact factor: 4.772

10.  Fundamental arthroscopic skill differentiation with virtual reality simulation.

Authors:  Kelsey Rose; Robert Pedowitz
Journal:  Arthroscopy       Date:  2014-10-11       Impact factor: 4.772

Cited by (3 in total)

1.  The frequency of assessment tools in arthroscopic training: a systematic review.

Authors:  Haixia Zhou; Chengyao Xian; Kai-Jun Zhang; Zhouwen Yang; Wei Li; Jing Tian
Journal:  Ann Med       Date:  2022-12       Impact factor: 5.348

2.  Practicing Procedural Skills Is More Effective Than Basic Psychomotor Training in Knee Arthroscopy: A Randomized Study.

Authors:  Mads Emil Jacobsen; Amandus Gustafsson; Per Gorm Jørgensen; Yoon Soo Park; Lars Konge
Journal:  Orthop J Sports Med       Date:  2021-02-23

3.  In Response to COVID-19: Current Trends in Orthopaedic Surgery Sports Medicine Fellowships.

Authors:  Jordan L Liles; Richard Danilkowicz; Jeffrey R Dugas; Marc Safran; Dean Taylor; Annunziato Ned Amendola; Meredith Herzog; Matthew T Provencher; Brian C Lau
Journal:  Orthop J Sports Med       Date:  2021-02-09
