
Using Learner-Centered, Simulation-Based Training to Improve Medical Students' Procedural Skills.

Serkan Toy, Robert S.F. McKay, James L. Walker, Scott Johnson, Jacob L. Arnett

Abstract

PURPOSE: To evaluate the effectiveness of a learner-centered, simulation-based training developed to help medical students improve their procedural skills in intubation, arterial line placement, lumbar puncture, and central line insertion.
METHOD: The study participants were second and third year medical students. Anesthesiology residents provided the training and evaluated students' procedural skills. Two residents were present at each station to train the medical students who rotated through all 4 stations. Pre/posttraining assessment of confidence, knowledge, and procedural skills was done using a survey, a multiple-choice test, and procedural checklists, respectively.
RESULTS: In total, 24 students were trained in six 4-hour sessions. Students reported feeling significantly more confident, after training, in performing all 4 procedures on a real patient (P < .001). Paired-samples t tests indicated statistically significant improvement in knowledge scores for intubation, t(23) = -2.92, P = .008, and arterial line placement, t(23) = -2.75, P = .01. Procedural performance scores for intubation (t(23) = -17.29, P < .001), arterial line placement (t(23) = -19.75, P < .001), lumbar puncture (t(23) = -16.27, P < .001), and central line placement (t(23) = -17.25, P < .001) showed significant improvement. Intraclass correlation coefficients indicated high reliability in checklist scores for all procedures.
CONCLUSIONS: The simulation sessions allowed each medical student to receive individual attention from 2 residents for each procedure. Students' written comments indicated that this training modality was well received. Results showed that medical students improved their self-confidence, knowledge, and skills in the aforementioned procedures.

Keywords:  Simulation; anesthesia; learner centered; medical education; procedural training

Year:  2017        PMID: 29349329      PMCID: PMC5736291          DOI: 10.1177/2382120516684829

Source DB:  PubMed          Journal:  J Med Educ Curric Dev        ISSN: 2382-1205


Introduction

Third year medical students face the challenging task of shifting gears from a curriculum heavy in basic sciences to applying this vast amount of medical knowledge to caring for patients. However, 3 decades of research into expertise development demonstrates that mastery of content knowledge does not guarantee successful application of this knowledge in the form of procedural skills.[1-3] Based on the recommendations of the Association of American Medical Colleges, every graduating medical student should be able to perform a number of basic procedures, such as venipuncture, intravenous catheter insertion, nasogastric tube insertion, and Foley catheter insertion.[4] However, medical students have had difficulty developing self-confidence as well as competency in procedural skills due to a lack of opportunities to practice within a safe and supervised environment.[5,6] Educational researchers indicate that individuals’ performance in real-life settings depends on their domain knowledge, which combines the knowledge of facts and concepts (content knowledge) and the knowledge of how to perform certain operations and procedures (procedural knowledge) in a specific domain such as medicine.[7-11] Therefore, an educational program must provide medical students with the necessary content knowledge and procedural skills to perform as expected in diverse clinical settings. 
Use of simulation-based training shows improvements in learners’ knowledge, skills, attitudes, and performance.[12-15] Moreover, simulation-based training for medical education leads to effective learning as it provides “repetitive practice, ability to integrate into curriculum, ability to alter the degree of difficulty, ability to capture clinical variation, immediate feedback, and approximation of clinical practice.”[16] Several studies indicate that medical students typically do not receive standardized hands-on training on advanced procedural skills[17-19] (our target population is no exception). Among these are intubation, arterial line placement, central line insertion, and lumbar puncture.[17] These are advanced and somewhat invasive procedures that can cause discomfort and/or complications for patients. In this study, we chose to provide second and third year medical students with a student-centered, simulation-based training on these advanced procedures. The literature suggests that residents can make a significant contribution to medical student education by providing hands-on learning opportunities and constructive feedback in a safe learning environment and by influencing students’ subsequent career choice, professional growth, and clerkship performance.[20-22] In this study, anesthesiology residents provided training and debriefing and also served as the judges of medical students’ procedural skills. We hypothesized that providing simulation-based training would improve second and third year medical students’ confidence, content knowledge, as well as their performance on intubation, arterial line placement, central line insertion, and lumbar puncture.

Methods

We designed the simulation training to include 4 procedural skills: intubation, arterial line placement, lumbar puncture, and central line insertion. A total of 8 anesthesiology residents provided the simulation training and served as the judges of the students’ skills. Each resident received a detailed content outline with specific learning objectives and key teaching points for the procedures. Two train-the-trainer sessions were held for residents that included a standardized teaching for each procedure using simulation equipment. The residents also received instruction on how to use the assessment checklists and went through a calibration process to maximize the interrater consistency. The calibration process included residents evaluating each other using the checklists and comparing their scoring behavior and resolving any discrepancies.

Setting and participants

The University of Kansas Medical Center Institutional Review Board approved the study. The study participants were second and third year medical students at the University of Kansas School of Medicine—Wichita. This training was not part of the regular medical school curriculum, which currently consists of 2 phases. Phase I includes 12 learning modules where year 1 and year 2 medical students learn about core basic science disciplines as well as basic clinical skills (standardized patient encounters), preventive medicine, ethics, and behavioral sciences. Learning modules in this phase use problem-based learning where students engage in a clinical case pertinent to each module in a student-centered, small-group setting. During phase II, year 3 and year 4 medical students complete their required clerkships in core clinical areas and take additional clerkships and electives to help provide well-rounded clinical exposure. Participation in this training was on a voluntary basis. Time constraints limited this experience to 24 medical students. More students volunteered than we could accommodate, so an equal number of students from each year were randomly selected. For each procedural skill, a station was developed with the appropriate simulation model and other equipment necessary for each specific procedure.

Data sources/measurement tools

Pre/posttraining assessments of confidence, knowledge, and procedural skills were performed using a survey with a 5-point Likert scale, a 40-item multiple-choice test (the same items were used in the pre- and posttest), and procedural checklists, respectively. In addition, we documented whether or not students had previously performed any of the procedures on a real patient and/or on a simulator/task trainer. Study participants filled out a postintervention satisfaction survey designed to measure the effectiveness of the simulation training from the student’s perspective. Finally, 2 open-ended responses documented the most valuable aspects of the training for students as well as their suggestions for improvement (Appendix 1 includes items from the pre/postintervention self-confidence and postintervention satisfaction surveys). A content outline, including goals and objectives for the simulation training, provided the blueprint for the multiple-choice test (see Appendix 2 for sample test items). This helped ensure content validity for the knowledge test. Items were developed by 4 experienced anesthesiologists to measure pertinent information for each of the procedures. The test was pilot tested with 5 anesthesiology residents for clarity, accuracy, and difficulty level. One of the intubation items and 2 of the lumbar puncture items were found to rely heavily on rote memorization and were replaced with more clinically applicable critical-thinking questions. Before and immediately after the training, demonstration of actual skills was measured using procedural skills checklists (see Appendix 3 for the procedural skills checklists). Checklists were comprehensive and included all pertinent critical steps for each of the procedures. There were a total of 25 tasks for intubation, 11 for arterial line placement, 15 for central line insertion, and 15 for lumbar puncture. The total number of completed tasks was calculated for each of the procedures for statistical analyses. 
Checklists were filled out by the anesthesiology residents supervising the simulated experience. Two residents observed each student performing the procedure at their station and independently filled out the checklists. The average ratings were used for statistical analyses.

Statistical analyses

In this study, medical students served as their own controls for statistical analyses. Comparisons between baseline and postintervention ratings of medical students regarding their confidence in performing the included procedures were made using Wilcoxon signed rank test. Paired-samples t tests were used to analyze whether or not there was a statistically significant improvement in students’ knowledge and procedural skills over the baseline scores. G*Power was used to estimate that our sample of 24 would be sufficient to detect an effect size of Cohen’s d value of .45 (α = .05) with 80% power for Wilcoxon signed rank tests and paired-samples t tests. For all tests, P < .05 was accepted as statistically significant. Intraclass correlation coefficients (ICC) were used as a measure of interrater reliability. The IBM Statistical Package for the Social Sciences 19 (IBM Corp., Armonk, NY, USA) was used for statistical analyses.
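As a rough illustration of the pre/post comparisons described above (this is not the authors' code, and the data below are invented for the sketch), the paired-samples t test and Wilcoxon signed rank test can be run with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores for 24 students (illustrative only, not the
# study's data): a pretest score and a posttest score per student.
pre = rng.normal(5.5, 1.2, size=24)
post = pre + rng.normal(1.0, 0.8, size=24)  # simulated training gain

# Paired-samples t test (used for knowledge and checklist scores)
t_stat, t_p = stats.ttest_rel(pre, post)

# Wilcoxon signed rank test (used for the ordinal Likert confidence ratings)
w_stat, w_p = stats.wilcoxon(pre, post)

print(f"paired t: t(23) = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon signed rank: W = {w_stat:.1f}, p = {w_p:.4f}")
```

Because each student contributes both a pretest and a posttest score, both tests operate on the within-student differences, which is what lets the students serve as their own controls.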

Results

In total, 24 second and third year medical students (12 from each year) were trained in six 4-hour sessions. At the beginning of each session, one of the authors (S.T.) explained the study protocol and obtained medical students’ consent for participation. Students spent, on average, 40 minutes on the pre/post confidence questionnaires and knowledge tests. A total of 3 hours were devoted to actual simulation-based training activities. Each student spent, on average, 40 minutes at each of the 4 stations: intubation, arterial line placement, lumbar puncture, and central line insertion. Baseline data for procedural skills were obtained while medical students tried performing each procedure before receiving any training from residents. Two residents were present at each station to train 1 medical student at a time. One resident explained the critical steps while the other demonstrated how to perform the procedure. Then, students were given, on average, 25 minutes to practice each procedure while receiving feedback from residents. Each medical student rotated through all 4 stations. The posttest took place immediately after the training session. As to prior experience, most students reported not having performed any of the procedures on a real patient. Only 3 students (12.5%) reported performing intubation, and 1 student (4.2%) reported performing central line insertion on a real patient (see Table 1 for students’ self-reported prior experience). Although 17 students (70.8%) reported having simulated experience in intubation and lumbar puncture, most did not have any exposure to arterial line placement or central line insertion.
Table 1.

Students’ self-reported prior experience on given procedures.

                            On a real patient         On a simulator/task trainer
Have you ever performed     Yes (%)     No (%)        Yes (%)      No (%)
Intubation                  3 (12.5)    21 (87.5)     17 (70.8)    7 (29.2)
Arterial line placement     0 (0)       24 (100)      1 (4.2)      23 (95.8)
Lumbar puncture             0 (0)       24 (100)      17 (70.8)    7 (29.2)
Central line insertion      1 (4.2)     23 (95.8)     2 (8.3)      22 (91.7)

Students’ self-reported confidence scores

Medical students felt significantly more confident, after training, in performing all 4 procedures on a task trainer/simulator as well as on a real patient, as indicated by Wilcoxon signed rank tests (P < .001) (see Tables 2 and 3 for mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and post-self-confidence survey scores).
Table 2.

Mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and postsurvey scores[a] regarding self-confidence for performing on a task trainer/simulator.

                          Pretest                       Posttest                      Z score    P value
                          Mean    Median   N    SD      Mean    Median   N    SD
Intubation                2.71    3.00     24   1.16    4.38    4.50     24   0.77   −3.97      <.001*
Arterial line placement   1.83    2.00     24   0.96    4.29    4.00     24   0.86   −4.18      <.001*
Lumbar puncture           2.63    3.00     24   1.10    4.33    4.00     24   0.76   −3.89      <.001*
Central line insertion    1.79    1.50     24   0.93    4.13    4.00     24   0.99   −4.19      <.001*

[a] Prompt: I am confident in my skills performing the following procedures on a task trainer/simulator (5-point Likert scale from 5 = strongly agree to 1 = strongly disagree).

* Denotes statistical significance.

Table 3.

Mean, standard deviation, median, and Wilcoxon signed rank test results for pre- and postsurvey scores[a] regarding self-confidence for performing on a real patient.

                          Pretest                       Posttest                      Z score    P value
                          Mean    Median   N    SD      Mean    Median   N    SD
Intubation                1.71    1.50     24   0.86    3.33    3.50     24   0.96   −3.90      <.001*
Arterial line placement   1.13    1.00     24   0.34    3.08    3.00     24   1.02   −4.09      <.001*
Lumbar puncture           1.71    1.00     24   1.00    3.33    3.50     24   0.96   −3.90      <.001*
Central line insertion    1.17    1.00     24   0.38    2.92    3.00     24   1.02   −4.07      <.001*

[a] Prompt: I am confident in my skills performing the following procedures on a real patient (5-point Likert scale from 5 = strongly agree to 1 = strongly disagree).

* Denotes statistical significance.


Knowledge scores

Participating in the simulation-based training helped medical students improve their knowledge scores immediately after training on all included procedures, although this improvement did not reach significance for lumbar puncture or central line insertion. Paired-samples t tests indicated statistically significant improvement in knowledge scores for intubation (pre-mean = 5.50, SD = 1.22 vs post-mean = 6.58, SD = 1.14), t(23) = −2.92, P = .008, and arterial line placement (pre-mean = 5.71, SD = 1.20 vs post-mean = 6.67, SD = 1.24), t(23) = −2.75, P = .01 (see Table 4).
Table 4.

Mean, standard deviation, and t-test results for pre- and postknowledge test scores.[a]

                          Pretest              Posttest             t        P value
                          Mean    N    SD      Mean    N    SD
Intubation                5.50    24   1.22    6.58    24   1.14   −2.92    .008*
Arterial line placement   5.71    24   1.20    6.67    24   1.24   −2.75    .01*
Lumbar puncture           4.67    24   1.44    4.75    24   1.36   −.23     .82
Central line insertion    6.71    24   1.23    7.25    24   1.48   −1.80    .09

[a] Perfect score for each procedure is 10.

* Denotes statistical significance.


Procedural skills

Beyond improvement in students’ self-reported confidence on performing the included procedures, we examined whether or not their actual hands-on performance also improved. Paired-samples t tests indicated statistically significant improvement in procedural performance scores for all 4 procedures: intubation (t(23) = −17.29, P < .001), arterial line placement (t(23) = −19.75, P < .001), lumbar puncture (t(23) = −16.27, P < .001), and central line placement (t(23) = −17.25, P < .001) (see Table 5 for mean, standard deviation, and t-test results for pre- and postprocedural checklist scores).
Table 5.

Mean, standard deviation, and t-test results for pre- and postprocedural checklist scores.[a]

                          Pretest              Posttest             t         P value
                          Mean    N    SD      Mean    N    SD
Intubation                6.90    24   3.91    19.80   24   2.33   −17.29    <.001*
Arterial line placement   2.40    24   1.70    10.19   24   1.28   −19.75    <.001*
Lumbar puncture           4.85    24   2.22    12.56   24   1.08   −16.27    <.001*
Central line insertion    3.79    24   2.63    13.15   24   2.59   −17.25    <.001*

[a] Perfect scores for each procedure are as follows: intubation, 25; arterial line placement, 11; lumbar puncture, 15; and central line insertion, 20.

* Denotes statistical significance.

A high degree of agreement was found in the raters’ pretest as well as posttest checklist scores for all procedures. The highest average ICC was found in pretest intubation scores, .928, with a 95% confidence interval from .834 to .969 (F(23,23) = 13.926, P < .001). The lowest average ICC, though still a high degree of agreement, was in posttest lumbar puncture scores, .757, with a 95% confidence interval from .437 to .895 (F(23,23) = 4.108, P = .001). We also stratified medical students by year to examine whether there were differences in knowledge and procedural scores based on the year of medical school. We did not find any major difference in pretest or posttest knowledge scores between year 2 and year 3 medical students (see Figure 1). Year 2 and year 3 students also received similar pretest and posttest checklist scores for all procedures (see Figure 2).
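The reliability figures above are average-measures intraclass correlations. A minimal sketch of an average-measures, two-way random-effects ICC, i.e. ICC(2,k), computed directly from the ANOVA mean squares, might look like this (the rating matrix is invented for illustration; the study used SPSS for this step):

```python
import numpy as np

def icc2k(ratings: np.ndarray) -> float:
    """Average-measures two-way random-effects ICC, i.e. ICC(2,k).

    ratings: (n_subjects, k_raters) matrix of checklist scores.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)  # per-subject means
    col_means = ratings.mean(axis=0)  # per-rater means

    # ANOVA mean squares: rows (subjects), columns (raters), residual error
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    ss_err = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Two hypothetical raters scoring 6 students' checklists (illustrative data)
scores = np.array([[18, 19], [12, 13], [20, 20], [9, 11], [15, 15], [22, 21]],
                  dtype=float)
print(f"ICC(2,k) = {icc2k(scores):.3f}")
```

With two raters per station, as in this study, k = 2 and each row holds one student's pair of checklist scores; perfect rater agreement yields an ICC of 1.0.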
Figure 1.

Medical students’ knowledge test scores sorted by year 2 medical students (dark bars) vs year 3 medical students (light bars). As indicated by overlapping standard error bars, year of medical school did not have an effect on the pretest or posttest knowledge test scores.

Figure 2.

Medical students’ procedural skills performance score percentages sorted by year 2 medical students (dark bars) vs year 3 medical students (light bars). As indicated by overlapping standard error bars, year of medical school did not have an effect on the pretest or posttest procedural skills scores.


Overall medical student reaction to the simulation-based training

Most of the students (23 of 24) indicated that they were satisfied with the overall experience provided in the learner-centered, simulation-based training, and 1 student felt neutral regarding his or her overall satisfaction. The frequencies (percentages) of medical student responses to items included in the satisfaction survey are shown in Table 6. Students felt that this learning environment was conducive to learning and they felt better prepared for clerkships, thanks to this training experience. Responses also indicated that students would recommend this training to other medical students.
Table 6.

Medical student responses to satisfaction survey.

To what extent do you agree/disagree with the following              Strongly agree   Agree        Neutral    Disagree   Strongly disagree   N
I was satisfied with the overall learning experience provided        19 (79.2%)       4 (16.7%)    1 (4.2%)   0 (0%)     0 (0%)              24
Learning environment was conducive to learning                       14 (58.3%)       9 (37.5%)    0 (0%)     1 (4.2%)   0 (0%)              24
Thanks to this training, I feel better prepared for the clerkships   15 (62.5%)       7 (29.2%)    2 (8.3%)   0 (0%)     0 (0%)              24
I would recommend this training to others                            19 (79.2%)       4 (16.7%)    1 (4.2%)   0 (0%)     0 (0%)              24
In addition to reinforcing these findings, students’ written feedback highlighted the importance of learning from residents and of receiving one-on-one attention and individualized feedback on their performance (see Appendix 4 for written feedback samples organized by common themes). Students also suggested that more time be given for practice and that multiple training sessions be offered over a period of time. Some students also felt that a formal didactic session on the included procedures would help them learn more about the subject matter.

Discussion

Our results showed that medical students improved their self-confidence, knowledge, and skills in the aforementioned procedures through the use of individualized instruction in a simulated environment. Written comments indicated that this training modality was very well received by the students. This finding is in line with the literature on learner-centered educational modalities. Simply providing medical students with task trainers and expecting them to repeatedly practice clinical skills may not necessarily translate into competency. Deliberate practice focused on reaching a well-defined goal is needed for improving clinical skills.[23] Some have suggested that in the absence of clear performance guidelines, trainees may fail to assess their own skills accurately and to identify areas for improvement.[24] Residents in this study provided much-needed feedback for performance improvement. One major lesson learned was that covering as many as 4 procedures in 1 training session limited the time spent on debriefing at the end of the actual hands-on skills training. This may explain the smaller knowledge gains for some procedures. A recent meta-analysis examining the literature on the effects of teaching advanced airway management using simulation reports similar results for other studies.[15] Moreover, as time is a major constraint for busy clinicians, this type of training may be hard to sustain, as it requires extensive time commitment from students and instructors and is subject to scheduling conflicts.[25] Future studies exploring alternative educational modalities seem warranted given the need for learner-centered, simulation-based training for procedural skills. In this study, we used Kirkpatrick’s evaluation framework[26] as a guide in evaluating the effectiveness of this training. 
This framework suggests that evaluation of training effectiveness should address (1) participants’ reaction to the training, (2) learning gain, (3) hands-on skill improvement, and (4) actual outcomes occurring as a result of the training. As demonstrated above, we addressed the first 3 steps of this framework. However, performance in a simulated experience may not transfer to actual patient care. Future studies should try to measure the impact of simulation-based training on patient outcomes using a longitudinal research design. A significant limitation of this study is that it involved a relatively small group of students from a single institution. A multi-institutional study with a larger sample might produce further insights into learner-centered, simulation-based training for improving medical students’ procedural skills. In addition, this study design relied on the same resident pairs to provide baseline and posttraining scores for medical students’ procedural skills. This could introduce scoring bias because these residents also served as instructors. Ideally, we would have liked to videotape each student’s baseline and posttraining performance so that blinded raters could provide the scores. However, with 1 medical student at each of the 4 stations at any given time, it proved logistically challenging to capture each student’s performance in a high-quality video with a detailed view of all angles necessary for an accurate procedural skills assessment. Given these constraints, real-time skills assessment was the most practical option.
References (14 in total)

1. Engum SA. Do you know your students' basic clinical skills exposure? Am J Surg. 2003.

2. Sanchez LD, Delapena J, Kelly SP, Ban K, Pini R, Perna AM. Procedure lab used to improve confidence in the performance of rarely performed procedures. Eur J Emerg Med. 2006.

3. Promes SB, Chudgar SM, Grochowski CO, Shayne P, Isenhour J, Glickman SW, Cairns CB. Gaps in procedural experience and competency in medical school graduates. Acad Emerg Med. 2009.

4. Learning objectives for medical student education--guidelines for medical schools: report I of the Medical School Objectives Project. Acad Med. 1999.

5. Berg KT, Mealey KJ, Weber DE, Berg DD, Crawford AG, Jasper EH, Vergare MJ. Are medical students being taught invasive skills using simulation? Simul Healthc. 2013.

6. Dehmer JJ, Amos KD, Farrell TM, Meyer AA, Newton WP, Meyers MO. Competence and confidence with basic procedural skills: the experience and opinions of fourth-year medical students at a single institution. Acad Med. 2013.

7. Hunt EA, Heine M, Hohenhaus SM, Luo X, Frush KS. Simulated pediatric trauma team management: assessment of an educational intervention. Pediatr Emerg Care. 2007.

8. Okuda Y, Bryson EO, DeMaria S, Jacobson L, Quinones J, Shen B, Levine AI. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med. 2009.

9. Hunt EA, Hohenhaus SM, Luo X, Frush KS. Simulation of pediatric trauma stabilization in 35 North Carolina emergency departments: identification of targets for performance improvement. Pediatrics. 2006.

10. Kennedy CC, Cannon EK, Warner DO, Cook DA. Advanced airway management simulation training in medical education: a systematic review and meta-analysis. Crit Care Med. 2014.