
High-fidelity simulation is associated with good discriminability in emergency medicine residents' in-training examinations.

Shou-Yen Chen1, Chung-Hsien Chaou1,2, Shiuan-Ruey Yu2, Yu-Che Chang1,2, Chip-Jin Ng1, Pin Liu1,3.   

Abstract

ABSTRACT: In-training examinations (ITEs), administered during residency training, periodically evaluate residents' performance. There is limited literature on the effectiveness of resident ITEs in the format of simulation-based examinations compared with traditional oral or written tests. Our primary objective was to investigate the effectiveness and discriminative ability of high-fidelity simulation compared with other measurement formats in an emergency medicine (EM) residency training program. This is a retrospective cohort study. During the 5-year study period, 8 ITEs were administered to 68 EM residents, and 253 ITE measurements were collected. ITE scores were calculated and presented as mean and standard deviation, and the ITEs were categorized into written, oral, or high-fidelity simulation test forms. Discrimination of ITE scores between different residency training years was examined using a one-way analysis of variance test. The high-fidelity simulation scores correlated with the progression of EM training: residents in their fourth training year (R4) consistently had the highest scores, followed by R3, R2, and then R1. The oral test scores showed similar but less consistent results. The written test score distribution failed to discriminate the residents' seniority. The high-fidelity simulation test had the best discriminative ability and the best correlation with EM residency training year compared with the other forms. High-fidelity simulation tests had good discriminative ability and correlated well with the EM training year. We suggest that high-fidelity simulation should be part of the ITE in training programs involving critical or emergency patient care.
Copyright © 2021 the Author(s). Published by Wolters Kluwer Health, Inc.


Year:  2021        PMID: 34128876      PMCID: PMC8213238          DOI: 10.1097/MD.0000000000026328

Source DB:  PubMed          Journal:  Medicine (Baltimore)        ISSN: 0025-7974            Impact factor:   1.889


Introduction

The goal of postgraduate medical education is to facilitate residents' acquisition of medical knowledge and clinical skills and to nurture the competency required to practice in a medical specialty. It is imperative to evaluate residents' performance periodically, both to help them overcome their weaknesses and to ensure the quality of the residency program. The Accreditation Council for Graduate Medical Education (ACGME) and the Council of Residency Directors in Emergency Medicine emphasize the value of practical and reliable assessment tools for evaluating residents' ability and the effectiveness of residency programs.

The in-training examination (ITE), also known as the in-service examination, was first introduced to residents in 1963 by the American Academy of Orthopedic Surgeons. In either oral or written form, the ITE has been adopted by various medical specialties as a powerful, multi-functional assessment tool to measure residents' performance. For EM residents, the ITE is not merely a signboard of their current academic performance but also a chance to prepare for the board exam: it offers an opportunity to review their deficiencies in medical knowledge and to improve themselves. Prior studies have found a positive correlation between EM ITE scores and American Board of Emergency Medicine written board certification scores. A similar correlation between ITE and board exam scores has also been documented in internal medicine and its subspecialty of cardiology.

Most ITEs and board-certification qualifying examinations consist of written and oral tests. These examinations measure medical knowledge and clinical skills, but they may not correlate with residents' overall clinical performance, which is supposed to advance with each residency training year. The use of oral and written tests is especially limited for programs conducted in a busy, rushed clinical environment such as the emergency department (ED). The ED is notorious for its time limits, its many independent tasks with undetermined priorities, and its rapidly changing environment full of interruptions. Written and oral test scores may therefore not directly reflect the progress of clinical experience and multitasking ability across residency training years.

High-fidelity simulations, using computer-controlled mannequins, have been used throughout medical education to emulate real patient encounters since the 1960s. In addition, simulation is increasingly being used as an assessment tool to evaluate procedural performance. Simulation-based teaching has also been integrated into curricula addressing the systems-based practice core competency and communication skills training for residents. Simulation-based assessment can be used to evaluate residents' competency in differential diagnosis, resuscitation, and procedures, both formatively and summatively. Some literature supports the use of simulation-based assessment tools to evaluate residents, as in anesthesiology residency, and some authors even advocate the use of simulation-based tests in board certification exams.

There is currently limited literature focusing on the effectiveness of resident ITEs in the format of simulation-based examinations compared with traditional oral or written tests. Our primary objective was to compare the effectiveness and discriminative ability of high-fidelity simulation with written and oral tests in an EM training program.

Method

Study setting

This is a retrospective cohort study of EM resident physicians. The study was conducted at a university-affiliated tertiary teaching hospital with a 3600-bed capacity and an estimated annual ED volume of 180,000 patient visits. There are 63 board-certified EM faculty members within the department. The residency program accepts 7 to 10 resident physicians each year. The study was approved by our institutional review board (IRB no. 202000099B0).

Participants and data collection

In the EM program, ITEs were administered to all residents biannually, usually in February and August. Examinations in February usually involved 35 or 36 EM residents, and examinations in August involved 26 to 29 residents each time. All residents were required to participate unless specific conditions were met, such as severe illness. First-year resident (R1) physicians did not take the August examination because they usually registered in that same month. The ITEs were supervised by the residency program director and organized by the education committee within the department.

Each ITE consisted of 3 different forms: a written examination, an oral examination, and a high-fidelity simulation examination (the oral and high-fidelity examinations were canceled in 2018 because of equipment problems). The written examination comprised multiple-choice questions (MCQs) and short answer question (SAQ) stations. The MCQs covered mixed EM content; the SAQ stations included electrocardiogram reading, image interpretation (radiograph, computed tomography, and ultrasound), and other EM themes. Oral tests included 2 to 3 stations on different EM topics, such as internal medicine, pediatrics, toxicology, emergency medical services, neurology, or disaster medicine. The rater of each station was an EM board-certified faculty member. The high-fidelity simulation station was operated with previously validated scenarios, a computer-controlled mannequin, 2 standardized nurses, and 1 faculty member as rater. All simulation encounters were video recorded for post-test inspection. The checklists of the oral and simulation stations were validated before each ITE.

The results of the ITEs were collected regularly. Eight ITEs were included in the study; only 1 ITE was administered in 2017 because of a change at the management level. Average scores for the written, oral, and high-fidelity examinations were calculated; each form's score was the mean across all of its stations.
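As a minimal illustration of the scoring just described (not the authors' actual workflow), the following Python sketch averages station scores into one score per resident per examination form; the column names and values are assumptions for demonstration only.

```python
# Minimal sketch (not the authors' workflow): derive per-form ITE scores
# as the mean across each form's stations. All names/values are illustrative.
import pandas as pd

# One row per resident per station, as might come from a grading sheet.
records = pd.DataFrame({
    "resident_id": ["R1-01", "R1-01", "R1-01", "R1-01", "R4-02", "R4-02"],
    "form":        ["written", "written", "oral", "simulation",
                    "simulation", "oral"],
    "station":     ["MCQ", "SAQ-ECG", "Oral-IM", "Sim-Trauma",
                    "Sim-CriticalCare", "Oral-Tox"],
    "score":       [78.0, 82.0, 70.0, 65.0, 88.0, 84.0],
})

# Each examination form's score is the mean across all of its stations.
form_scores = (
    records.groupby(["resident_id", "form"])["score"]
           .mean()
           .unstack("form")
)
print(form_scores)
```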

Statistical analysis

Data were analyzed using SPSS software (version 13.0 for Windows; SPSS, Chicago, IL). In the descriptive analysis, categorical variables were presented as numbers and percentages. The reliability of the ITE was evaluated using Cronbach's alpha coefficient. The discrimination of ITE scores between different residency training years was examined using a one-way analysis of variance test. A P value <.05 was considered statistically significant.
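For readers who want to reproduce this kind of analysis, here is a minimal sketch using Python with NumPy and SciPy in place of the authors' SPSS workflow; the score values, group sizes, and variable names are illustrative assumptions, not study data.

```python
# Minimal sketch (Python/SciPy stand-in for the authors' SPSS analysis).
# Scores and group sizes below are illustrative, not study data.
import numpy as np
from scipy import stats

# Scores on one examination form, grouped by residency training year.
scores_by_year = {
    "R1": [60.1, 62.3, 58.7, 64.0],
    "R2": [66.2, 68.9, 65.5, 70.1],
    "R3": [72.4, 74.0, 71.8, 75.2],
    "R4": [78.6, 80.2, 77.9, 81.3],
}

# One-way ANOVA: do mean scores differ across training years?
f_stat, p_value = stats.f_oneway(*scores_by_year.values())
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # P < .05 => discriminative

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (examinees x stations) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of stations
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Rows: residents; columns: stations of one ITE (illustrative values).
matrix = np.array([[60.0, 62.0, 58.0],
                   [66.0, 69.0, 65.0],
                   [72.0, 74.0, 71.0],
                   [79.0, 80.0, 78.0]])
print(f"Cronbach's alpha = {cronbach_alpha(matrix):.2f}")
```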

Results

During the 5-year study period, 8 ITEs were administered to the 68 EM residents in our EM training program, and 253 ITE scores were collected. Table 1 shows the characteristics of the ITEs and the participants. All 8 ITEs included written tests, and 7 of them also contained oral tests and high-fidelity simulation tests.
Table 1

Characteristics of the ITEs and participants.

Characteristics of ITEs                                Count
Total tests                                            8
  Containing written test                              8
  Containing oral simulation test                      7
  Containing high-fidelity simulation test             7
Number of participating residents (lowest–highest)     26–36
Gender of participants
  Male                                                 55
  Female                                               13
The analysis of ITE content is presented in Table 2. The most common oral test topics were internal medicine and toxicology, whereas trauma and critical care appeared most frequently in the high-fidelity simulation tests. All SAQ, oral test, and high-fidelity simulation stations were analyzed. The most discriminative subject was internal medicine (85.71%), followed by pediatrics (80%) and toxicology (60%) (Table 3); other topics showed mixed results. The internal consistency of the ITE (Cronbach's alpha) was 0.95, indicating good reliability across participants.
Table 2

Characteristics of in-training exams.

Characteristics of ITEs                  Count     %
Test number
  Total tests                            8
  Written test                           8         100
  Oral simulation test                   7         87.5
  High-fidelity simulation test          7         87.5
Examination content
  Written test stations                  38
    MCQs                                 7         18.42
    SAQ stations                         31        81.58
      ECG                                8         21.05
      Image                              6         15.79
      Ultrasound                         6         15.79
      Critical care medicine             4         10.53
      Gynecology                         2         5.26
      Others                             5         13.16
  Oral test stations                     15
    Internal medicine                    5         33.33
    Pediatrics                           3         20
    Toxicology                           4         26.67
    Other                                3         20
  High-fidelity simulation stations      14
    Internal medicine                    2         14.29
    Trauma                               6         42.86
    Critical care                        3         21.43
    Pediatrics                           2         14.29
    Toxicology                           1         7.14
Table 3

Discriminability of in-training examinations, differentiated according to examination forms and test domains.

Discriminability according to different examination forms

Examination    Written test    Oral simulation test    High-fidelity simulation
Test 1         P = .001        P < .001                P = .007
Test 2         P = .967        P = .730                P = .249
Test 3         P = .733        P = .001                P = .015
Test 4         P = .384        –                       –
Test 5         P = .084        P = .006                P = .016
Test 6         P = .032        P < .001                P < .001
Test 7         P = .261        P = .001                P = .016
Test 8         P < .001        P < .001                P = .004

(Oral and high-fidelity components were not administered for Test 4; see Methods.)
Table 3 shows the discrimination of the tests according to EM residency training year. All but one of the oral and high-fidelity simulation tests were discriminative, whereas the discrimination of the written tests was low. Although the oral and high-fidelity simulation tests were both discriminative, the score distributions of these 2 test forms differed. Figure 1 shows the average scores of residents in different EM training years in each ITE. The high-fidelity simulation scores correlated with the EM training year: R4 consistently had the highest scores, followed by R3, R2, and then R1 (Fig. 1C). The oral test scores showed similar but less consistent results (Fig. 1B). The written test score distribution showed no such pattern (Fig. 1A). The high-fidelity simulation test had the best discrimination and the best correlation with EM residency training year of the 3 test forms.
Figure 1

Illustration of test scores from (A) the written tests, (B) the oral tests, and (C) the high-fidelity simulation tests of residents in different training years.


Discussion

To the best of our knowledge, this is the first study to evaluate the utility of high-fidelity simulation in the ITEs of EM training programs. Our study demonstrated the good performance of a high-fidelity simulation test in discriminating differences in EM residents' competency across training years.

It may not be surprising that high-fidelity simulation tests had better results than written tests. Medical knowledge, as evaluated by written tests, can be obtained through increased reading, studying, and memorizing of core content. Consequently, a junior resident who studies hard can outperform a senior one on a written test. Clinical experience, by contrast, is obtained gradually at the bedside over the EM residency training years and cannot be rapidly acquired through goal-directed curriculum reading or short-term memory. A previous study likewise demonstrated improvement in written ITE scores after a structured board review program in junior residents only, not in senior ones. With the accumulation of "real-life" experience, medical knowledge becomes integrated into the clinical scenario, including disease pattern recognition, application of the relevant algorithm, and immediate intervention to treat patients properly and in a timely manner. As a basis of competence in patient care, clinical experience is much more than simple memory recall. Rodgers' study of an advanced cardiac life support course concluded that written evaluation is a poor predictor of skill performance, and other studies have likewise failed to predict residents' clinical performance from written and oral scores.

Oral tests also showed good discrimination in our study. The junior EM residents (R1) almost always scored lower than senior residents, whereas the results among senior residents were inconsistent: R2 or R3 physicians sometimes scored higher than R4 physicians in oral tests. One possible explanation is that oral tests reflect both clinical experience and medical knowledge but are not as realistic as high-fidelity simulation tests. High-fidelity simulation tests create a vivid scenario that can evoke a physiologic response comparable to a real-life clinical situation. Therefore, performance on high-fidelity simulation tests correlated better with the extent of clinical experience than did oral test performance. According to our results, assessment by high-fidelity simulation tests seems more appropriate for measuring EM residents' clinical competency.
The high-fidelity simulation tests also had several advantages over traditional written and oral tests in assessing specific ACGME core competencies, including interpersonal and communication skills, professionalism, patient care, and systems-based practice. Medical educators' direct evaluation through simulation-based assessment provides a simultaneous evaluation of knowledge, clinical reasoning, and teamwork. The standardization, fidelity, and reproducibility of medical simulation scenarios make it especially suitable for use in ITEs. With these advantages, high-fidelity simulation is possibly a better assessment tool for EM training programs. The ITE in an EM training program provides an evaluation of medical knowledge and clinical competency as well as an opportunity for residents to identify their strengths and deficiencies. In contrast to text-based learning and written tests, EM residents prefer question-based learning, which can be evaluated by simulation tests, and assessment by simulation has shown higher satisfaction rates than written tests. Although a high-fidelity simulation test is more expensive than a standard written or oral test, it may be more cost-effective than other commonly used assessment methods, such as standardized patients in the Objective Structured Clinical Examination. Our study demonstrated the excellent discrimination of high-fidelity simulation tests in the ITE of an EM training program. Although further study is needed, high-fidelity simulation should be considered as a part of the EM board certification examination.

Limitations

This is a single-center study, and local context needs to be taken into consideration before generalizing the results. Given the relatively small sample size, the statistical power of this study may be limited; a larger confirmatory study in different educational contexts should be performed for more conclusive results. In addition, this is a single-specialty study: we focused on the effectiveness of high-fidelity simulation tests in the ITEs of a high-intensity specialty such as EM, and the results might not be fully applicable to other medical specialties with different working characteristics. Finally, although all of the ITE raters were members of our education faculty with previous experience in medical education and resident assessment, they did not receive additional training before each ITE to improve their consensus and accuracy in evaluating residents' performance.

Conclusion

The high-fidelity simulation test used in the ITE had good discriminative ability and correlated well with the EM training year. We suggest that high-fidelity simulation should be part of the ITE in training programs involving critical or emergency patient care.

Author contributions

SYC contributed to the conceptualization, data collection and analysis, and draft writing of this research. CHC contributed to the research design, data acquisition, supervision of methodology, and data analysis. SRY contributed to the project administration, data acquisition, transcript coding, and the analysis of themes. YCC contributed to the supervision of transcript coding and the emergence of themes. CJN contributed to the supervision of the project. PL contributed to the conceptualization of the research, software supervision, and qualitative data analysis. This is a unique submission and is not being considered for publication by any other source in any medium. All authors participated in and contributed to the critical revision of the manuscript and gave final approval of the version submitted for publication.

Conceptualization: Shou-Yen Chen, Pin Liu.
Data curation: Shou-Yen Chen, Chung-Hsien Chaou, Shiuan-Ruey Yu.
Formal analysis: Shou-Yen Chen, Chung-Hsien Chaou, Pin Liu.
Methodology: Chung-Hsien Chaou.
Project administration: Shiuan-Ruey Yu.
Software: Pin Liu.
Supervision: Chung-Hsien Chaou, Yu-Che Chang, Chip-Jin Ng, Pin Liu.
Writing – original draft: Shou-Yen Chen.
Writing – review & editing: Pin Liu.
References (31 in total)

1.  Validity of the in-training examination for predicting American Board of Internal Medicine certifying examination scores.

Authors:  R S Grossman; R M Fincher; R D Layne; C B Seelig; L R Berkowitz; M A Levine
Journal:  J Gen Intern Med       Date:  1992 Jan-Feb       Impact factor: 5.128

2.  Simulation-based assessment in anesthesiology: requirements for practical implementation.

Authors:  John R Boulet; David J Murray
Journal:  Anesthesiology       Date:  2010-04       Impact factor: 7.892

3.  Assessment in medical education.

Authors:  Ronald M Epstein
Journal:  N Engl J Med       Date:  2007-01-25       Impact factor: 91.245

4.  Assessing competence in emergency medicine trainees: an overview of effective methodologies.

Authors:  Jonathan Sherbino; Glen Bandiera; Jason R Frank
Journal:  CJEM       Date:  2008-07       Impact factor: 2.410

5.  Radiation Oncology Resident In-Training Examination.

Authors:  Sandra S Hatch; Neha Vapiwala; Seth A Rosenthal; John P Plastaras; Albert L Blumberg; William Small; Matthew J Wenger; Marie E Taylor
Journal:  Int J Radiat Oncol Biol Phys       Date:  2015-07-01       Impact factor: 7.038

6.  The relationship between faculty performance assessment and results on the in-training examination for residents in an emergency medicine training program.

Authors:  James G Ryan; David Barlas; Simcha Pollack
Journal:  J Grad Med Educ       Date:  2013-12

7.  Enhancement of anesthesiology in-training exam performance with institution of an academic improvement policy.

Authors:  Julie A Joseph; Chris M Terry; Eva J Waller; Andrey V Bortsov; David A Zvara; David C Mayer; Susan M Martinelli
Journal:  J Educ Perioper Med       Date:  2014-01-01

8.  A comprehensive anesthesia simulation environment: re-creating the operating room for research and training.

Authors:  D M Gaba; A DeAnda
Journal:  Anesthesiology       Date:  1988-09       Impact factor: 7.892

9.  Simulation-Based Assessment Identifies Longitudinal Changes in Cognitive Skills in an Anesthesiology Residency Training Program.

Authors:  Avner Sidi; Nikolaus Gravenstein; Terrie Vasilopoulos; Samsun Lampotang
Journal:  J Patient Saf       Date:  2017-06-02       Impact factor: 2.844

10.  Training for Failure: A Simulation Program for Emergency Medicine Residents to Improve Communication Skills in Service Recovery.

Authors:  Alise Frallicciardi; Seth Lotterman; Matthew Ledford; Ilana Prenovitz; Rochelle Van Meter; Chia-Ling Kuo; Thomas Nowicki; Robert Fuller
Journal:  AEM Educ Train       Date:  2018-07-26
