Tayne Ryall, Belinda K Judd, Christopher J Gordon.
Abstract
INTRODUCTION: The use of simulation in health professional education has increased rapidly over the past two decades. While simulation has predominantly been used to train health professionals and students for a variety of clinically related situations, there is an increasing trend to use simulation as an assessment tool, especially for the development of the technical skills required during clinical practice. However, there is a lack of evidence about the effectiveness of using simulation for the assessment of competency. Therefore, the aim of this systematic review was to examine simulation as an assessment tool of technical skills across health professional education.
Keywords: competency; health care; students; technical skills
Year: 2016 PMID: 26955280 PMCID: PMC4768888 DOI: 10.2147/JMDH.S92695
Source DB: PubMed Journal: J Multidiscip Healthc ISSN: 1178-2390
Figure 1: Flow diagram of the search.
Adapted critical appraisal tool
| Item | Yes (=1) | No (=0) | Not addressed (=0) | Not applicable (=0) |
|---|---|---|---|---|
| 1. Was the study purpose clearly stated? | | | | |
| 2. Was relevant background literature reviewed? | | | | |
| 3. Was the sample described in detail? | | | | |
| 4. Was sample size justified? | | | | |
| 5. Were the outcome measures reliable? | | | | |
| 6. Were the outcome measures valid? | | | | |
| 7. Was intervention described in detail? | | | | |
| 8. Was contamination avoided? | | | | |
| 9. Was cointervention avoided? | | | | |
| 10. Were results reported in terms of statistical significance? | | | | |
| 11. If multiple outcomes, was that taken into account in the statistical analysis? | | | | |
| 12. Were the analysis methods appropriate? | | | | |
| 13. Was educational importance reported? | | | | |
| 14. Were dropouts reported? | | | | |
| 15. Were the conclusions appropriate given study methods and results? | | | | |
Note: Data from Law et al (reference 20).
Summary of included articles
| Author (year) | Sample size/discipline | Primary outcome | Simulation type | Assessment type | Main findings |
|---|---|---|---|---|---|
| Asprey et al (2007) | 101 Medical students, 71 physician assistant students | Comparison of third-year medical student scores with physician assistant scores on SP assessments – clinical skill checklist | SP | Checklists – completed by the SP | SP-based assessments were unable to distinguish between medical student and physician assistant performance; as the two groups had comparable clinical experience, the assessments therefore appear to be measuring clinical experience. |
| Bick et al (2013) | 26 Anesthetists | Comparison of expert vs novice anesthetist performance on a simulated transesophageal echocardiography examination – time and accuracy checklist | Virtual reality | Checklists – completed via video by experts blinded using voice-masking technology | The simulated transesophageal echocardiography examination was able to discriminate between expert and novice performers. |
| Boulet et al (2003) | 24 Medical students, 13 medical residents | Reliability and validity of simulation assessment for final year medical students and residents | High-fidelity patient simulator | Checklists – scored via video by four raters (two unaware of the training levels of participants) | Simulation can validly and reliably assess acute care skills, but multiple encounters are required to predict performance. |
| Burns et al (2013) | 26 Medical doctors (16 interns, five PGY-2, two PGY-3, two chief residents, and three pediatric hematology/oncology fellows) | Performance scores of interns (<1-year experience in pediatrics) vs the rest of the residents and fellows (>1-year experience) | High-fidelity pediatric patient simulator | Checklists and global rating scale – scored via video by two raters | The manikin-based assessment provided reliable and valid measures of participants’ performance. |
| Edelstein et al (2000) | 147 Medical students | Comparison of three measures of medical student performance – SP examinations, computer-based case simulations, and traditional performance indicators (GPAs) | SP and computer-based simulations | Unclear how participants were assessed in the SP examinations – rated by both the SP and the author | Differences were found among the SP assessments, computer simulations, and traditional assessments. |
| Fehr et al (2011) | 27 Anesthetic residents and eight anesthetic fellows | Reliability and validity of simulation assessments for anesthesia residents and pediatric fellows | High-fidelity patient simulator | Checklists – rated by the lead investigator and two experts | The multiple-scenario assessment could reliably assess pediatric residents and determine the skill level of participants. Further validation is required, including comparison with clinical performance. |
| Gimpel et al (2003) | 121 Osteopathy students | Reliability and validity of a simulation assessment, the Comprehensive Osteopathic Medical Licensing Examination – USA performance-based clinical skills examination (COMLEX-USA-PE) | SP | Checklists – rated by SPs on all stations, and by experts on the stations where participants may have chosen to perform a manipulation technique as part of their treatment | Acceptable reliability and validity of the assessment. Use of the COMLEX-USA-PE to assess the readiness of osteopathic medical students for clinical practice is supported. |
| Grantcharov et al (2005) | Ten medical students, ten medical residents, and eight medical doctors | Validity of a virtual reality simulator for the assessment of gastrointestinal endoscopy skills | Virtual reality | Parameters were calculated and recorded by the computer | The virtual reality simulator was able to distinguish between the three levels of experience and therefore possessed acceptable construct validity. |
| Hawkins et al (2004) | 54 Medical doctors | Evaluation of computer-based case simulations and SP methods for patient management skills via checklists | Computer-based case simulations and SP | Computer-based scoring, checklists, and a patient perception questionnaire for the SP station – rated by the SP | Computer-based case simulations and SP examinations were unable to distinguish between different experience levels; however, they may be useful as part of a multipronged assessment approach. |
| Iyer et al (2013) | 16 Medical residents | Validity of an objective structured assessment of technical skills for neonatal lumbar puncture via a competency-based scoring tool | Task trainer | Checklist and global rating scale – rated by six raters via video | Reasonable evidence of validity of this assessment tool. |
| Lammers et al (2009) | 212 Paramedics | Identifying performance of paramedics in simulated pediatric emergencies using prevalidated scoring checklists | Low-, medium-, and high-fidelity patient simulators | Checklist – rated by one evaluator at the time and an author via video later, who amended scores if required and made the final decision | Scores from manikin-based simulations objectively identified multiple performance deficiencies in paramedics. |
| Lipner et al (2010) | 115 Medical cardiology fellows and cardiologists | Evaluation of simulation-based assessment of the technical and cognitive skills of physicians during coronary interventions by determining whether the assessment differentiates novice, skilled, and expert performance | Virtual reality | Computer-based scoring system based on committee consensus, including a rating of the potential risk of each action taken | This assessment approach was able to identify poorly performing physicians who are unlikely to be providing appropriate patient care. |
| McBride et al (2011) | 29 Medical residents (13 interns, nine second-year residents, six third-year residents, and one chief resident) | Multiple-scenario assessment (n=20) of residents’ acute pediatric management skills using action checklists and global rating scales | High-fidelity pediatric patient simulator | Checklists and global rating scale – rated by two raters: one at the time, and one via video later | Residents’ scores in the assessments were found to be reliable and valid measures of their ability as long as multiple scenarios were used. Scores were able to discriminate between different skill levels. |
| Murray et al (2007) | 64 Medical residents and 35 anesthesiologists | Evaluation of anesthesia resident performance in a simulation-based intraoperative environment using an item-score checklist | High-fidelity patient simulator | Checklists and time taken to perform the key actions – rated by two blinded raters via video; where there were large discrepancies, a third rater scored the performance | The simulation-based assessment was found to be a reliable and valid measure of performance and was able to distinguish more experienced participants from those early in training. |
| Nagoshi et al (2004) | 39 Medical students, 49 medical residents, and 11 medical fellows | Evaluation of a simulation-based standardized assessment of geriatric management skills, scored using checklists, by examining the reliability and validity of the tool | SP | Checklists – rated by the SP | The assessment was found to be reliable but did not discriminate between levels of training of participants. |
| Nunnink et al (2010) | 45 Medical doctors (ICU trainees) | Validity of simulation-based assessments of intensive care trainees’ procedural skills by comparison with written exams and oral viva formats | Low- to high-fidelity patient simulators and SPs | Templates for written responses and checklists for all other formats – one rater (who also rated the written questions on the same topic) | There was a lack of correlation between exam formats, suggesting that a multimodal approach to assessment is favorable and that simulation may be more useful in the assessment of procedural skills. |
| Panzarella and Manyon (2008) | 34 Physical therapy students | Evaluation of an Integrated Standardized Patient Examination for physical therapy students | SP | Checklists for the subjective and objective assessment and a four-point rubric for the integration question – rated by four raters: two in the room (the SP and an expert rater), one via video (the participant), and a “criterion” rater (one of the authors, blinded to the others’ scores) | Student scores require comparison with other forms of performance assessment. |
| Penprase et al (2012) | 70 Registered nurses | Evaluation of simulation-based assessment of applicants to Nurse Anesthesia programs by comparing scores to candidate interview scores | High-fidelity patient simulator | Checklists – rated by one of two trained raters concealed behind a one-way mirror | Simulation-based assessment may provide a useful adjunct to the admission process. |
| von Wyl et al (2009) | 30 Paramedics | Determination of the interrater reliability of assessing the technical and nontechnical skills of paramedics during simulated emergency scenarios using previously validated checklists | Medium-fidelity patient simulator | Checklists – rated by two raters (an experienced emergency physician and a psychologist) via video | The assessment of technical and nontechnical skills using simulation-based assessment was shown to be feasible and reliable. There was a significant positive correlation between the two skill types. |
| Waldrop et al (2009) | 12 Medical interns and 44 medical residents | Evaluation of the reliability and validity of simulation-based assessment in the management of intraoperative equipment-related errors in anesthesia | High-fidelity patient simulator | Checklists – rated initially by two raters for the first four participants; as scores were 100% in agreement, only one rater was used for the remaining participants | The assessment was found to be an effective, reliable, and valid method to determine individual performance. |
| Weller et al (2005) | 21 Medical doctors | Evaluation of the psychometric properties of a simulation-based assessment for anesthetists | High-fidelity patient simulator | Global rating scale – rated by four blinded raters via video | The results showed that 12–15 cases were required for acceptable reliability in this assessment modality. |
Abbreviations: GPA, grade point average; ICU, intensive care unit; PGY, postgraduate year; SP, standardized patient.