| Literature DB >> 20046456 |
Alice A Edler, Ruth G Fanning, Michael I Chen, Rebecca Claure, Dondee Almazan, Brian Struyk, Samuel C Seiden.
Abstract
High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill in patient simulation, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We systematically reviewed anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies had been published since 2000, and more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations and has shown significant improvement in reporting accuracy. However, the assessment of predictive validity has been low across the majority of studies. The usability and practicality of testing occasions and tools was reported only anecdotally. To comply more completely with the gold standards for PA design, both the shared experience of experts and the recognition of test construction standards, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool, are required.
Keywords: Anesthesiology; High-Fidelity Patient Simulation; Patient Simulation; Performance Assessment; Systematic Review; Test Theory
Year: 2009 PMID: 20046456 PMCID: PMC2796725 DOI: 10.3352/jeehp.2009.6.3
Source DB: PubMed Journal: J Educ Eval Health Prof ISSN: 1975-5937
Inclusion and exclusion criteria
Reliability estimations: methods of agreement/reliability estimations and their uses
Fig. 1 Methods of item selection.
Fig. 2 Test refinement methods.
Fig. 3 Score reliability measures.