
Validity Evidence and Scoring Guidelines for Standardized Patient Encounters and Patient Notes From a Multisite Study of Clinical Performance Examinations in Seven Medical Schools.

Yoon Soo Park, Abbas Hyderi, Nancy Heine, Win May, Andrew Nevins, Ming Lee, Georges Bordage, Rachel Yudkowsky.

Abstract

PURPOSE: To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs).
METHOD: Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores.
RESULTS: Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems.
CONCLUSIONS: This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.

Year:  2017        PMID: 29065018     DOI: 10.1097/ACM.0000000000001918

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


  2 in total

1.  Comparing Students' Clinical Grades to Scores on a Standardized Patient Note-Writing Task.

Authors:  Benjamin D Gallagher; Saman Nematollahi; Henry Park; Salila Kurra
Journal:  J Gen Intern Med       Date:  2020-07-13       Impact factor: 5.128

2.  Evaluator Agreement in Medical Student Assessment Across a Multi-Campus Medical School During a Standardized Patient Encounter.

Authors:  Sherri A Braksick; Yunxia Wang; Suzanne L Hunt; William Cathcart-Rake; Jon P Schrage; Gary S Gronseth
Journal:  Med Sci Educ       Date:  2020-02-05
