
Relatively speaking: contrast effects influence assessors' scores and narrative feedback.

Peter Yeates, Jenna Cardell, Gerard Byrne, Kevin W Eva.

Abstract

CONTEXT: In prior research, the scores assessors assign can be biased away from the standard of preceding performances (i.e. 'contrast effects' occur).
OBJECTIVES: This study examines the mechanism and robustness of these findings to advance understanding of assessor cognition. We test the influence of the immediately preceding performance relative to that of a series of prior performances. Further, we examine whether assessors' narrative comments are similarly influenced by contrast effects.
METHODS: Clinicians (n = 61) were randomised to three groups in a blinded, Internet-based experiment. Participants viewed identical videos of good, borderline and poor performances by first-year doctors in varied orders. They provided scores and written feedback after each video. Narrative comments were blindly content-analysed to generate measures of valence and content. Variability of narrative comments and scores was compared between groups.
RESULTS: Comparisons indicated contrast effects after a single performance. When a good performance was preceded by a poor performance, ratings were higher (mean 5.01, 95% confidence interval [CI] 4.79-5.24) than when observation of the good performance was unbiased (mean 4.36, 95% CI 4.14-4.60; p < 0.05, d = 1.3). Similarly, borderline performance was rated lower when preceded by good performance (mean 2.96, 95% CI 2.56-3.37) than when viewed without preceding bias (mean 3.55, 95% CI 3.17-3.92; p < 0.05, d = 0.7). The series of ratings participants assigned suggested that the magnitude of contrast effects is determined by an averaging of recent experiences. The valence (but not content) of narrative comments showed contrast effects similar to those found in numerical scores.
CONCLUSIONS: These findings are consistent with research from behavioural economics and psychology that suggests judgement tends to be relative in nature. Observing that the valence of narrative comments is similarly influenced suggests these effects represent more than difficulty in translating impressions into a number. The extent to which such factors impact upon assessment in practice remains to be determined as the influence is likely to depend on context.
© 2015 John Wiley & Sons Ltd.
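The results describe contrast effects whose magnitude appears to track an average of recently observed performances. As a purely hypothetical sketch (not the authors' statistical model), this is what such a mechanism looks like: each rating is pushed away from the running mean of the performances just seen, so a good case after poor cases is scored higher than the same case viewed without prior bias. The function name, quality scale, and `weight` parameter are all illustrative assumptions.

```python
# Hypothetical illustration of a contrast effect driven by averaging of
# recent experiences (assumed model, not the study's analysis).
def contrast_adjusted_rating(true_quality, recent_qualities, weight=0.5):
    """Return a rating biased away from the mean of recently seen performances.

    true_quality     -- the performance's quality on some rating scale
    recent_qualities -- qualities of the performances viewed beforehand
    weight           -- assumed strength of the contrast effect
    """
    if not recent_qualities:
        # No preceding performances: the observation is unbiased.
        return true_quality
    reference = sum(recent_qualities) / len(recent_qualities)
    # Push the rating away from the reference standard: the same case looks
    # better after poor cases and worse after good ones.
    return true_quality + weight * (true_quality - reference)

# A good performance (quality 4.4) rated after two poor performances
# (quality 2.0) receives a higher score than in unbiased viewing.
after_poor = contrast_adjusted_rating(4.4, [2.0, 2.0])
unbiased = contrast_adjusted_rating(4.4, [])
```

Because the reference is a mean over all recent performances rather than the single preceding one, the sketch also mirrors the paper's finding that a series of prior experiences, not just the immediately preceding performance, shapes the effect's magnitude.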


Year: 2015        PMID: 26296407        DOI: 10.1111/medu.12777

Source DB: PubMed        Journal: Med Educ        ISSN: 0308-0110        Impact factor: 6.251


Related articles: 5 in total

1.  Advancing Our Understanding of Narrative Comments Generated by Direct Observation Tools: Lessons From the Psychopharmacotherapy-Structured Clinical Observation.

Authors:  John Q Young; Rebekah Sugarman; Eric Holmboe; Patricia S O'Sullivan
Journal:  J Grad Med Educ       Date:  2019-10

2.  Standardized examinees: development of a new tool to evaluate factors influencing OSCE scores and to train examiners.

Authors:  Petra Zimmermann; Martina Kadmon
Journal:  GMS J Med Educ       Date:  2020-06-15

3.  Determining influence, interaction and causality of contrast and sequence effects in objective structured clinical exams.

Authors:  Peter Yeates; Alice Moult; Natalie Cope; Gareth McCray; Richard Fuller; Robert McKinley
Journal:  Med Educ       Date:  2022-01-11       Impact factor: 7.647

4.  Does faculty development influence the quality of in-training evaluation reports in pharmacy?

Authors:  Kerry Wilbur
Journal:  BMC Med Educ       Date:  2017-11-21       Impact factor: 2.463

5.  A randomised trial of the influence of racial stereotype bias on examiners' scores, feedback and recollections in undergraduate clinical exams.

Authors:  Peter Yeates; Katherine Woolf; Emyr Benbow; Ben Davies; Mairhead Boohan; Kevin Eva
Journal:  BMC Med       Date:  2017-10-25       Impact factor: 8.775

