
Examiners' decision-making processes in observation-based clinical examinations.

Bunmi S Malau-Aduli1, Richard B Hays1, Karen D'Souza2, Amy M Smith1, Karina Jones1, Richard Turner3, Lizzi Shires3, Jane Smith4, Shannon Saad5, Cassandra Richmond5, Antonio Celenza6, Tarun Sen Gupta1.   

Abstract

BACKGROUND: Objective structured clinical examinations (OSCEs) are commonly used to assess the clinical skills of health professional students. Examiner judgement is one acknowledged source of variation in candidate marks. This paper reports an exploration of examiner decision making to better characterise the cognitive processes and workload associated with making judgements of clinical performance in exit-level OSCEs.
METHODS: Fifty-five examiners for exit-level OSCEs at five Australian medical schools completed a NASA Task Load Index (TLX) measure of cognitive load and participated in focus group interviews immediately after the OSCE session. Discussions focused on how decisions were made for borderline and clear pass candidates. Interviews were transcribed, coded and thematically analysed. NASA TLX results were quantitatively analysed.
RESULTS: Examiners self-reported higher cognitive workload when assessing a borderline candidate than when assessing a clear pass candidate. Thematic analysis revealed five major themes in examiners' marking of candidate performance in an OSCE: (a) use of marking criteria as a source of reassurance; (b) difficulty adhering to the marking sheet under certain conditions; (c) demeanour of candidates; (d) patient safety; and (e) calibration against a mental construct of the 'mythical [prototypical] intern'. Mental demand in particular was markedly higher for borderline than for clear pass candidates.
CONCLUSIONS: Judging candidate performance is a complex, cognitively demanding task, particularly when performance is of borderline or lower standard. At programme exit level, examiners intuitively rate candidates against a construct of a prototypical graduate when marking criteria appear not to describe both what a passing candidate should demonstrate and how they should complete clinical tasks. This construct should be shared, agreed upon and aligned with marking criteria to best guide examiner training and calibration. Achieving this integration may improve the accuracy and consistency of examiner judgements and reduce cognitive workload.
© 2020 John Wiley & Sons Ltd and The Association for the Study of Medical Education.


Year:  2020        PMID: 32810334     DOI: 10.1111/medu.14357

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


Related citations (3 in total):

1.  "Could You Work in My Team?": Exploring How Professional Clinical Role Expectations Influence Decision-Making of Assessors During Exit-Level Medical School OSCEs.

Authors:  Bunmi S Malau-Aduli; Richard B Hays; Karen D'Souza; Karina Jones; Shannon Saad; Antonio Celenza; Richard Turner; Jane Smith; Helena Ward; Michelle Schlipalius; Rinki Murphy; Nidhi Garg
Journal:  Front Med (Lausanne)       Date:  2022-05-06

2.  Pass/fail decisions and standards: the impact of differential examiner stringency on OSCE outcomes.

Authors:  Matt Homer
Journal:  Adv Health Sci Educ Theory Pract       Date:  2022-03-01       Impact factor: 3.629

3.  Patient involvement in assessment: How useful is it?

Authors:  Bunmi S Malau-Aduli
Journal:  Med Educ       Date:  2022-03-30       Impact factor: 7.647

