Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs.

Peter H Harasym, Wayne Woloschuk, Leslie Cunning

Abstract

Physician-patient communication is a clinical skill that can be learned and has a positive impact on patient satisfaction and health outcomes. A concerted effort at all medical schools is now directed at teaching and evaluating this core skill. Student communication skills are often assessed by an Objective Structured Clinical Examination (OSCE). However, it is unknown what sources of error variance are introduced into examinee communication scores by various OSCE components. This study primarily examined the effect different examiners had on the evaluation of students' communication skills assessed at the end of a family medicine clerkship rotation. The communication performance of clinical clerks from the Classes of 2005 and 2006 was assessed using six OSCE stations. Performance was rated at each station using the 28-item Calgary-Cambridge guide. Item Response Theory analysis using a Multifaceted Rasch model was used to partition the various sources of error variance and generate a "true" communication score from which the effects of examiner, case, and item are removed. Variance and reliability of scores were as follows: communication scores (.20 and .87), examiner stringency/leniency (.86 and .91), case (.03 and .96), and item (.86 and .99), respectively. All facet scores were reliable (.87-.99). Examiner variance (.86) was more than four times the examinee variance (.20). About 11% of the clerks' outcome status shifted when "true" rather than observed/raw scores were used. There was large variability in examinee scores due to variation in examiner stringency/leniency behaviors, which may affect pass-fail decisions. Exploring the benefits of examiner training and employing "true" scores generated using Item Response Theory analyses prior to making pass/fail decisions are recommended.
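The Multifaceted Rasch model described in the abstract adjusts each rating for examiner severity, case difficulty, and item difficulty on a shared logit scale. A minimal dichotomous sketch, with purely illustrative parameter names and values (the study itself used the 28-item Calgary-Cambridge ratings and dedicated IRT software), is:

```python
import math

def mfrm_probability(ability, item_difficulty, case_difficulty, examiner_severity):
    """Dichotomous many-facet Rasch model sketch:
    logit = ability - item difficulty - case difficulty - examiner severity.
    All parameters are on the same logit scale (hypothetical values)."""
    logit = ability - item_difficulty - case_difficulty - examiner_severity
    return 1.0 / (1.0 + math.exp(-logit))

# Same examinee (ability 0.5), item, and case, rated by a lenient
# examiner (severity -1.0) versus a stringent one (severity +1.0):
lenient = mfrm_probability(0.5, 0.0, 0.0, -1.0)
stringent = mfrm_probability(0.5, 0.0, 0.0, 1.0)
```

Because the examiner term enters the logit directly, a stringent examiner lowers the expected rating for an identical performance; estimating and removing that term is what yields the "true" scores referred to above.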


Year:  2007        PMID: 17610034     DOI: 10.1007/s10459-007-9068-0

Source DB:  PubMed          Journal:  Adv Health Sci Educ Theory Pract        ISSN: 1382-4996            Impact factor:   3.853


Similar articles (22 in total)

1.  MRCGP CSA: are the examiners biased, favouring their own by sex, ethnicity, and degree source?

Authors:  Mei Ling Denney; Adrian Freeman; Richard Wakeford
Journal:  Br J Gen Pract       Date:  2013-11       Impact factor: 5.386

2.  Making students' marks fair: standard setting, assessment items and post hoc item analysis.

Authors:  Mohsen Tavakol; Gillian A Doody
Journal:  Int J Med Educ       Date:  2015-02-28

3.  Interpreting multisource feedback: online study of consensus and variation among GP appraisers.

Authors:  Christine Wright; John Campbell; Luke McGowan; Martin J Roberts; Di Jelley; Arunangsu Chatterjee
Journal:  Br J Gen Pract       Date:  2016-03-10       Impact factor: 5.386

4.  The use of global rating scales for OSCEs in veterinary medicine.

Authors:  Emma K Read; Catriona Bell; Susan Rhind; Kent G Hecker
Journal:  PLoS One       Date:  2015-03-30       Impact factor: 3.240

5.  Assessing communication quality of consultations in primary care: initial reliability of the Global Consultation Rating Scale, based on the Calgary-Cambridge Guide to the Medical Interview.

Authors:  Jenni Burt; Gary Abel; Natasha Elmore; John Campbell; Martin Roland; John Benson; Jonathan Silverman
Journal:  BMJ Open       Date:  2014-03-06       Impact factor: 2.692

6.  Exploration of a possible relationship between examiner stringency and personality factors in clinical assessments: a pilot study.

Authors:  Yvonne Finn; Peter Cantillon; Gerard Flaherty
Journal:  BMC Med Educ       Date:  2014-12-31       Impact factor: 2.463

7.  Stakeholder perspectives on workplace-based performance assessment: towards a better understanding of assessor behaviour.

Authors:  Laury P J W M de Jonge; Angelique A Timmerman; Marjan J B Govaerts; Jean W M Muris; Arno M M Muijtjens; Anneke W M Kramer; Cees P M van der Vleuten
Journal:  Adv Health Sci Educ Theory Pract       Date:  2017-02-02       Impact factor: 3.853

8.  Detecting rater bias using a person-fit statistic: a Monte Carlo simulation study.

Authors:  André-Sébastien Aubin; Christina St-Onge; Jean-Sébastien Renaud
Journal:  Perspect Med Educ       Date:  2018-04

9.  Comparison of the medical students' perceived self-efficacy and the evaluation of the observers and patients.

Authors:  Jette Ammentorp; Janus Laust Thomsen; Dorte Ejg Jarbøl; René Holst; Anne Lindebo Holm Øvrehus; Poul-Erik Kofoed
Journal:  BMC Med Educ       Date:  2013-04-08       Impact factor: 2.463

10.  [Review] Assessing Communication Skills of Medical Students in Objective Structured Clinical Examinations (OSCE)--A Systematic Review of Rating Scales.

Authors:  Musa Cömert; Jördis Maria Zill; Eva Christalle; Jörg Dirmaier; Martin Härter; Isabelle Scholl
Journal:  PLoS One       Date:  2016-03-31       Impact factor: 3.240

