
Improving the validity of script concordance testing by optimising and balancing items.

Michael Sh Wan, Elina Tor, Judith Nicky Hudson.

Abstract

BACKGROUND: A script concordance test (SCT) is a modality for assessing clinical reasoning. Concerns have been raised about a plausible threat to the validity of SCT scores: students may deliberately avoid the extreme answer options in order to obtain higher scores. The aims of this study were, firstly, to investigate whether students' avoidance of the extreme answer options could result in higher scores and, secondly, to determine whether a 'balanced approach' to careful construction of SCT items (including extreme as well as median options as modal responses) would improve the validity of an SCT.
METHODS: Using the paired-sample t-test, the actual average student scores for 10 SCT papers from 2012-2016 were compared with simulated scores. The latter were generated by recoding all '-2' responses to '-1' and all '+2' responses to '+1', for the whole cohort and for the bottom 10% of the cohort (simulation 1), and by scoring as if all students had chosen '0' for every response (simulation 2). The actual and simulated average scores in 2012 (before the 'balanced approach') were compared with those from 2013-2016, when papers had a good balance of modal responses from the expert reference panel.
RESULTS: In 2012, a score increase was seen in simulation 1 in the third-year cohort, from 50.2% to 55.6% (t(10) = 4.818; p = 0.001). Since 2013, with the 'balanced approach', the actual SCT scores (57.4%) were significantly higher than the scores in both simulation 1 and simulation 2 (46.7% and 23.9%, respectively).
CONCLUSIONS: When constructing SCT examinations, in addition to rigorous pre-examination optimisation, it is desirable to achieve a balance between items that attract extreme responses and those that attract median responses. This could mitigate the validity threat to SCT scores, especially for low-performing students, who have previously been shown to select only median responses and to avoid the extremes.
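The two score simulations described in the METHODS can be sketched in a few lines. The recoding rules (simulation 1: '-2'→'-1', '+2'→'+1'; simulation 2: all responses set to '0') come directly from the abstract; the `item_credit` scoring function is a simplified, hypothetical stand-in for SCT aggregate scoring (credit proportional to how many panel members chose the response, relative to the modal response) and is not taken from the paper itself.

```python
from collections import Counter

def simulate_1(responses):
    """Simulation 1: recode extreme responses to the adjacent moderate
    option (-2 -> -1, +2 -> +1), mimicking students who avoid extremes."""
    recode = {-2: -1, 2: 1}
    return [recode.get(r, r) for r in responses]

def simulate_2(responses):
    """Simulation 2: score as if every student chose the neutral option '0'."""
    return [0 for _ in responses]

def item_credit(response, panel_responses):
    """Hypothetical aggregate-scoring stand-in: credit for a response is the
    number of panel members who chose it, divided by the count of the modal
    panel response (so the modal response earns full credit)."""
    counts = Counter(panel_responses)
    return counts[response] / max(counts.values())
```

For example, a student response sheet of `[-2, 0, 2]` becomes `[-1, 0, 1]` under simulation 1 and `[0, 0, 0]` under simulation 2; each recoded response would then be scored against the panel distribution for its item.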
© 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

Year:  2018        PMID: 29318646     DOI: 10.1111/medu.13495

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


  5 in total

1.  Assessing clinical reasoning in airway related cases among anesthesiology fellow residents using Script Concordance Test (SCT).

Authors:  Andy Omega; Andi Ade Wijaya Ramlan; Ratna Farida Soenarto; Aldy Heriwardito; Adhrie Sugiarto
Journal:  Med Educ Online       Date:  2022-12

2.  Construct validity of script concordance testing: progression of scores from novices to experienced clinicians.

Authors:  Michael Siu Hong Wan; Elina Tor; Judith N Hudson
Journal:  Int J Med Educ       Date:  2019-09-20

3.  Impact of panelists' experience on script concordance test scores of medical students.

Authors:  Olivier Peyrony; Alice Hutin; Jennifer Truchot; Raphaël Borie; David Calvet; Adrien Albaladejo; Yousrah Baadj; Pierre-Emmanuel Cailleaux; Martin Flamant; Clémence Martin; Jonathan Messika; Alexandre Meunier; Mariana Mirabel; Victoria Tea; Xavier Treton; Sylvie Chevret; David Lebeaux; Damien Roux
Journal:  BMC Med Educ       Date:  2020-09-17       Impact factor: 2.463

4.  Examining response process validity of script concordance testing: a think-aloud approach.

Authors:  Michael Siu Hong Wan; Elina Tor; Judith N Hudson
Journal:  Int J Med Educ       Date:  2020-06-24

5.  (Review) Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review.

Authors:  Jennie Brentnall; Debbie Thackray; Belinda Judd
Journal:  Int J Environ Res Public Health       Date:  2022-01-14       Impact factor: 3.390

