
Comparing a script concordance examination to a multiple-choice examination on a core internal medicine clerkship.

William Kelly, Steven Durning, Gerald Denton.

Abstract

BACKGROUND: Script concordance (SC) questions present a learner with a brief clinical scenario and then ask whether additional information makes a given hypothesis more or less likely; answers are scored against a reference panel of experts. SC questions are designed to reflect a learner's clinical reasoning.
PURPOSE: To compare the reliability, validity, and learner satisfaction of a three-option modified SC examination and a multiple-choice question (MCQ) examination among medical students during a 3rd-year internal medicine clerkship; to compare the reliability of, and learner satisfaction with, the SC examination between medical students and a convenience sample of house staff; and to compare learner satisfaction with SC between 1st- and 4th-quarter medical students.
METHODS: Using a prospective cohort design, we compared the reliability of 20-item SC and MCQ examinations, sequentially administered on the same day. To assess validity, scores were compared to scores on the National Board of Medical Examiners (NBME) subject examination in medicine and to a clinical performance measure. The SC and MCQ examinations were also administered to a convenience sample of internal medicine house staff. Medical students and house staff were anonymously surveyed regarding their satisfaction with the examinations.
RESULTS: A total of 163 students completed the examinations. Among students, the initial reliability of the SC examination was half that of the MCQ examination (KR20 0.19 vs. 0.41), but among house staff (n = 15), reliability was the same (KR20 = 0.52 for both examinations). SC performance correlated with student clinical performance, whereas MCQ performance did not (r = .22, p = .005 vs. r = .11, p = .159). Students reported that SC questions were no more difficult than MCQ questions and were answered more quickly. Both examinations were considered easier than the NBME subject examination, and all three were considered equally fair. More students preferred MCQ over SC (55.8% vs. 18.0%), whereas house staff preferred SC (46% vs. 23%; p = .03).
CONCLUSIONS: This SC examination was feasible and showed greater validity than the MCQ examination, given its better correlation with clinical performance, despite being initially less reliable and less preferred by students. The SC examination was more reliable and was preferred when administered to house staff.

Year:  2012        PMID: 22775780     DOI: 10.1080/10401334.2012.692239

Source DB:  PubMed          Journal:  Teach Learn Med        ISSN: 1040-1334            Impact factor:   2.414


  4 in total

1.  Accuracy of script concordance tests in fourth-year medical students.

Authors:  Saad Nseir; Ahmed Elkalioubie; Philippe Deruelle; Dominique Lacroix; Didier Gosset
Journal:  Int J Med Educ       Date:  2017-02-25

2.  Script concordance test acceptability and utility for assessing medical students' clinical reasoning: a user's survey and an institutional prospective evaluation of students' scores.

Authors:  Jean-Daniel Kün-Darbois; Cédric Annweiler; Nicolas Lerolle; Souhil Lebdai
Journal:  BMC Med Educ       Date:  2022-04-13       Impact factor: 2.463

3.  Script Concordance Tests for Formative Clinical Reasoning and Problem-Solving Assessment in General Pediatrics.

Authors:  Pranshu Bhardwaj; Erik W Black; Joseph C Fantone; Meghan Lopez; Maria Kelly
Journal:  MedEdPORTAL       Date:  2022-09-20

4. [Review]  Evaluating the Clinical Reasoning of Student Health Professionals in Placement and Simulation Settings: A Systematic Review.

Authors:  Jennie Brentnall; Debbie Thackray; Belinda Judd
Journal:  Int J Environ Res Public Health       Date:  2022-01-14       Impact factor: 3.390

