Sören Huwendiek1, Friedrich Reichert2, Cecilia Duncker3, Bas A de Leng4, Cees P M van der Vleuten5, Arno M M Muijtjens5, Hans-Martin Bosse6, Martin Haag7, Georg F Hoffmann8, Burkhard Tönshoff8, Diana Dolmans5. 1. Department of Assessment and Evaluation, Institute of Medical Education Bern, University of Bern, Bern, Switzerland. 2. Department of Pediatric Cardiology and Intensive Care Medicine, Klinikum Stuttgart, Stuttgart, Germany. 3. Clinic for Child and Adolescent Psychiatry, University Hospital Kiel, Kiel, Germany. 4. Institute of Medical Education (IfAS), Faculty of Medicine, University of Muenster, Münster, Germany. 5. Department of Educational Development and Research, Maastricht University, Maastricht, the Netherlands. 6. Clinic for General Paediatrics, Neonatology and Paediatric Cardiology, University Children's Hospital Düsseldorf, Düsseldorf, Germany. 7. GECKO Institute of Medicine, Informatics & Economics, Heilbronn University, Heilbronn, Germany. 8. Clinic I, University Children's Hospital Heidelberg, Heidelberg, Germany.
Abstract
BACKGROUND: It remains unclear which item format best suits the assessment of clinical reasoning: context-rich single best answer questions (crSBAs) or key-feature problems (KFPs). This study compared KFPs and crSBAs with respect to students' acceptance, their educational impact, and their psychometric characteristics when used in a summative end-of-clinical-clerkship pediatric exam. METHODS: Fifth-year medical students (n = 377) took a computer-based exam that included 6-9 KFPs and 9-20 crSBAs assessing their clinical reasoning skills, in addition to an objective structured clinical examination (OSCE) assessing their clinical skills. Each KFP consisted of a case vignette and three key features presented in a "long-menu" question format. We explored students' perceptions of the KFPs and crSBAs in eight focus groups and analyzed statistical data from 11 exams. RESULTS: Compared to crSBAs, KFPs were perceived as more realistic and more difficult, provided a greater stimulus for the intense study of clinical reasoning, and were generally well accepted. The statistical analysis revealed no difference in difficulty, but KFPs proved more reliable and more efficient than crSBAs. The correlation between the two formats was high, and KFPs correlated more closely with the OSCE score. CONCLUSIONS: KFPs in long-menu format appear to have a positive educational effect without psychometric drawbacks.