BACKGROUND: Instruction in evidence-based medicine (EBM) has been widely incorporated into medical school curricula with little evidence of its effectiveness. Our goal was to create, implement, and validate a computer-based assessment tool that measures medical students' EBM skills. DESCRIPTION: As part of a required objective structured clinical examination, we developed a specific case scenario in which students (a) asked a structured clinical question using a standard framework, (b) generated effective MEDLINE search terms to answer a specific question, and (c) selected the most appropriate of 3 abstracts generated from a search, justifying which best applies to the patient scenario. EVALUATION: Among the 3 blinded raters, there was very good interrater reliability, with 84%, 94%, and 96% agreement on the scoring for each component, respectively (κ = .64, .82, and .91). In addition, students found the station appropriately difficult for their level of training. CONCLUSIONS: This computer-based tool appears to measure several EBM skills independently and combines simple administration and scoring. Its generalizability to other cases and settings requires further study.
Authors: Rachel Stark; Ira M Helenius; Laura M Schimming; Nogusa Takahara; Ian Kronish; Deborah Korenstein Journal: J Gen Intern Med Date: 2007-10-06 Impact factor: 5.128