
Using automatic item generation to create multiple-choice test items.

Mark J Gierl, Hollis Lai, Simon R Turner.

Abstract

CONTEXT: Many tests of medical knowledge, from the undergraduate level to the level of certification and licensure, contain multiple-choice items. Although these are efficient in measuring examinees' knowledge and skills across diverse content areas, multiple-choice items are time-consuming and expensive to create. Changes in student assessment brought about by new forms of computer-based testing have created the demand for large numbers of multiple-choice items. Our current approaches to item development cannot meet this demand.
METHODS: We present a methodology for developing multiple-choice items based on automatic item generation (AIG) concepts and procedures. We describe a three-stage approach to AIG and we illustrate this approach by generating multiple-choice items for a medical licensure test in the content area of surgery.
RESULTS: To generate multiple-choice items, our method requires a three-stage process. Firstly, a cognitive model is created by content specialists. Secondly, item models are developed using the content from the cognitive model. Thirdly, items are generated from the item models using computer software. Using this methodology, we generated 1248 multiple-choice items from one item model.
CONCLUSIONS: Automatic item generation is a process in which models and computer technology are used to generate test items. With our method, content specialists identify and structure the content for the test items, and computer technology systematically combines that content to generate new test items. By combining these two contributions, large numbers of items can be generated automatically.
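The three-stage process described in the abstract lends itself to a brief illustration of the final, software-driven stage. The following Python sketch is hypothetical: the item model, variable names, and values are invented for illustration and are not the surgery content used in the paper. It shows only the core mechanic, systematically combining the values of an item model's variables so that each combination yields one generated item.

    import itertools

    # Hypothetical item model: a stem containing substitutable variables.
    # All names and values below are illustrative assumptions, not the
    # surgical content from the study.
    STEM = ("A {age}-year-old {sex} presents with {symptom}. "
            "Which one of the following is the most appropriate next step?")

    VARIABLES = {
        "age": ["25", "45", "70"],
        "sex": ["man", "woman"],
        "symptom": ["acute abdominal pain", "painless jaundice"],
    }

    def generate_items(stem, variables):
        """Yield one item per combination of variable values
        (the Cartesian product of all variable ranges)."""
        keys = list(variables)
        for values in itertools.product(*(variables[k] for k in keys)):
            yield stem.format(**dict(zip(keys, values)))

    if __name__ == "__main__":
        items = list(generate_items(STEM, VARIABLES))
        print(len(items), "items generated")  # 3 x 2 x 2 = 12 items
        print(items[0])

In the full method, the cognitive model built by content specialists would constrain which combinations are clinically plausible and would supply the correct answer and distractors for each generated stem; it is this combinatorial expansion that allowed a single item model to yield 1248 items.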

MeSH:

Year:  2012        PMID: 22803753     DOI: 10.1111/j.1365-2923.2012.04289.x

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


Similar articles: 7 in total

1.  Using Automatic Item Generation to Create Solutions and Rationales for Computerized Formative Testing.

Authors:  Mark J Gierl; Hollis Lai
Journal:  Appl Psychol Meas       Date:  2017-08-26

2.  Pattern recognition as a concept for multiple-choice questions in a national licensing exam.

Authors:  Tilo Freiwald; Madjid Salimi; Ehsan Khaljani; Sigrid Harendza
Journal:  BMC Med Educ       Date:  2014-11-14       Impact factor: 2.463

3.  Re-using questions in classroom-based assessment: An exploratory study at the undergraduate medical education level.

Authors:  Sébastien Xavier Joncas; Christina St-Onge; Sylvie Bourque; Paul Farand
Journal:  Perspect Med Educ       Date:  2018-12

4.  What Technology Can and Cannot Do to Support Assessment of Non-cognitive Skills.

Authors:  Vanessa R Simmering; Lu Ou; Maria Bolsinova
Journal:  Front Psychol       Date:  2019-09-25

5.  Adapting cognitive diagnosis computerized adaptive testing item selection rules to traditional item response theory.

Authors:  Miguel A Sorrel; Juan R Barrada; Jimmy de la Torre; Francisco José Abad
Journal:  PLoS One       Date:  2020-01-10       Impact factor: 3.240

6.  Feasibility assurance: a review of automatic item generation in medical assessment. [Review]

Authors:  Filipe Falcão; Patrício Costa; José M Pêgo
Journal:  Adv Health Sci Educ Theory Pract       Date:  2022-03-01       Impact factor: 3.629

7.  Analysis of question text properties for equality monitoring.

Authors:  Daniel Zahra; Steven A Burr
Journal:  Perspect Med Educ       Date:  2018-12
