
Using Automatic Item Generation to Create Solutions and Rationales for Computerized Formative Testing.

Mark J Gierl, Hollis Lai

Abstract

Computerized testing provides many benefits to support formative assessment. However, the advent of computerized formative testing has also raised formidable new challenges, particularly in the area of item development. Large numbers of diverse, high-quality test items are required because items are continuously administered to students. Hence, hundreds of items are needed to develop the banks necessary for computerized formative testing. One promising approach that may be used to address this test development challenge is automatic item generation. Automatic item generation is a relatively new but rapidly evolving research area where cognitive and psychometric modeling practices are used to produce items with the aid of computer technology. The purpose of this study is to describe a new method for generating both the items and the rationales required to solve the items to produce the required feedback for computerized formative testing. The method for rationale generation is demonstrated and evaluated in the medical education domain.
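The item-model approach behind automatic item generation can be sketched in a few lines: a stem template with typed variables is instantiated over every combination of values, and a rationale is produced from the same variable assignment. The model, variable values, and rationale text below are illustrative assumptions, not the actual medical-education models used in the study.

```python
from itertools import product

# Illustrative item model: a stem template plus variable value sets.
ITEM_MODEL = {
    "stem": ("A patient presents with {symptom} and {finding}. "
             "What is the most likely diagnosis?"),
    "variables": {
        "symptom": ["fever", "chest pain"],
        "finding": ["elevated white cell count", "ST elevation on ECG"],
    },
}

def generate_items(model):
    """Instantiate the stem for every combination of variable values."""
    names = list(model["variables"])
    items = []
    for values in product(*(model["variables"][n] for n in names)):
        assignment = dict(zip(names, values))
        stem = model["stem"].format(**assignment)
        # The rationale is generated from the same assignment, mirroring
        # the paper's idea of producing items and rationales together.
        rationale = "Key features: " + ", ".join(
            f"{k} = {v}" for k, v in assignment.items())
        items.append({"stem": stem, "rationale": rationale})
    return items

items = generate_items(ITEM_MODEL)
print(len(items))  # 2 x 2 variable grid yields 4 generated items
```

In practice an item model also carries distractor lists and constraints that rule out implausible variable combinations; this sketch shows only the combinatorial core that lets a single model yield many items.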

Keywords:  automatic item generation; item development; technology and assessment; test development

Year:  2017        PMID: 29881111      PMCID: PMC5978592          DOI: 10.1177/0146621617726788

Source DB:  PubMed          Journal:  Appl Psychol Meas        ISSN: 0146-6216


  5 in total

1.  Using Automatic Item Generation to Improve the Quality of MCQ Distractors.

Authors:  Hollis Lai; Mark J Gierl; Claire Touchie; Debra Pugh; André-Philippe Boulais; André De Champlain
Journal:  Teach Learn Med       Date:  2016       Impact factor: 2.414

2.  Using automatic item generation to create multiple-choice test items.

Authors:  Mark J Gierl; Hollis Lai; Simon R Turner
Journal:  Med Educ       Date:  2012-08       Impact factor: 6.251

3.  Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

Authors:  Hollis Lai; Mark J Gierl; B Ellen Byrne; Andrew I Spielman; David M Waldschmidt
Journal:  J Dent Educ       Date:  2016-03       Impact factor: 2.264

4.  Item modelling procedure for constructing content-equivalent multiple choice questions.

Authors:  A LaDuca; W I Staples; B Templeton; G B Holzman
Journal:  Med Educ       Date:  1986-01       Impact factor: 6.251

5.  The Medical Council of Canada's key features project: a more valid written examination of clinical decision-making skills.

Authors:  G Page; G Bordage
Journal:  Acad Med       Date:  1995-02       Impact factor: 6.893

  3 in total

1.  Building an intelligent recommendation system for personalized test scheduling in computerized assessments: A reinforcement learning approach.

Authors:  Jinnie Shin; Okan Bulut
Journal:  Behav Res Methods       Date:  2021-06-15

2.  [Review] Feasibility assurance: a review of automatic item generation in medical assessment.

Authors:  Filipe Falcão; Patrício Costa; José M Pêgo
Journal:  Adv Health Sci Educ Theory Pract       Date:  2022-03-01       Impact factor: 3.629

3.  [Review] Challenges and Future Directions of Big Data and Artificial Intelligence in Education.

Authors:  Hui Luan; Peter Geczy; Hollis Lai; Janice Gobert; Stephen J H Yang; Hiroaki Ogata; Jacky Baltes; Rodrigo Guerra; Ping Li; Chin-Chung Tsai
Journal:  Front Psychol       Date:  2020-10-19
