Setting and maintaining standards in multiple choice examinations: AMEE Guide No. 37.

Raja C Bandaranayake

Abstract

The process of setting a standard when pass/fail decisions have to be made inevitably involves judgment about the point on the test score scale where performance is deemed adequate for the purpose for which the examination is set. As with any process which involves human judgment, setting this standard is likely to include a certain degree of error, which may result in some false positive and false negative decisions. The customary practice of maintaining a constant point on the test score scale at which pass/fail separations are made cannot be justified, as examinations vary in difficulty. The aim of standard-setting procedures is to minimize such errors while accounting for the varying difficulty of examinations. A standard may be norm-referenced, where it is dependent on the performance of the particular group of examinees, or criterion-referenced, where it is based on predetermined criteria, irrespective of examinee performance. Where certification of competence is the primary purpose of an examination, the latter is preferred, as the decision to be made is whether an individual is competent to practise rather than competent compared to peers. Several methods of standard setting have been used, some of which are based solely on predetermined criteria, while others compromise between norm- and criterion-referenced standards. This guide examines the more commonly used methods of standard setting, illustrates the procedure used in each with the help of an example, and discusses the advantages and disadvantages associated with each. The common errors made by judges in the standard-setting process are pointed out, and the manner in which judges should be selected, trained and instructed is emphasized. A method used for equating similar tests set at different times, with the intention of maintaining standards from one examination to the next, is illustrated with an example. Finally, the guide proposes a practical method for arriving at a predetermined standard by the proportionate selection of test items of known relative difficulties in relation to minimally competent examinees.
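The distinctions the abstract draws can be made concrete with a small numerical sketch. The script below, using entirely hypothetical scores and judge ratings (none of the figures come from the guide), contrasts a norm-referenced cut score (dependent on the cohort's performance) with a criterion-referenced one in the Angoff style (judges estimate, per item, the probability that a minimally competent examinee answers correctly), and finishes with a linear mean-sigma equating step of the kind used to keep standards comparable across administrations:

```python
# Illustrative cut-score calculations for a multiple-choice exam.
# All data are hypothetical; this is a sketch of the general ideas,
# not the specific procedures described in the guide.

from statistics import mean, stdev

# --- Norm-referenced standard ---
# The cut score depends on this cohort's performance, e.g. one
# standard deviation below the cohort mean.
scores = [62, 71, 55, 80, 67, 74, 59, 69, 66, 73]  # % scores, 10 examinees
norm_cut = mean(scores) - stdev(scores)
print(f"Norm-referenced cut score: {norm_cut:.1f}%")

# --- Criterion-referenced standard (Angoff-style) ---
# Each judge estimates, per item, the probability that a minimally
# competent examinee answers correctly; the cut score is the mean of
# the per-item averages, expressed here as a percentage of a 5-item test.
angoff_ratings = {
    "judge_1": [0.6, 0.8, 0.5, 0.7, 0.9],
    "judge_2": [0.5, 0.7, 0.6, 0.6, 0.8],
    "judge_3": [0.7, 0.9, 0.5, 0.8, 0.7],
}
item_means = [mean(col) for col in zip(*angoff_ratings.values())]
criterion_cut = 100 * mean(item_means)
print(f"Angoff cut score: {criterion_cut:.1f}%")

# --- Linear (mean-sigma) equating ---
# A score x on a new form is mapped onto the scale of the reference
# form: y = (ref_sd / new_sd) * (x - new_mean) + ref_mean, so that
# the same standard can be applied across administrations.
new_form = [58, 66, 52, 75, 63, 70, 55, 64, 61, 68]

def equate(x: float) -> float:
    slope = stdev(scores) / stdev(new_form)
    return slope * (x - mean(new_form)) + mean(scores)

print(f"60% on the new form equates to {equate(60):.1f}% on the reference form")
```

Note that the norm-referenced cut moves with every cohort, while the Angoff cut is fixed before any examinee sits the paper; equating is what lets a fixed standard survive changes in test difficulty between sittings.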


Year:  2008        PMID: 19117221     DOI: 10.1080/01421590802402247

Source DB:  PubMed          Journal:  Med Teach        ISSN: 0142-159X            Impact factor:   3.650


Similar articles (10 in total)

1.  Development and Validation of a Machine Learning Model for Automated Assessment of Resident Clinical Reasoning Documentation.

Authors:  Verity Schaye; Benedict Guzman; Jesse Burk-Rafel; Marina Marin; Ilan Reinstein; David Kudlowitz; Louis Miller; Jonathan Chun; Yindalon Aphinyanaphongs
Journal:  J Gen Intern Med       Date:  2022-06-16       Impact factor: 6.473

2.  Implementing the Angoff method of standard setting using postgraduate students: Practical and affordable in resource-limited settings.

Authors:  A G Mubuuke; C Mwesigwa; S Kiguli
Journal:  Afr J Health Prof Educ       Date:  2017-12-06

3.  Development of a Clinical Reasoning Documentation Assessment Tool for Resident and Fellow Admission Notes: a Shared Mental Model for Feedback.

Authors:  Verity Schaye; Louis Miller; David Kudlowitz; Jonathan Chun; Jesse Burk-Rafel; Patrick Cocks; Benedict Guzman; Yindalon Aphinyanaphongs; Marina Marin
Journal:  J Gen Intern Med       Date:  2021-05-04       Impact factor: 5.128

4.  The reliability of the pass/fail decision for assessments comprised of multiple components.

Authors:  Andreas Möltner; Sevgi Tımbıl; Jana Jünger
Journal:  GMS Z Med Ausbild       Date:  2015-10-15

5.  The role of the assessment policy in the relation between learning and performance.

Authors:  Rob Kickert; Karen M Stegers-Jager; Marieke Meeuwisse; Peter Prinzie; Lidia R Arends
Journal:  Med Educ       Date:  2017-12-11       Impact factor: 6.251

6.  Determining whether Community Health Workers are 'Deployment Ready' Using Standard Setting.

Authors:  Celia Taylor; Basimenye Nhlema; Emily Wroe; Moses Aron; Henry Makungwa; Elizabeth L Dunbar
Journal:  Ann Glob Health       Date:  2018-11-05       Impact factor: 2.462

7.  Asynchronous online lecture may not be an effective method in teaching cardiovascular physiology during the COVID-19 pandemic.

Authors:  Weerapat Kositanurit; Sarocha Vivatvakin; Kasiphak Kaikaew; Pachara Varachotisate; Chuti Burana; Maneerat Chayanupatkul; Sekh Thanprasertsuk; Danai Wangsaturaka; Onanong Kulaputana
Journal:  BMC Med Educ       Date:  2022-03-09       Impact factor: 2.463

8.  Standard setting Very Short Answer Questions (VSAQs) relative to Single Best Answer Questions (SBAQs): does having access to the answers make a difference?

Authors:  Amir H Sam; Kate R Millar; Rachel Westacott; Colin R Melville; Celia A Brown
Journal:  BMC Med Educ       Date:  2022-08-23       Impact factor: 3.263

9.  Modifying Hofstee standard setting for assessments that vary in difficulty, and to determine boundaries for different levels of achievement.

Authors:  Steven A Burr; John Whittle; Lucy C Fairclough; Lee Coombes; Ian Todd
Journal:  BMC Med Educ       Date:  2016-01-28       Impact factor: 2.463

10.  Measurement precision at the cut score in medical multiple choice exams: Theory matters.

Authors:  Felicitas-Maria Lahner; Stefan Schauber; Andrea Carolin Lörwald; Roger Kropf; Sissel Guttormsen; Martin R Fischer; Sören Huwendiek
Journal:  Perspect Med Educ       Date:  2020-08

北京卡尤迪生物科技股份有限公司 (Coyote Bioscience Beijing Co., Ltd.) © 2022-2023.