
Item analysis of multiple choice questions: A quality assurance test for an assessment tool.

Dharmendra Kumar, Raksha Jaipurkar, Atul Shekhar, Gaurav Sikri, V Srinivas.

Abstract

BACKGROUND: The item analysis of multiple choice questions (MCQs) is an essential tool that can provide input on the validity and reliability of test items. It helps to identify items that need to be revised or discarded, thus building a quality MCQ bank.
METHODS: The study focussed on item analysis of 90 MCQs from three tests conducted for 150 first-year Bachelor of Medicine and Bachelor of Surgery (MBBS) physiology students. The item analysis examined the difficulty index (DIF I) and discrimination index (DI) along with distractor effectiveness (DE); a computational sketch of these indices is given after the abstract. Statistical analysis was performed using MS Excel 2010 and SPSS, version 20.0.
RESULTS: Of the 90 MCQs, the majority, 74 (82%), had a good/acceptable level of difficulty with a mean DIF I of 55.32 ± 7.4 (mean ± SD), whereas seven (8%) were too difficult and nine (10%) were too easy. A total of 72 (80%) items had an excellent to acceptable DI and 18 (20%) had a poor DI, with an overall mean DI of 0.31 ± 0.12. There was a significant but weak correlation between DIF I and DI (r = 0.140, p < .0001). The mean DE was 32.35 ± 31.3, with 73% of distractors being functional overall. Reliability of the test items was good, with a Cronbach's alpha of 0.85 and a Kuder-Richardson Formula 20 (KR-20) coefficient of 0.71. The standard error of measurement was 1.22.
CONCLUSION: Our study helped teachers identify good and ideal MCQs that can become part of the question bank for future use, as well as MCQs that needed revision. We recommend that item analysis be performed for all MCQ-based assessments to determine the validity and reliability of the assessment.
© 2020 Director General, Armed Forces Medical Services. Published by Elsevier, a division of RELX India Pvt. Ltd.
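
Computational sketch: the indices reported above can be derived from a scored response matrix, as in the minimal Python example below. This is an illustrative sketch rather than the authors' code; the 27% upper/lower group split used for DI, the 5% threshold for a functional distractor, and all function and variable names are assumptions made here for demonstration.

import numpy as np

def item_analysis(scores):
    """scores: 2-D array, rows = students, columns = items (1 = correct, 0 = wrong)."""
    scores = np.asarray(scores, dtype=float)
    n_students, n_items = scores.shape

    # Difficulty index (DIF I): percentage of students answering each item correctly.
    dif_i = scores.mean(axis=0) * 100

    # Discrimination index (DI): (H - L) / n, using the top and bottom 27% of
    # students ranked by total score (H, L = correct counts in the two groups).
    order = np.argsort(scores.sum(axis=1))
    n_group = max(1, int(round(0.27 * n_students)))
    low, high = scores[order[:n_group]], scores[order[-n_group:]]
    di = (high.sum(axis=0) - low.sum(axis=0)) / n_group

    # Kuder-Richardson Formula 20 (KR-20): reliability for dichotomously scored items.
    p = scores.mean(axis=0)                    # proportion correct per item
    var_total = scores.sum(axis=1).var(ddof=1) # variance of total scores
    kr20 = (n_items / (n_items - 1)) * (1 - (p * (1 - p)).sum() / var_total)

    # Standard error of measurement (SEM) = SD of total scores * sqrt(1 - reliability).
    sem = np.sqrt(var_total) * np.sqrt(1 - kr20)
    return dif_i, di, kr20, sem

def distractor_effectiveness(choices, key):
    """choices: option chosen by each student for one item; key: the correct option.
    A distractor is conventionally treated as functional when >5% of examinees pick it."""
    choices = np.asarray(choices)
    n = len(choices)
    return {opt: (choices == opt).sum() / n > 0.05
            for opt in np.unique(choices) if opt != key}

Calling item_analysis on a 150 × 90 binary score matrix would return per-item DIF I and DI values plus overall KR-20 and SEM, the kind of summary reported in the results; the 5% rule for functional distractors follows common item-analysis convention rather than anything stated in the abstract.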


Keywords:  Difficulty index; Discrimination index; Distractor effectiveness; Item analysis; MCQs

Year:  2021        PMID: 33612937      PMCID: PMC7873707          DOI: 10.1016/j.mjafi.2020.11.007

Source DB:  PubMed          Journal:  Med J Armed Forces India        ISSN: 0377-1237


