OBJECTIVE: To determine whether postgraduate students are able to assess the quality of undergraduate medical examinations and to establish whether faculty can use their results to troubleshoot the curriculum in terms of its content and evaluation. SUBJECTS: First- and second-year family medicine postgraduate students. MATERIALS: A randomly generated sample of undergraduate medical examination questions. METHODS: Postgraduate students were given two undergraduate examinations that included questions with an item difficulty (ID) > 0.60. The students answered and then rated each question on a scale of 1 to 7. RESULTS: The percentage of postgraduate students answering each question correctly correlated significantly with the average perceived relevance (Examination 1: r=0.372, P < 0.05; Examination 2: r=0.458, P < 0.05). Questions plotted for average postgraduate/undergraduate performance ratio versus average perceived relevance were also significantly correlated (Examination 1: r=0.462, P < 0.01; Examination 2: r=0.458, P < 0.05). CONCLUSIONS: This study offers a method of validating question appropriateness prior to examination administration. The design has the potential to serve as a model for determining the relevance of a medical curriculum.
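The reported associations are Pearson correlations between per-question performance and mean perceived-relevance ratings. As a minimal illustration of that calculation (the data below are invented for demonstration and are not the study's data), a sketch in plain Python:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-question values: proportion of postgraduates answering
# correctly, and mean perceived-relevance rating on the 1-7 scale.
pct_correct = [0.55, 0.70, 0.82, 0.48, 0.91, 0.63]
relevance = [3.1, 4.5, 5.8, 2.9, 6.2, 4.0]

r = pearson_r(pct_correct, relevance)
```

In practice the significance threshold (P < 0.05 here) would be assessed against the number of questions, e.g. with `scipy.stats.pearsonr`, which returns both r and its two-sided P value.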
Authors: Bunmi S Malau-Aduli; Adrian Ys Lee; Nick Cooling; Marianne Catchpole; Matthew Jose; Richard Turner
Journal: BMC Med Educ
Date: 2013-10-08
Impact factor: 2.463