
The effect of a Structured Question Grid on the validity and perceived fairness of a medical long case assessment.

L G Olson, J Coughlan, I Rolfe, M J Hensley.

Abstract

PROBLEM: A perception that the reliability of our oral assessments of clinical competence was undermined by a lack of consistency in questioning.
DESIGN: Parallel group controlled trial of a Structured Question Grid for use in clinical assessments. The Structured Question Grid required assessors to see the patient personally in advance of the student and to write down for each case the points they wished to examine. The Structured Question Grid limited assessors to two questions on each point, one designated a pass question and one at a higher level. Three basic science and three clinical reasoning issues were required, so that a total of 12 questions was allowed.
SETTING: Small (70 students/year) undergraduate medical school with an integrated, problem-based curriculum.
SUBJECTS: Sixty-seven students in the fourth year of a 5-year course were assessed, each seeing one patient and being examined by a pair of assessors. Assessor pairs were allocated to use the Structured Question Grid or to assess according to their usual practice.
RESULTS: After the assessment but before being informed of the result the students completed a questionnaire on their experience and gave their performance a score between 0 and 100. The questions asked were based on focus group discussions with a previous student cohort, and concerned principally the perceived fairness and subjective validity of the assessment. The assessors independently completed a similar questionnaire, gave the student's performance a score between 0 and 100, and assigned an overall pass/fail grade.
CONCLUSIONS: No difference was detected in students' or assessors' views of the fairness of the assessment between assessors who used the Structured Question Grid and those who did not. Students whose assessors used the Structured Question Grid considered the assessment less representative of their ability. No difference was detected in the chance of students being assessed as failing, or in the likelihood of a discrepancy between students' and assessors' pass/fail ratings.

Year:  2000        PMID: 10607279     DOI: 10.1046/j.1365-2923.2000.00465.x

Source DB:  PubMed          Journal:  Med Educ        ISSN: 0308-0110            Impact factor:   6.251


Related articles (5 in total):

1.  What is being assessed in the MRCGP oral examination? A qualitative study.

Authors:  Robin G Simpson; Karen D Ballard
Journal:  Br J Gen Pract       Date:  2005-06       Impact factor: 5.386

2.  Assessment methods in undergraduate medical education.

Authors:  Nadia M Al-Wardy
Journal:  Sultan Qaboos Univ Med J       Date:  2010-07-19

3.  Perceived educational impact of the medical student long case: a qualitative study.

Authors:  Corinne Tey; Neville Chiavaroli; Anna Ryan
Journal:  BMC Med Educ       Date:  2020-08-07       Impact factor: 2.463

4.  Evaluation of a best practice approach to assess undergraduate clinical skills in Paediatrics.

Authors:  Fabiola Stollar; Bernard Cerutti; Susanne Aujesky; Mathieu Nendaz; Annick Galetto-Lacour
Journal:  BMC Med Educ       Date:  2020-02-11       Impact factor: 2.463

5.  Faculty's perspective on skill assessment in undergraduate medical education: Qualitative online forum study.

Authors:  Meenakshi P Khapre; Harshal Sabane; Sonia Singh; Rashmi Katyal; Anil Kapoor; Dinesh K Badyal
Journal:  J Educ Health Promot       Date:  2020-01-30
