
The triple-jump examination as an assessment tool in the problem-based medical curriculum at the University of Hawaii.

R M Smith

Abstract

BACKGROUND: The three-step triple-jump (TJ) examination aims to assess students' clinical problem-solving processes, predominantly through subjective assessments administered by faculty. However, training TJ administrators to ensure interrater reliability is time- and cost-intensive and difficult at best; hence the desire to test a more objective system of scoring students' TJ performances.
METHOD: The sample comprised the 58 first-year students of the class of 1995, who in March 1992 were finishing the second 13-week unit of the problem-based curriculum at the University of Hawaii John A. Burns School of Medicine. To determine how well the school had succeeded in standardizing and objectifying its TJ examination (used for all units in the first two years), correlations were computed among three sets of scores: various objective examinations independent of the TJ, the TJ administrators' subjective assessments, and the TJ objective assessments (the number of problem-based hypotheses generated and the number of hypothesis-testing clinical database items elicited). The statistical methods used were linear regression, Student's unpaired t-test, the chi-square test, and the z-test.
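The correlation-and-comparison design described above can be sketched as follows. This is a hypothetical illustration only: the scores, scales, and group sizes are invented for demonstration and do not come from the study; the statistical calls (linear regression and Welch's unpaired t-test from SciPy) stand in for the methods the abstract names.

```python
# Hypothetical illustration of the study's analysis approach (invented data,
# not the original 58-student sample). Assumed scales: objective exams 0-100,
# TJ subjective ratings on a small ordinal scale, TJ objective scores as counts.
from scipy import stats

objective_exam = [72, 85, 60, 90, 78, 66, 81, 74]   # independent objective exam
tj_subjective  = [3.1, 3.4, 2.8, 3.0, 3.6, 2.9, 3.2, 3.3]  # administrator ratings
tj_objective   = [6, 8, 5, 6, 9, 5, 7, 7]           # e.g., hypotheses generated

# Linear regression: TJ subjective ratings vs. the independent objective exam.
reg1 = stats.linregress(objective_exam, tj_subjective)
# Linear regression: TJ subjective ratings vs. TJ objective counts.
reg2 = stats.linregress(tj_objective, tj_subjective)

print(f"TJ subjective vs objective exam: r={reg1.rvalue:.2f}, p={reg1.pvalue:.3f}")
print(f"TJ subjective vs TJ objective:   r={reg2.rvalue:.2f}, p={reg2.pvalue:.3f}")

# Unpaired t-test: did students assigned an unusually hard problem score
# lower on the TJ objective measures than students with other problems?
hard_problem   = [4, 5, 4, 5]
other_problems = [7, 8, 6, 9, 7]
t, p = stats.ttest_ind(hard_problem, other_problems, equal_var=False)
print(f"hard vs other problems: t={t:.2f}, p={p:.3f}")
```

A non-significant `reg1` paired with a significant `reg2` would mirror the paper's pattern: the TJ measures something different from the independent exams, while its subjective and objective components track each other; the t-test captures the problem-difficulty effect that inflated the objective-score standard deviations.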
RESULTS: The TJ scores and independent objective examinations did not correlate significantly (suggesting that they assess different aspects of student achievement), but the TJ subjective and objective scores did correlate significantly. There were large standard deviations on the TJ objective scores, largely because one problem was significantly more difficult than the others (each student works on only one of several problems). However, the administrators' subjective scores for all problems were comparable.
CONCLUSION: Because problems vary in difficulty, objective scores cannot be used across problems as a major component of all students' grades. However, when a student has received an unsatisfactory score, an external reviewer can evaluate the appropriateness of the subjective score by comparing the student's objective performance with those of students who worked on the same problem and received higher subjective scores. That the administrators' subjective assessments were comparable across all problems not only suggests that the administrators were able to adjust for problem variability but also reinforces the appropriateness of using subjective assessments for the TJ examination.


Year:  1993        PMID: 8484850     DOI: 10.1097/00001888-199305000-00020

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


  6 in total

Review 1.  [Dilemmas and alternatives in the evaluation of family doctor training].

Authors:  J R Loayssa Lara
Journal:  Aten Primaria       Date:  2003-10-15       Impact factor: 1.137

2.  The semi-structured triple jump--a new assessment tool reflects qualifications of tutors in a PBL course on basic pharmacology.

Authors:  Jan Matthes; Alexander Look; Amina K Hahne; Ara Tekian; Stefan Herzig
Journal:  Naunyn Schmiedebergs Arch Pharmacol       Date:  2008-01-11       Impact factor: 3.000

3.  Improving in-training evaluation programs.

Authors:  J Turnbull; J Gray; J MacFadyen
Journal:  J Gen Intern Med       Date:  1998-05       Impact factor: 5.128

Review 4.  Recent developments in assessing medical students.

Authors:  S L Fowell; J G Bligh
Journal:  Postgrad Med J       Date:  1998-01       Impact factor: 2.401

Review 5.  Clinical reasoning assessment through medical expertise theories: past, present and future directions.

Authors:  Elham Boushehri; Kamran Soltani Arabshahi; Alireza Monajemi
Journal:  Med J Islam Repub Iran       Date:  2015-06-15

6.  Educating medical students in the era of ubiquitous information.

Authors:  Charles P Friedman; Katherine M Donaldson; Anna V Vantsevich
Journal:  Med Teach       Date:  2016-03-30       Impact factor: 3.650

