Clinical work sampling: a new approach to the problem of in-training evaluation.

J Turnbull, J MacFadyen, C Van Barneveld, G Norman.

Abstract

OBJECTIVE: Existing systems of in-training evaluation (ITE) have been criticized as unreliable and invalid methods for assessing student performance during clinical education. The purpose of this study was to assess the feasibility, reliability, and validity of a clinical work sampling (CWS) approach to ITE. This approach focused on (1) basing performance data on observed behaviors, (2) using multiple observers and occasions, (3) recording data at the time of performance, and (4) providing a feasible system for receiving feedback.

PARTICIPANTS: Sixty-two third-year University of Ottawa students were assessed during their 8-week internal medicine inpatient experience.

MEASUREMENTS AND MAIN RESULTS: Four performance rating forms (Admission Rating Form, Ward Rating Form, Multidisciplinary Team Rating Form, and Patient's Rating Form) were introduced to document student performance. Voluntary participation rates were variable (12%-64%), and patients' ratings were excluded from the analysis because of a low response rate (12%). The mean number of evaluations per student per rotation (19) exceeded the number needed to achieve sufficient reliability. Reliability coefficients were high for the Ward Form (.86) and the Admission Form (.73) but not for the Multidisciplinary Team Form (.22). There was an examiner effect (rater leniency), but it was small relative to real differences between students. The correlation between the Ward Form and the Admission Form was high (.47), while their correlations with the Multidisciplinary Team Form were lower (.37 and .26, respectively). The CWS approach to ITE was considered content valid by expert judges.
CONCLUSIONS: The collection of ongoing performance data was reasonably feasible, reliable, and valid.
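As context for the reliability result above: the "number of evaluations needed" for a target reliability is conventionally projected with the Spearman-Brown prophecy formula. The abstract does not state which psychometric model the authors used, so the following is an illustrative sketch, not the paper's method. If a single rating has reliability r_1, the reliability R_k of the mean of k independent ratings, and the k needed to reach a target reliability R*, are

\[
  R_k = \frac{k\,r_1}{1 + (k-1)\,r_1},
  \qquad
  k_{\text{needed}} = \frac{R^{*}\,(1 - r_1)}{r_1\,(1 - R^{*})}.
\]

For example, under assumed values of r_1 = 0.3 for a single rating and a target of R* = 0.8, k_needed = (0.8)(0.7) / ((0.3)(0.2)) ≈ 9.3, i.e., about 10 ratings; the observed mean of 19 evaluations per student per rotation would comfortably exceed such a threshold.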

Year:  2000        PMID: 10940147      PMCID: PMC1495580          DOI: 10.1046/j.1525-1497.2000.06099.x

Source DB:  PubMed          Journal:  J Gen Intern Med        ISSN: 0884-8734            Impact factor:   5.128


References: 15 in total (10 shown)

1.  The use of nurses to evaluate houseofficers' humanistic behavior.

Authors:  C B Kaplan; R M Centor
Journal:  J Gen Intern Med       Date:  1990 Sep-Oct       Impact factor: 5.128

2.  A new rating form for use by nurses in assessing residents' humanistic behavior.

Authors:  P S Butterfield; E L Mazzaferri
Journal:  J Gen Intern Med       Date:  1991 Mar-Apr       Impact factor: 5.128

3. [Review] Pitfalls in the pursuit of objectivity: issues of reliability.

Authors:  C P Van der Vleuten; G R Norman; E De Graaff
Journal:  Med Educ       Date:  1991-03       Impact factor: 6.251

4.  Functional and dysfunctional characteristics of the prevailing model of clinical evaluation systems in North American medical schools.

Authors:  D D Hunt
Journal:  Acad Med       Date:  1992-04       Impact factor: 6.893

5.  Positive effects of a clinical performance assessment program.

Authors:  P L Stillman; H L Haley; M B Regan; M M Philbin
Journal:  Acad Med       Date:  1991-08       Impact factor: 6.893

6.  Validity of three clinical performance assessments of internal medicine clerks.

Authors:  A L Hull; S Hodder; B Berger; D Ginsberg; N Lindheim; J Quan; M E Kleinhenz
Journal:  Acad Med       Date:  1995-06       Impact factor: 6.893

7.  Use of peer ratings to evaluate physician performance.

Authors:  P G Ramsey; M D Wenrich; J D Carline; T S Inui; E B Larson; J P LoGerfo
Journal:  JAMA       Date:  1993-04-07       Impact factor: 56.272

8.  Evaluation of the noncognitive professional traits of medical students.

Authors:  S Phelan; S S Obenshain; W R Galey
Journal:  Acad Med       Date:  1993-10       Impact factor: 6.893

9.  Assessing clinical performance. Where do we stand and what might we expect?

Authors:  W D Dauphinee
Journal:  JAMA       Date:  1995-09-06       Impact factor: 56.272

10.  The importance of strong evaluation standards and procedures in training residents.

Authors:  J P Short
Journal:  Acad Med       Date:  1993-07       Impact factor: 6.893

Cited by: 7 in total

1.  Reliable, valid, and educational in-training medical student evaluation overlooked.

Authors:  P A Hemmer; T Jamieson; L N Pangaro
Journal:  J Gen Intern Med       Date:  2001-01       Impact factor: 5.128

2.  How are we doing? The problem of in-training evaluation.

Authors:  N Ryan Lowitt
Journal:  J Gen Intern Med       Date:  2000-08       Impact factor: 5.128

3.  The East Anglian specialist registrar assessment tool.

Authors:  Susan Robinson; Katharine Boursicot; Catherine Hayhurst
Journal:  Emerg Med J       Date:  2007-03       Impact factor: 2.740

4.  The reliability of in-training assessment when performance improvement is taken into account.

Authors:  Mirjam T van Lohuizen; Jan B M Kuks; Elisabeth A van Hell; A N Raat; Roy E Stewart; Janke Cohen-Schotanus
Journal:  Adv Health Sci Educ Theory Pract       Date:  2010-03-28       Impact factor: 3.853

5.  Assessment toolbox for Indian medical graduate competencies.

Authors:  T Singh; S Saiyad; A Virk; J Kalra; R Mahajan
Journal:  J Postgrad Med       Date:  2021 Apr-Jun       Impact factor: 1.476

6. [Review] In-training assessment using direct observation of single-patient encounters: a literature review.

Authors:  E A M Pelgrim; A W M Kramer; H G A Mokkink; L van den Elsen; R P T M Grol; C P M van der Vleuten
Journal:  Adv Health Sci Educ Theory Pract       Date:  2010-06-18       Impact factor: 3.853

7. [Review] Structured continuous objective-based assessment of resident's performance at point of care (SCOPA).

Authors:  Mohammed Hijazi
Journal:  Ann Saudi Med       Date:  2005 May-Jun       Impact factor: 1.526

