
Validity of three clinical performance assessments of internal medicine clerks.

A L Hull, S Hodder, B Berger, D Ginsberg, N Lindheim, J Quan, M E Kleinhenz.

Abstract

PURPOSE: To analyze the construct validity of three methods to assess the clinical performances of internal medicine clerks.
METHOD: A multitrait-multimethod (MTMM) study was conducted at the Case Western Reserve University School of Medicine to determine the convergent and divergent validity of a clinical evaluation form (CEF) completed by faculty and residents, an objective structured clinical examination (OSCE), and the medicine subject test of the National Board of Medical Examiners. Three traits were involved in the analysis: clinical skills, knowledge, and personal characteristics. A correlation matrix was computed for 410 third-year students who completed the clerkship between August 1988 and July 1991.
RESULTS: There was a significant (p < .01) convergence of the four correlations that assessed the same traits by using different methods. However, the four convergent correlations were of moderate magnitude (ranging from .29 to .47). Divergent validity was assessed by comparing the magnitudes of the convergence correlations with the magnitudes of correlations among unrelated assessments (i.e., different traits by different methods). Seven of nine possible coefficients were smaller than the convergent coefficients, suggesting evidence of divergent validity. A significant CEF method effect was identified.
CONCLUSION: There was convergent validity and some evidence of divergent validity with a significant method effect. The findings were similar for correlations corrected for attenuation. Four conclusions were reached: (1) the reliability of the OSCE must be improved, (2) the CEF ratings must be redesigned to further discriminate among the specific traits assessed, (3) additional methods to assess personal characteristics must be instituted, and (4) several assessment methods should be used to evaluate individual student performances.
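The correction for attenuation mentioned in the CONCLUSION divides an observed correlation by the geometric mean of the two measures' reliabilities, estimating what the correlation would be with perfectly reliable instruments. A minimal sketch of that computation, using the abstract's largest convergent correlation (.47) with hypothetical reliability coefficients (the study does not report them here):

```python
import math

def correct_for_attenuation(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Disattenuate an observed correlation r_xy by the reliabilities
    (rel_x, rel_y) of the two measures: r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Observed convergent correlation from the abstract (.47) paired with
# hypothetical reliabilities of .75 and .80 for the two methods.
print(round(correct_for_attenuation(0.47, 0.75, 0.80), 3))  # -> 0.607
```

With modest reliabilities, the disattenuated coefficient rises noticeably above the observed one, which is why the authors could report that their conclusions held for corrected correlations as well.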

MeSH:

Year:  1995        PMID: 7786372     DOI: 10.1097/00001888-199506000-00013

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


  8 in total

Review 1.  [Dilemmas and alternatives in the evaluation of family doctor training].

Authors:  J R Loayssa Lara
Journal:  Aten Primaria       Date:  2003-10-15       Impact factor: 1.137

2.  Improving in-training evaluation programs.

Authors:  J Turnbull; J Gray; J MacFadyen
Journal:  J Gen Intern Med       Date:  1998-05       Impact factor: 5.128

Review 3.  Recent developments in assessing medical students.

Authors:  S L Fowell; J G Bligh
Journal:  Postgrad Med J       Date:  1998-01       Impact factor: 2.401

Review 4.  Surgical Education, Simulation, and Simulators-Updating the Concept of Validity.

Authors:  Mitchell Goldenberg; Jason Y Lee
Journal:  Curr Urol Rep       Date:  2018-05-17       Impact factor: 3.092

5.  Medical student competence in eliciting a history for "chronic fatigue".

Authors:  K K Papp; B Erokwu; M Decker; K P Strohl
Journal:  Sleep Breath       Date:  2001-09       Impact factor: 2.816

6.  Clinical work sampling: a new approach to the problem of in-training evaluation.

Authors:  J Turnbull; J MacFadyen; C Van Barneveld; G Norman
Journal:  J Gen Intern Med       Date:  2000-08       Impact factor: 5.128

7.  Family physicians' clinical aptitude for the nutritional management of type 2 diabetes mellitus in Guadalajara, Mexico.

Authors:  C E Cabrera Pivaral; E A Gutiérrez Roman; G Gonzalez Pérez; F Gonzalez Reyes; F Valadez Toscano; C Gutiérrez Ruvalcaba; C D Rios Riebeling
Journal:  J Nutr Health Aging       Date:  2008-02       Impact factor: 4.075

8.  Reliability analysis of the objective structured clinical examination using generalizability theory.

Authors:  Juan Andrés Trejo-Mejía; Melchor Sánchez-Mendiola; Ignacio Méndez-Ramírez; Adrián Martínez-González
Journal:  Med Educ Online       Date:  2016-08-18
