
A comparison of physician examiners', standardized patients', and communication experts' ratings of international medical graduates' English proficiency.

A I Rothman, M Cusimano.

Abstract

PURPOSE: To assess the quality of ratings of interviewing skills and oral English proficiency provided on a clinical skills OSCE by physician examiners, standardized patients (SPs), and communication skills experts.
METHOD: In 1998, 73 candidates to the Ontario International Medical Graduate (OIMG) Program completed a 29-station OSCE-type clinical skills selection examination. Physician examiners, SPs, and communication skills experts assessed components of oral English proficiency and interview performance. From these results the following were calculated: the frequency and generalizability of English-language flags (physician examiners' indications that spoken English skills were poor enough to significantly impede communication with patients); the reliability of the OIMG's Interview and Oral Performance Scales and the generalizability of overall interview and oral performance ratings; and comparisons of repeated assessments by experts. Principal-components analysis (PCA) was applied to the panels' ratings to determine a more economical expression of the language proficiency and interview communication skills results.
RESULTS: The mean number of English-language flags per candidate was 2.1, the median was 1.0, and Cronbach's alpha of the ratings was 0.63. Means, SDs, and alphas of the physician examiners' and SPs' ratings of the interview performance scale were 9.15/10, 0.43, 0.36, and 9.30/10, 0.56, 0.50, respectively. Corresponding values for overall interview performance ratings were 3.08/4, 0.30, 0.33, and 3.34/4, 0.32, 0.47. Means, SDs, and alphas of the physician examiners' and SPs' ratings of the oral performance scale were 8.54/10, 0.74, 0.78, and 8.74/10, 1.00, 0.76. Corresponding values for overall ratings of oral performance were 3.85/5, 0.51, 0.68, and 4.08/5, 0.60, 0.68. For the two experts' ratings of two contiguous five-minute interview stations, internal consistencies were 0.88 and 0.78. For the two experts' ratings of standardized ten-minute interviews, internal consistencies were 0.81 and 0.92. Correlations between the mean values of the experts' ratings of the ten- and five-minute stations were 0.45 and 0.51. Three factors emerged from the PCA: language proficiency, physician examiners' ratings of interview proficiency, and SPs' ratings of interview proficiency.
CONCLUSIONS: Consistency between the physician examiners' and SPs' ratings of English proficiency was observed; less agreement was observed in their ratings of interviewing skills, and little agreement was observed between the experts' ratings. Communication skills results may be validly expressed by three measures: one overall global rating of language proficiency provided by physician examiners or SPs, and overall global ratings of interview proficiency provided separately by physician examiners and SPs.
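The reliabilities reported above are Cronbach's alpha coefficients: the proportion of total-score variance attributable to consistent differences between candidates across stations. A minimal sketch of how such a coefficient is computed, using purely hypothetical ratings rather than data from the study:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates-by-items score matrix.

    scores: list of rows, one per candidate, each a list of k item
    (e.g., station) ratings. Uses population variances throughout.
    """
    k = len(scores[0])
    items = list(zip(*scores))                        # transpose: one tuple per item
    item_var_sum = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical example: 4 candidates rated on 3 stations
ratings = [[4, 5, 4], [3, 3, 2], [5, 5, 5], [2, 3, 3]]
print(round(cronbach_alpha(ratings), 2))  # 0.95
```

An alpha near 0.63, as found for the English-language flags, indicates moderate internal consistency; the per-station ratings agree only partially on which candidates are flagged.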


Year:  2000        PMID: 11112723     DOI: 10.1097/00001888-200012000-00018

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   6.893


  5 in total

1.  Can standardized patients replace physicians as OSCE examiners?

Authors:  Kevin McLaughlin; Laura Gregor; Allan Jones; Sylvain Coderre
Journal:  BMC Med Educ       Date:  2006-02-27       Impact factor: 2.463

2.  Comparison of the sensitivity of the UKCAT and A Levels to sociodemographic characteristics: a national study.

Authors:  Paul A Tiffin; John C McLachlan; Lisa Webster; Sandra Nicholson
Journal:  BMC Med Educ       Date:  2014-01-08       Impact factor: 2.463

3.  Assessing clinical communication skills in physicians: are the skills context specific or generalizable.

Authors:  Lubna A Baig; Claudio Violato; Rodney A Crutcher
Journal:  BMC Med Educ       Date:  2009-05-15       Impact factor: 2.463

4.  Annual Review of Competence Progression (ARCP) performance of doctors who passed Professional and Linguistic Assessments Board (PLAB) tests compared with UK medical graduates: national data linkage study.

Authors:  Paul A Tiffin; Jan Illing; Adetayo S Kasim; John C McLachlan
Journal:  BMJ       Date:  2014-04-17

5.  Contextualization and validation of the interprofessional collaborator assessment rubric (ICAR) through simulation: Pilot investigation.

Authors:  Fatemeh Keshmiri; Sari Ponzer; AmirAli Sohrabpour; Shervin Farahmand; Farhad Shahi; Shahram Bagheri-Hariri; Kamran Soltani-Arabshahi; Mandana Shirazi; Italo Masiello
Journal:  Med J Islam Repub Iran       Date:  2016-08-01
