
Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review.

Mirja W van der Meulen, Alina Smirnova, Sylvia Heeneman, Mirjam G A Oude Egbrink, Cees P M van der Vleuten, Kiki M J M H Lombarts.

Abstract

PURPOSE: To collect and examine, using an argument-based validity approach, validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance.
METHOD: In October 2016, the authors conducted a systematic search for articles, published from inception to that date, about questionnaire-based tools for assessing physicians' professional performance. They included studies reporting validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications.
RESULTS: They included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on generalization and extrapolation inferences. Scoring evidence showed mixed results. Evidence on implications was generally missing.
CONCLUSIONS: Under the argument-based approach to validity, not all questionnaire-based tools appear to support their intended use. Evidence concerning the implications of these tools is mostly lacking, which weakens the argument for using them in formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.

Year:  2019        PMID: 31460937     DOI: 10.1097/ACM.0000000000002767

Source DB:  PubMed          Journal:  Acad Med        ISSN: 1040-2446            Impact factor:   7.840


  4 in total

1.  Effect of Insufficient Interaction on the Evaluation of Anesthesiologists' Quality of Clinical Supervision by Anesthesiology Residents and Fellows.

Authors:  Rachel A Hadler; Franklin Dexter; Bradley J Hindman
Journal:  Cureus       Date:  2022-03-26

2.  The impact of mindfulness-based interventions on doctors' well-being and performance: A systematic review.

Authors:  Renée A Scheepers; Helga Emke; Ronald M Epstein; Kiki M J M H Lombarts
Journal:  Med Educ       Date:  2019-12-22       Impact factor: 6.251

3.  Variability of residents' ratings of faculty's teaching performance measured by five- and seven-point response scales.

Authors:  Maarten P M Debets; Renée A Scheepers; Benjamin C M Boerebach; Onyebuchi A Arah; Kiki M J M H Lombarts
Journal:  BMC Med Educ       Date:  2020-09-22       Impact factor: 2.463

4.  Rethinking Our Annual Congress-Meeting the Needs of Specialist Physicians by Partnering With Provincial Simulation Centers.

Authors:  Sam J Daniel; Marie-Josée Bouchard; Martin Tremblay
Journal:  J Contin Educ Health Prof       Date:  2022-01-01       Impact factor: 2.190

