Suzanne Schut1, Lauren A Maggio2, Sylvia Heeneman3, Jan van Tartwijk4, Cees van der Vleuten5, Erik Driessen5. 1. School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. s.schut@maastrichtuniversity.nl. 2. Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, USA. 3. School of Health Professions Education, Department of Pathology, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, The Netherlands. 4. Department of Education, Utrecht University, Utrecht, The Netherlands. 5. School of Health Professions Education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands.
Abstract
INTRODUCTION: Programmatic assessment was introduced as an approach to designing assessment programmes with the aim of simultaneously optimizing the decision-making and learning functions of assessment. An integrative review was conducted to synthesize results from studies investigating programmatic assessment in health care professions education in practice. METHODS: The authors systematically searched PubMed, Web of Science, and ERIC to identify studies published since 2005 that reported empirical data on programmatic assessment. Characteristics of the included studies were extracted and synthesized using descriptive statistics and thematic analysis. RESULTS: Twenty-seven studies were included, which used quantitative methods (n = 10), qualitative methods (n = 12), or mixed methods (n = 5). Most studies were conducted in clinical settings (77.8%). Programmatic assessment was found to enable meaningful triangulation for robust decision-making and to serve as a catalyst for learning. However, several problems were identified, including an overload of assessment information and the associated workload, the counterproductive impact of strict requirements and summative signals, the lack of a shared understanding of the nature and purpose of programmatic assessment, and the lack of supportive interpersonal relationships. Thematic analysis revealed that the successes and challenges of programmatic assessment were best understood through the interplay between the quantity and quality of assessment information and the influence of social and personal aspects on assessment perceptions. CONCLUSION: Although some of the evidence may seem compelling in supporting the effectiveness of programmatic assessment in practice, tensions will emerge when the development of competencies is stimulated and its results are assessed simultaneously. The identified factors and inferred strategies provide guidance for navigating these tensions.
Keywords:
Health care professions education; Knowledge synthesis; Programmatic assessment