Abstract
Assessment of complex tasks integrating several competencies calls for a programmatic design approach. As single instruments do not provide the information required to reach a robust judgment of integral performance, 73 guidelines for programmatic assessment design were developed. When simultaneously applying these interrelated guidelines, it is challenging to keep a clear overview of all assessment activities. The goal of this study was to provide practical support for applying a programmatic approach to assessment design, not bound to any specific educational paradigm. The guidelines were first applied in a postgraduate medical training setting, and a process analysis was conducted. This resulted in the identification of four steps for programmatic assessment design: evaluation, contextualisation, prioritisation and justification. Firstly, the (re)design process starts with sufficiently detailing the assessment environment and formulating the principal purpose. Key stakeholders with sufficient (assessment) expertise need to be involved in the analysis of strengths and weaknesses and identification of developmental needs. Central governance is essential to balance efforts and stakes with the principal purpose and decide on prioritisation of design decisions and selection of relevant guidelines. Finally, justification of assessment design decisions, quality assurance and external accountability close the loop, to ensure sound underpinning and continuous improvement of the assessment programme.
Keywords: Assessment quality; Instructional design; Medical education; Programmatic assessment; Quality assurance
Year: 2017 PMID: 28120259 PMCID: PMC5663798 DOI: 10.1007/s10459-017-9756-3
Source DB: PubMed Journal: Adv Health Sci Educ Theory Pract ISSN: 1382-4996 Impact factor: 3.853
Promises and advantages of a programmatic approach to assessment.
Adapted from: Dijkstra et al. 2010; Van der Vleuten et al. 2012
| Promises/purposes | Advantages |
|---|---|
| Overview of what is and what is not being measured | Promote content validity and prevent emphasis on easy-to-measure elements (over- and underrepresentation) |
| Compensation for deficiencies of instruments by strengths of other instruments | Diverse spectrum of complementary measurement instruments capturing competence as a whole |
| Matching instruments to free space and time for the assessment of other subjects | Increase efficiency by reducing redundancy in information gathering |
| Combine information from different sources (tests/instruments) in high-stakes assessment | Reach better-informed and highly defensible high-stakes decisions |
| Multiple individual assessment points that are maximally informative to the learning process | Optimise the learning function of assessment (assessment for learning) |
| Aggregated data used for high-stakes pass/fail and remediation decisions | Optimise the certification function (assessment of learning) |
| Reducing bias in assessment of complex tasks through smart sampling strategies and procedural measures | Expert judgment of competence in performing daily tasks becomes valid and reliable |
Fig. 1 Methodological approach to case study, process analysis and deconstruction
Stepwise approach for designing an assessment programme
| Evaluation |
|---|
| 1. Collect evaluation data in an iterative way: combine formal assessment documents with stakeholder interviews about current assessment practices |
| 2. Keep an overview of the big picture by using sufficiently broad information, including the assessment infrastructure: stakeholders’ expertise and roles, educational curriculum, stakes, resources, and legislation |
| 3. Confirm the principal purpose of the assessment programme and ensure it is clearly formulated. Evaluate whether it is adequately reflected in actual assessment practices |
| Contextualisation |
| 4. Use the data collected on guidelines in the dimensions ‘programme in action’ and ‘documenting the programme’ as a contextual frame of reference for outlining the assessment environment |
| 5. Organise several meetings involving key stakeholders with assessment expertise to prepare an analysis of strengths and weaknesses of current assessment practices and to identify developmental needs |
| 6. Re-order the guidelines to suit the context: in a programmatic design approach, the context (rather than the framework) should be guiding |
| Prioritisation |
| 7. Translate identified needs into related investments (e.g. infrastructure, expertise, finances) and balance these with the stakes and principal purpose of the assessment programme |
| 8. Use the data collected on guidelines in the dimensions ‘supporting the programme’ and ‘improving the programme’ to reach consensus about the prioritisation of needed changes |
| 9. Take an iterative approach when applying the selected guidelines: develop a working strategy that involves key stakeholders, supports central governance, and avoids inefficient or counterproductive changes |
| Justification |
| 10. Document design decisions in a consistent way, taking into account legal regulations and basing decisions preferably on scientific, or at least practice-based, evidence |
| 11. Develop a clear and concise implementation plan, including faculty development, to foster acceptability in the assessment context and to serve external accountability |
| 12. Ensure the feedback loop is closed: schedule regular evaluation meetings based on the evaluation criteria in the implementation plan |