Milbert Gawaya, Desiree Terrill, Eleanor Williams.
Abstract
The COVID-19 pandemic required large-scale service delivery changes for government, and provided the opportunity for evaluators to step up and support decision makers to understand the impact of these changes. Rapid evaluation methods (REM) provide a pragmatic approach for generating timely information for evidence-based policy and decision-making. Grounded in developmental and utilisation-focused evaluation theory, REM incorporates a team-based, mixed-methods design executed over a 6-8-week period. Customised rubrics were used to rigorously assess the effectiveness and scalability of practice changes to inform COVID-19 response planning. REM is an alternative to the full-scale evaluation models frequently implemented to assess policies and programs. Adapted use of REM suggests that meaningful insights can be gained through smaller-scale evaluations. This article shares lessons learned from a novel rapid evaluation method applied in the context of the COVID-19 pandemic. The rapid evaluation approach was implemented to provide real-time insights and evaluative conclusions for 15 program and practice adaptations across Victorian health and human service settings. The article shares insights about the practical applicability of balancing rigour and timeliness when implementing a rapid evaluation, and the strengths and limitations of working within a fast-paced evaluation framework. Findings can inform evaluative practice in resource- and time-limited settings.
Keywords: COVID-19 pandemic; evaluation methods; rapid evaluation; real-time evaluation; team-based analysis
Year: 2022 PMID: 35261532 PMCID: PMC8891248 DOI: 10.1177/1035719X211057630
Source DB: PubMed Journal: Eval J Australas ISSN: 1035-719X
Figure 1. Overview of Centre for Evaluation and Research Evidence rapid evaluations implemented during COVID-19.
Figure 2. Evaluation steps for implementing Centre for Evaluation and Research Evidence rapid evaluations.
Figure 3. Theory of change for use of telehealth in pregnancy and childbirth.
Figure 4. Perception-of-change rubric – telehealth delivery in pregnancy and childbirth during COVID-19.
Comparison of conventional evaluation approaches and rapid evaluation methods.
| Conventional evaluation methods: weaknesses | Rapid evaluation methods: strengths | Rapid evaluation methods: weaknesses |
|---|---|---|
| For rapidly changing interventions, limited documentation makes it difficult to inform changes or document lessons for future interventions | Real-time data collection provides evidence that can be used immediately and, in future, for other interventions | Managing a potentially emerging set of evaluation questions and purposes can be difficult |
| Delay between data collection and evaluation reporting | Rapid feedback through real-time reporting of evaluation data, with rapid analysis and tailored end products such as 1-page summaries shared with stakeholders | Risk of presenting a false view or inaccurate findings based on interim evidence, or of limited data collection within tight timelines (e.g., small samples) |
| Project reporting without systematic use of evaluative evidence in between | Evaluative activity repeated at multiple points throughout evaluation implementation | Addressing causal inference in real time, often without credible counterfactuals |
| Mid-term evaluations support single-loop learning and end-of-project evaluations support double-loop learning, where modifications can be made to theories of change | Includes three looped processes for sensemaking and learning; can identify inconsistencies, support modification of assumptions/theory of change, and review the evidence being used to support decision-making | Requires adequate content knowledge to support triple-loop learning informed by evidence, rather than simply documenting assumptions |
| Evaluators draw conclusions and make recommendations that are reported back for action | Explicitly brings a range of key stakeholders together to validate data and jointly develop recommendations for action | Managing different perspectives on evaluative criteria and on appropriate ways to synthesise evidence about performance across dimensions, such as trade-offs between conflicting objectives |