Agbessi Amouzou, Jennifer Bryce, Neff Walker.
Abstract
A full understanding of the pathways from efficacious interventions to population impact requires rigorous effectiveness evaluations conducted under realistic scale-up conditions at country level. In this paper, we introduce a deductive framework that underpins effectiveness evaluations. This framework forms the theoretical and conceptual basis for the 'Real Accountability: Data Analysis for Results' (RADAR) project, intended to address gaps in guidance and tools for the evaluation of projects being implemented at scale to reduce mortality among women and children. These gaps include needs for a framework to guide decisions about evaluations and practical measurement tools, as well as increased capacity in evaluation practice among donors and program planners at global, national and project levels. RADAR aimed to improve the evidence base for program and policy decisions in reproductive, maternal, newborn and child health and nutrition (RMNCH&N). We focus on five linked methodological steps - presented as core evaluation questions - for designing and implementing effectiveness evaluation of large-scale programs that support both the needs of program managers to improve their programs and the needs of donors to meet their accountability responsibilities. RADAR has operationalized each step with a tool to facilitate its application. We also describe cross-cutting methodological issues and broader contextual factors that affect the planning and implementation of such evaluations. We conclude with proposals for how the global RMNCH&N community can support rigorous program evaluations and make better use of the resulting evidence.
Keywords: Reproductive; accountability; adolescent health; child; effectiveness evaluation; maternal; newborn; nutrition; program evaluation
Year: 2022 PMID: 36098952 PMCID: PMC9481099 DOI: 10.1080/16549716.2021.2006423
Source DB: PubMed Journal: Glob Health Action ISSN: 1654-9880 Impact factor: 2.996
The RADAR project.
RADAR was designed in 2015 to address three specific technical gaps in the evaluation of programs for women and children in LMICs:

1. The need for a clear and practical framework to guide decisions about evaluations: what core questions need to be addressed, which priority indicators should be measured, and what 'right-sized' evaluation design options exist given time and resource limitations.
2. The absence of simple, focused, practical tools specifically designed to generate sound evidence responding to the core evaluation questions. Major survey programs supported by global partners are time- and resource-intensive, and produce far more information than LMIC evaluation teams need or can fully analyze and report.
3. Insufficient capacity in evaluation thinking, design and implementation among donors and program planners at global, national and project levels.

The RADAR project was designed in collaboration with Global Affairs Canada (GAC) to fill these gaps. RADAR's aim was to develop, apply and refine tools and approaches to increase the availability of accurate data in LMICs for evaluating country programs in reproductive, maternal, newborn and child health and nutrition (RMNCH&N) that can be 'rolled up' to respond to GAC accountability needs.

The RADAR team has developed a suite of compatible tools for use in large-scale evaluations in LMICs, designed to produce answers to a set of core evaluation questions about RMNCH&N programs (Fig 1b). RADAR collaborated with partner organizations in Malawi, Mali, Mozambique and Tanzania (countries selected because Canada has RMNCH&N investments there) to field test and refine the use of these tools in generating high-quality, complete, gender-sensitive, and relevant data. The RADAR team has also developed a set of online Coursera courses covering the fundamentals of RMNCH&N program evaluation and the specific tools and methods for addressing the five core questions.

More information on RADAR and access to RADAR tools is available at
Figure 1a. A common framework for evaluating the scale-up for maternal and child survival.
Figure 1b. Core questions about RMNCH&N programs for program managers, governments and donors, related to the common framework for evaluating the scale-up for maternal and child survival.
Methodological challenges of evaluations of RMNCH&N programs being implemented at scale.
| Characteristic of large-scale evaluations | Implications for evaluation design and implementation |
|---|---|
| Evaluators rarely control the location, timing, and strength of program implementation. | Limits the use of randomized designs or designs that require a planned schedule for program implementation (e.g. randomized stepped-wedge designs). Definition of true comparison groups is often difficult and sometimes impossible. Reinforces the importance of measuring the strength of implementation and quality of services. In prospective evaluations, early evidence of inadequate implementation in a stepwise design may call into question the need to assess later elements of the impact model, including impact itself. The schedule for evaluation activities must be sufficiently flexible to adapt to unforeseen changes in the program calendar. |
| Most programs consist of several interventions implemented simultaneously. | Evaluating multiple individual interventions within a program may require prohibitively large sample sizes (see the sample-size sketch after this table). Different interventions within a program may require more or less time to achieve expected results, and may be synergistic or antagonistic relative to expected outcomes or impact. |
| The pathways from activities to outcomes and impact are often long and complex. | Most evaluations will require multiple sources of data or information that must be coordinated over time and assessed for quality and relevance. |
| Feedback from evaluation results may lead to changes in the intervention over time. | Although program improvement as a result of interim evaluation findings is desirable, changing the intervention or strategy during the evaluation period is an important threat to both the internal and external validity of the findings. Careful, continuous documentation of program activities and timing, evaluation implementation, and contextual factors is needed to counter these threats to validity; it demands resources and assiduous attention. |
| Contextual factors may play a more important role than in more controlled evaluations. | Contextual factors must be included in the evaluation plan and design from the outset – whether documented using existing data sources or, if resources permit, measured as an integral part of the evaluation. Sample sizes may need to be enlarged to allow stratification by relevant contextual factors. |
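To make concrete why evaluating several interventions and stratifying by context inflate sample sizes, the sketch below computes the per-arm sample size needed to detect a change in an intervention's coverage, using the standard two-proportion formula. This is a minimal illustration under assumed values, not a RADAR tool: the coverage levels (40% to 50%), the Bonferroni correction across five interventions, and the four-region stratification are assumptions chosen for the example.

```python
# Minimal sketch: per-arm sample size to detect a difference in
# intervention coverage between program and comparison areas, via
# the standard two-proportion formula. All values are illustrative
# assumptions, not RADAR recommendations.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect a change in coverage from p1 to p2."""
    z_a = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_b = norm.ppf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# One intervention: coverage expected to rise from 40% to 50%.
base = n_per_arm(0.40, 0.50)
print(f"one intervention: {base:.0f} per arm")  # roughly 387 per arm

# Five interventions in one program: a simple Bonferroni correction
# on alpha already inflates the requirement substantially ...
multi = n_per_arm(0.40, 0.50, alpha=0.05 / 5)
print(f"five interventions (Bonferroni): {multi:.0f} per arm")

# ... and stratifying by a contextual factor (e.g. 4 regions), with
# adequate precision in each stratum, multiplies it again.
print(f"stratified by 4 regions: {4 * multi:.0f} per arm")
```

Even under these simplified assumptions the required sample grows roughly six-fold, which is why the table cautions that evaluating many individual interventions within one program, or stratifying finely by contextual factors, can make sample sizes prohibitively large.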