| Literature DB >> 24330628 |
Alan Boyd, Donald C Cole, Dan-Bi Cho, Garry Aslanyan, Imelda Bates.
Abstract
BACKGROUND: Health research capacity strengthening (RCS) projects are often complex and hard to evaluate. In order to inform health RCS evaluation efforts, we aimed to describe and compare key characteristics of existing health RCS evaluation frameworks: their process of development, purpose, target users, structure, content and coverage of important evaluation issues. A secondary objective was to explore what use had been made of the ESSENCE framework, which attempts to address one such issue: harmonising the evaluation requirements of different funders.Entities:
Mesh:
Year: 2013 PMID: 24330628 PMCID: PMC3878679 DOI: 10.1186/1478-4505-11-46
Source DB: PubMed Journal: Health Res Policy Syst ISSN: 1478-4505
Frameworks included in the analysis
| Organisation | Framework (year) | Length (approx. word count) | Scope | Structure/conceptual basis |
|---|---|---|---|---|
| Ministry of Foreign Affairs of Denmark – Danida | Danida Evaluation Guidelines (2012) | Medium (14,000) | Generic | Intervention logic (input, output, outcome, impact) to inform evaluation design |
| Ministry of Foreign Affairs of Denmark – Danida | Danish Development Cooperation in a Results Perspective: Danida’s Framework for Managing for Development Results 2011–2014 (2011) | Medium (9,000) | Generic | Logical framework/results chain forms conceptual basis |
| ESSENCE on Health Research | Planning, Monitoring and Evaluation Framework for Capacity Strengthening in Health Research (2011) | Short (4,000) | Health RCS-specific | Matrix with example indicators for activities, outputs and outcomes |
| The Special Programme for Research and Training in Tropical Diseases co-sponsored by UNICEF, UNDP, World Bank, and WHO (TDR) | Monitor, evaluate, improve: TDR Performance Assessment Framework – Measuring results (2011) | Medium (13,000) | Health research-specific plus health RCS-specific | Matrix with example indicators based on expected results chain |
| National Institutes of Health: Fogarty International Center (FIC-NIH) | Framework for Program Assessment (Evaluation and Review) (2005) | Short (5,000) | Health research-specific plus training aspect of health RCS | Categories with example indicators |
| Netherlands Organisation for Scientific Research: WOTRO Science for Global Development | Mid Term Review (2005–2008) form: Testable goals (review questions) (2005) | Very short (2,000) | Health RCS-specific | Indicators for institutional capacity |
| International Development Research Centre (IDRC) | Framework for evaluating capacity development in IDRC (2005) | Long (24,000) | Capacity strengthening-specific | Conceptual model for the intervention |
| International Development Research Centre (IDRC) | The Corporate Assessment Framework (2004) | Very short (2,000) | Implies capacity strengthening-specific plus research-specific | None |
| Canadian International Development Agency (CIDA) | CIDA Evaluation Guide: Overcoming challenges; Delivering results; Meeting expectations; Making a contribution (2004) | Very long (36,000) | Generic | A logical model should inform data collection for outputs, outcomes and impacts |
Purpose of frameworks and their intended users
| Framework | Stated purpose | Intended users |
|---|---|---|
| Danida (2012) | “Constitutes the basic framework for evaluations of Danish development cooperation” (p. 3). “Do not constitute a manual in evaluation methods and techniques” (p. 3) | “Those who have a professional engagement in evaluation of development cooperation, as well as others interested in evaluation. These include those who are parties to an evaluation process and the users of evaluations. Moreover, the guidelines may be of interest to a broader audience, such as students, researchers and policy makers, and the interested public” (p. 3) |
| Danida (2011) | “Know[ing] more about results management and the … approach Danida uses” (p. 1). “Clarify[ing] how Danida manages the process [of achieving and demonstrating results] towards this goal [of securing value for money and aid effectiveness]” (p. 1) | “The main audience for the framework is Danida staff” (p. 1) |
| ESSENCE (2011) | “To improve harmonization among funders of health research capacity strengthening. Its use should make it easier for recipients of funding to fulfil the PM&E obligations of different funders and facilitate synergy, division of labour and sharing of knowledge among funders” (p. 4) | “[hopefully] ESSENCE members [typically funders] and other partners will have access” (p. 2) |
| TDR (2011) | “A tool … [that] promotes and guides systematic assessment of TDR’s strategic and technical relevance and contribution towards its vision” (p. 5). “Guides TDR staff and stakeholders through a more systematic way of monitoring and evaluating the Programme’s performance” (p. 6) | “For use both by TDR staff and the broad range of stakeholders involved in the governance and implementation of TDR’s Ten Year Vision and Strategy” (p. 5) |
| FIC-NIH (2005) | Not explicitly stated. Describes roles and responsibilities in relation to organisational systems and suggests evaluation questions and indicators | Not explicitly stated. Program Officers, Principal Investigators, external evaluators and staff of partner institutions are among those whose roles in assessment are described |
| WOTRO (2005) | Not explicitly stated. Specifies data to be collected and presented in reviews | Not explicitly stated. The review committee [external evaluators] and programme partners are mentioned in the document |
| IDRC (2005) | “A generic guide for the assessment of any capacity development activity or project component supported by [IDRC]; and for any form of assessment (formative or summative; monitoring or evaluation)” (p. 2) | Not explicitly stated. Implicitly, anyone assessing any capacity development activity supported by IDRC. Refers to “the evaluator” at points |
| IDRC (2004) | “Promote coherence between the aims and objectives expressed at the corporate level and those expressed at the program level” (p. 4). “Help managers make decisions that support programming efforts to achieve the IDRC mission” (p. 2). “Provides a structure for organizing and reporting on results at the corporate level” (p. 2) | Managers within IDRC. Also briefly mentions roles for program teams, centre support units, and the Board of Governors |
| CIDA (2004) | “Ensure that the Agency’s staff, consultants and partners are properly informed about how evaluations of CIDA’s investments … are to be carried out, and what they are expected to achieve” (Foreword). “A thorough reading offers an in-depth understanding of the Agency’s evaluation activities. Or, individual items of interest can be quickly accessed. Uninitiated readers can learn about the fundamentals of the evaluation process, while seasoned practitioners can benefit from normative guidance to complete the task-at-hand” (p. 1) | “Staff, consultants and partners” (Foreword) |
Framework development and proposed review processes
| Framework | Development process | Sources drawn on | Review/revision plans |
|---|---|---|---|
| Danida (2012) | Produced by the Foreign Ministry’s evaluation unit. Aspects may have been inspired by participation in peer reviews of other evaluation functions conducted by OECD/DAC and United Nations networks | Draws heavily on the OECD/DAC quality standards for development evaluation (2010), from which key statements are incorporated. Refers to its own study on conducting evaluations jointly with partner countries. Refers to a small number of academic publications. Signposts material produced by various international development-related networks and World Bank initiatives | “The guidelines will be updated as need arises, and comments and suggestions for improvements or clarifications are welcome”. May also learn from the Multilateral Organisations’ Performance Assessment Network (MOPAN) and the multilateral development banks’ Common Performance Assessment System (COMPAS). This 2012 document is a revised version of a document published in 2006 |
| Danida (2011) | Not stated | Uses the OECD standard Managing for Development Results (MfDR) as its management strategy | Requests feedback from staff and external partners. Plans to review the performance measurement tools listed. This 2011 document replaces a document published in 2005 |
| ESSENCE (2011) | “Consultation, first between various ESSENCE members and secondly with a broader group of stakeholders (including African recipients of funding for health research)” | Five publications: one academic article; two reports related to other health RCS funder evaluation frameworks [TDR and IDRC]; two reports by independent policy/practice organisations | “The matrix is planned to be revised periodically. Funders are invited to adopt a learning attitude towards capacity strengthening and to contribute to the continuous improvement of the matrix, based on their own experiences with capacity strengthening Initiatives” |
| TDR (2011) | Developed by internal working groups, consulting with internal and external stakeholders and advised by an external advisory group. External input was mainly from research institutions, research funding institutions, and development agencies | Fifteen “related documents” are listed. These were produced by other development-related organisations: OECD/DAC, various United Nations programmes and the World Bank | “This framework will need to be continuously reviewed and refined in order to address the Programme needs” |
| FIC-NIH (2005) | Not stated | None | Not stated. This 2005 document is a revised version of an initial document published in 2002 |
| WOTRO (2005) | Not stated | None | Not stated |
| IDRC (2005) | Produced by two university-based international development consultants whose expertise included evaluation and monitoring. Based on a file review of capacity development in 40 IDRC projects | References a report on outcome mapping published by IDRC | Not stated |
| IDRC (2004) | Developed by the Senior Management Committee and the Evaluation Unit | None | “CAF is an experiment … and will require refinement on an ongoing basis”. “The evaluation unit, policy and planning group, and senior management committee will periodically assess the utility of the CAF performance areas, and decide how to make appropriate modifications” |
| CIDA (2004) | Prepared by the evaluation unit and an external consultant | References documents drawn from government and other agencies in its own country, and OECD/DAC work | “We welcome any comments and/or suggestions that you may have” [email address provided] |
Characteristics of individual frameworks related to harmonisation and to building evaluation capacity
| Framework | Harmonisation | Building evaluation capacity | Practical support |
|---|---|---|---|
| Danida (2012) | Whole chapter on multilateral development coordination. Highlights benefits of using country systems and data, and of joint or coordinated PM&E | Mentions the need to assess team capacity for qualitative evaluation and the cultural competence of data collectors. Mentions that it may develop the capacity of country organisations it works with on evaluations | Five annexes cover key issues with regard to codes of conduct; quality control and assurance; project inception reporting; evaluation reporting; analytical quality |
| Danida (2011) | Some material about coordinating multilateral projects. Highlights benefits of using partners’ monitoring systems | Mentions the possible need to develop capacity for output monitoring among partners | Provides links to tools that funder staff may use, particularly for monitoring |
| ESSENCE (2011) | Emphasises the need for harmonisation of practices across different funders and for funders to use the framework in partnership | Paragraph on general capacity strengthening for funders, but nothing specific to evaluation | Little practical detail. Some key concepts regarding indicators are clarified. There is a list of sources, but this does not indicate which provide practical guidance |
| TDR (2011) | Mentions need for partnership across funders | Not mentioned | Contains quite detailed instructions, plus a clear and fairly comprehensive glossary. There is a reading list, but this is not prominent and does not indicate which documents provide practical guidance |
| FIC-NIH (2005) | Emphasises stakeholder involvement in planning only | Training and support for funder staff is provided by the Evaluation Officer; support for other stakeholders is not mentioned | Little detail. Provides most on indicators, giving examples, but not how to identify and construct an indicator |
| WOTRO (2005) | Not mentioned | Not mentioned | No information to support practice |
| IDRC (2005) | Not mentioned | Mentions that health RCS may need to address monitoring capacity | Explains the thinking behind CS evaluation, relationships between PM&E, and the types of questions to ask, providing examples of particular questions |
| IDRC (2004) | Not mentioned | Not mentioned | Provides a link to characteristics of good performance and associated monitoring questions, but nothing beyond this |
| CIDA (2004) | Not mentioned | Takes the form of a capacity-building tool. Some discussion about building capacity among local recipients | The entire document focuses on providing detailed information to support the conduct of CIDA evaluations. There are checklists for each chapter, and a list of acronyms |
Relative strengths of frameworks
| Framework | Strengths |
|---|---|
| Danida (2012) | References/links to further information, e.g., on coordination and alignment; structured plan for reviewing/developing the framework; explicit use of OECD/DAC quality standards; addresses quality and validity |
| Danida (2011) | No particular strengths identified |
| ESSENCE (2011) | Short; some emphasis on planning; indicators are health RCS-specific and include examples; stakeholder involvement in developing the framework |
| TDR (2011) | Some health RCS-specific indicators, including examples; accessibility (glossary, diagrams); stakeholder involvement in developing the framework; some consideration of the impact of the funding agency’s own systems |
| FIC-NIH (2005) | Short; some consideration of the impact of the funding agency’s own systems |
| WOTRO (2005) | Short; indicators are health RCS-specific |
| IDRC (2005) | Capacity strengthening-specific indicators; based on consideration of the specific processes of capacity strengthening, equivalent to a conceptual model; based on in-depth research of the agency’s experiences; provides detailed information to support practice |
| IDRC (2004) | Short |
| CIDA (2004) | Emphasis on planning; emphasis on building evaluation capacity; accessibility/checklists; provides detailed information to support practice; addresses stakeholder participation issues; addresses equity issues, including gender; guidance on data collection and quantitative measures/indicators; some guidance on qualitative data; guidance on making comparisons and judgements; addresses quality and validity; some use of theory; guidance on learning; guidance on timing and timescales |