Dafrossa Lyimo1, Christopher Kamugisha2, Emmanuel Yohana1, Messeret Eshetu3, Aaron Wallace4, Kirsten Ward4, Carsten Mantel5, Karen Hennessey5.
Abstract
A National Immunization Program Review (NIP Review) is a comprehensive external assessment of the performance of a country's immunization programme. The number of recommended special-topic NIP assessments, such as those for vaccine introduction or vaccine management, has increased. These assessments often have substantial overlap with NIP reviews, raising concern about duplication. Innovative technical and management approaches, including integrating several assessments into one, were applied in the United Republic of Tanzania's 2015 NIP Review. These approaches and processes were documented and a post-Review survey and group discussion. The Tanzania Review found that integrating assessments so they can be conducted at one time was feasible and efficient. There are concrete approaches for successfully managing a Review that can be shared and practiced including having a well-planned desk review and nominating topic-leads. The use of tablets for data entry has the potential to improve Review data quality and timely analysis; however, careful team training is needed. A key area to improve was to better coordinate and link findings from the national-level and field teams.Entities:
Keywords: Immunization; evaluation; review
Year: 2017 PMID: 29610647 PMCID: PMC5878854 DOI: 10.11604/pamj.2017.28.209.10466
Source DB: PubMed Journal: Pan Afr Med J
Figure 1: The 10 regions and 20 districts selected for field assessment in the 2015 National Immunization Program (NIP) Review in Tanzania
Results from a survey completed by 22 (66%) reviewers participating in a comprehensive national immunization program review, United Republic of Tanzania, July 2015
| Area | Query | Percent in Agreement | Percent in Disagreement | Percent Unsure |
|---|---|---|---|---|
| Integrating multiple assessments | The number of objectives integrated in the assessment was acceptable | 60% | 40% | 0% |
| Data collection tools | The length of data collection tools was acceptable | 65% | 35% | 0% |
| | Questions were acceptable | 44% | 56% | 0% |
| | Balance of quantitative and qualitative questions was acceptable | 62% | 38% | 0% |
| Use of tablets | Tablet use was easy or acceptable | 100% | 0% | 0% |
| | Tablets increased accuracy of data entry | 47% | 6% | 47% |
| | Tablets decreased workload | 47% | 29% | 24% |
| | Tablets disrupted flow of interviews | 24% | 76% | 0% |
| | Recommend tablets for future reviews | 82% | 18% | 0% |
| Training | Felt adequately briefed on country policies/strategies prior to field review | 63% | 37% | 0% |
| Synthesis of findings | Had sufficient time to synthesize field data and develop conclusions and recommendations | 86% | 14% | 0% |
Summary of approaches and lessons learnt for improving the quality and efficiency of the National Immunization Programme (NIP) Review, United Republic of Tanzania, July 2015
| Activity | Challenge addressed | Approach taken | Lessons learnt |
|---|---|---|---|
| Phase 1 | NIP Reviews are often externally driven and can lack focus on country priorities | A concept note was used to secure national participation and input | The concept note, desk review and tailored data collection tools helped engage the country programme and shape the Review to meet country needs |
| Phases 1 & 2: Integration of multiple assessments | NIPs must conduct multiple assessments with overlapping themes | The Review was designed to integrate multiple assessments | Reviewers found integrating assessments feasible; country NIP staff reported this to be highly desirable and efficient |
| Phase 2 | A single Review-lead often does not have the breadth of technical knowledge needed to lead all topics of the review | Reviewers with relevant expertise were identified to serve as topic-leads and synthesize conclusions and recommendations (C&Rs) in their respective topic areas | Expert topic-leads can significantly improve the quality and relevance of C&Rs |
| Phase 2 | Data collection tools are often too lengthy and lack focus on priorities | Tools were cut by 30% from the generic NIP data collection tools | A streamlined tool helped focus on priorities, and the time saved allowed teams to explore barriers and solutions in more depth |
| Phase 2 | Review teams are often not well briefed on the country programme, its status, and the data collection tools | The desk review guided the content of team training | A 3-day training provided adequate time to prepare participants for the field review |
| Phase 3 | Field review data are often not ready in time for the debriefing session | A tablet was given to each team | Tablets allowed timely data analyses to support final presentation development |
| Phase 4 | Developing a reasonable number of meaningful and actionable recommendations can be challenging | Topic-leads facilitated discussion and drafted initial C&Rs | Topic-leads were important for leading discussion and generating meaningful recommendations |
| Phase 5 | Review recommendations may not lead to actionable work plans | The Review was intentionally scheduled to be completed some weeks before a workshop for developing the NIP's new comprehensive multiyear plan (cMYP, 2016-2020) | Linking the persons responsible for the NIP Review and the cMYP, and timing both activities accordingly, is critical for translating Review recommendations into NIP action plans |