| Literature DB >> 29471840 |
Shishi Wu, Helena Legido-Quigley, Julia Spencer, Richard James Coker, Mishal Sameer Khan.
Abstract
BACKGROUND: In light of the gap in evidence to inform future resource allocation decisions about healthcare provider (HCP) training in low- and middle-income countries (LMICs), and the considerable donor investments being made towards training interventions, evaluation studies that are optimally designed to inform local policy-makers are needed. The aim of our study is to understand what features of HCP training evaluation studies are important for decision-making by policy-makers in LMICs. We investigate the extent to which evaluations based on the widely used Kirkpatrick model - focusing on direct outcomes of training, namely reaction of trainees, learning, behaviour change and improvements in programmatic health indicators - align with policy-makers' evidence needs for resource allocation decisions. We use China as a case study where resource allocation decisions about potential scale-up (using domestic funding) are being made about an externally funded pilot HCP training programme.
Keywords: Evaluation; Framework; Healthcare provider training; Informing policy decisions
Year: 2018 PMID: 29471840 PMCID: PMC5824449 DOI: 10.1186/s12961-018-0292-2
Source DB: PubMed Journal: Health Res Policy Syst ISSN: 1478-4505
Participant characteristics
| Characteristic | Interviews | Focus group discussions |
|---|---|---|
| Total participants | 10 | 20 |
| Female (%) | 3 (30%) | 13 (65%) |
| Organisation | | |
| Centre for Disease Control representatives | 4 (40%) | 16 (80%) |
| Chinese Medical Association representatives | 3 (30%) | 4 (20%) |
| Hospital managers | 3 (30%) | 0 |
| Geographical scope of work | | |
| National level | 6 (60%) | 11 (55%) |
| Provincial level | 4 (40%) | 9 (45%) |
The four levels of the Kirkpatrick model and their definitions
| Outcome level | Definition |
|---|---|
| Reaction | The degree to which participants react favourably to the training and perceive it as valuable. |
| Learning | The degree to which participants acquire the intended knowledge, skills and attitudes from taking part in the learning event. |
| Behaviour | The degree to which participants apply what they learned during training on the job. |
| Programmatic results | The degree to which targeted improvements occur in the team, programme or other context in which the trainee works; for example, successful treatment rate, case detection rate or patient satisfaction with services delivered by trained HCPs. |
Fig. 1 Modified training evaluation framework
Definition of additional components and examples of information needed
| Additional elements in proposed framework | Definition | Example of information needed |
|---|---|---|
| Broader programmatic results (Specific programme elements) | Indirect benefits from the training programmes | Enlarged pool of trainers; lessons learned from management of training programmes |
| Resources required (Broader programmatic considerations) | Resources invested in the training programme, including both direct and indirect costs | Human resource time devoted; trainers’ salary; cost for trainees’ accommodation |
| Sustainability (Broader programmatic considerations) | Whether the training programme can continue in the future | Contextual factors (demand from stakeholders to continue training); political support from local or national government; sufficient resources and funding |
| Scalability (Broader programmatic considerations) | Whether the training programme can be scaled up in other regions to cover a larger population | Local needs for the same training programme in other regions; ease of adaptability to different contexts; feasible plans for scale-up in place |
| Evaluation methodology (Credibility of evaluation) | Robustness of evaluation design and level of details provided to help policy-makers determine if objective approaches are used by evaluators | Study methodology including control groups; confounders and biases acknowledged |
| Composition of evaluation team (Credibility of evaluation) | Qualification of evaluators, their perceived independence and their knowledge of local context. The reputation of institutions to which the evaluation team members are affiliated also plays a role. | Potential conflicts of interest of evaluators; reputation of evaluators’ institution; technical background of evaluators; local language proficiency; experience in the local context |
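For readers who want to operationalise the modified framework as an evaluation checklist, the sketch below encodes its domains and elements as a simple data structure. The grouping follows the tables above, but the dictionary layout and the coverage-checking function are illustrative assumptions, not part of the published framework.

```python
# Illustrative encoding of the modified training evaluation framework as a
# checklist; the structure and helper function are assumptions for demonstration.
MODIFIED_FRAMEWORK = {
    "Specific programme elements": [
        "Reaction",
        "Learning",
        "Behaviour",
        "Programmatic results",
        "Broader programmatic results",
    ],
    "Broader programmatic considerations": [
        "Resources required",
        "Sustainability",
        "Scalability",
    ],
    "Credibility of evaluation": [
        "Evaluation methodology",
        "Composition of evaluation team",
    ],
}

def missing_elements(evidence_provided):
    """Return framework elements not yet covered by an evaluation report.

    `evidence_provided` is a set of element names the report addresses.
    """
    return {
        domain: [e for e in elements if e not in evidence_provided]
        for domain, elements in MODIFIED_FRAMEWORK.items()
    }

# Example: an evaluation that reports only the four Kirkpatrick-style outcomes
# would still leave the broader and credibility-related elements unaddressed.
print(missing_elements({"Reaction", "Learning", "Behaviour", "Programmatic results"}))
```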
• Knowledge assessment: All trained healthcare providers (HCPs) were asked to complete three structured questionnaires at the start of the training, immediately after the training and 6 months after the training. Scores from the pre-training test were compared with scores from the first and second post-training tests (a worked sketch of these calculations follows this list).
• Practical assessment: Standardised patients trained to present with TB symptoms visited selected trained HCPs at their health facilities. The standardised patients rated the trainees' medical practice on a scale of 1–10.
• Cost-effectiveness projection: The total cost of the HCP training programme and the estimated improvement in patient-level outcomes were calculated and compared.
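As a minimal sketch of the arithmetic behind the knowledge assessment and the cost-effectiveness projection described above: the score records, cost figures and variable names below are hypothetical placeholders, not data from the study.

```python
# Illustrative sketch only: all scores and costs are hypothetical,
# not figures reported by the evaluation described above.
from statistics import mean

# Pre-training, immediate post-training and 6-month post-training scores
# for each trained healthcare provider (0-100 scale, hypothetical).
knowledge_scores = {
    "HCP_01": {"pre": 54, "post": 78, "post_6m": 71},
    "HCP_02": {"pre": 61, "post": 83, "post_6m": 80},
    "HCP_03": {"pre": 49, "post": 70, "post_6m": 66},
}

def mean_gain(scores, follow_up):
    """Average change from the pre-training test to a given follow-up test."""
    return mean(s[follow_up] - s["pre"] for s in scores.values())

immediate_gain = mean_gain(knowledge_scores, "post")
retained_gain = mean_gain(knowledge_scores, "post_6m")

# Crude cost-effectiveness projection: total programme cost divided by the
# estimated number of additional patients successfully treated (hypothetical).
total_training_cost = 120_000
additional_patients_treated = 300
cost_per_additional_patient = total_training_cost / additional_patients_treated

print(f"Mean immediate knowledge gain: {immediate_gain:.1f} points")
print(f"Mean gain retained at 6 months: {retained_gain:.1f} points")
print(f"Projected cost per additional patient treated: {cost_per_additional_patient:.0f}")
```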