Samantha Y Rowe, Dennis Ross-Degnan, David H Peters, Kathleen A Holloway, Alexander K Rowe.
Abstract
BACKGROUND: Although supervision is a ubiquitous approach to support health programs and improve health care provider (HCP) performance in low- and middle-income countries (LMICs), quantitative evidence of its effects is unclear. The objectives of this study are to describe the effect of supervision strategies on HCP practices in LMICs and to identify attributes associated with greater effectiveness of routine supervision.
Keywords: Developing countries; Health workers; Performance; Quality improvement; Supervision; Systematic review
Year: 2022 PMID: 34991608 PMCID: PMC8734232 DOI: 10.1186/s12960-021-00683-z
Source DB: PubMed Journal: Hum Resour Health ISSN: 1478-4491
Effectiveness of supervision strategies on the practices of professional health care providers
| Strategies tested^a (intervention arm) | Strategies tested^a (reference arm) | No. of study comparisons (risk of bias: low, moderate, high, very high) | Outcome scale | Median MES^b, in %-points (IQR; range) |
|---|---|---|---|---|
| Routine supervision | Controls | 9 (3, 1, 4, 1) | Percentage | 10.7 (6.9, 27.9; 2.1, 67.8) |
| Routine supervision | Controls | 2 (0, 1, 1, 0) | Continuous | –29.5 (NA; –90.4, 31.4) |
| Routine supervision plus other strategy components | Other strategy components | 4 (0, 0, 2, 2) | Percentage | 4.1 (NA; 0, 7.1) |
| Routine supervision plus other strategy components | Other strategy components | 1 (0, 0, 1, 0) | Continuous | 24.9 (NA; NA) |
| Routine supervision plus benchmarking plus other strategy components^c | Other strategy components | 1 (0, 0, 1, 0) | Percentage | 2.2 (NA; NA)^d |
|  |  |  | Continuous | –0.6 (NA; NA)^d |
| Audit with in-person feedback | Controls | 4 (1, 1, 2, 0) | Percentage | 15.0 (NA; 2.4, 33.5) |
| Audit with in-person feedback | Controls | 1 (0, 0, 0, 1) | Continuous | –3.0 (NA; NA) |
| Audit with in-person feedback plus other strategy components | Other strategy components | 1 (1, 0, 0, 0) | Percentage | 5.0 (NA; NA) |
| Audit with in-person feedback plus peer review | Controls | 1 (0, 0, 1, 0) | Percentage | 19.0 (NA; NA) |
| Audit with written feedback | Controls | 2 (2, 0, 0, 0) | Continuous | 17.4 (NA; 17.3, 17.5) |
| Audit with written feedback plus benchmarking plus other strategy components | Other strategy components | 1 (0, 1, 0, 0) | Percentage | 0.2 (NA; NA)^d |
|  |  |  | Continuous | 19.1 (NA; NA)^d |
| Audit with in-person feedback plus audit with written feedback | Controls | 2 (2, 0, 0, 0) | Percentage | 10.1 (NA; 8.5, 11.7) |
| Audit with in-person feedback | Audit with written feedback | 1 (0, 0, 0, 1) | Percentage | 22.2 (NA; NA)^d |
|  |  |  | Continuous | 16.7 (NA; NA)^d |
| Peer review plus other strategy components | Other strategy components | 1 (0, 1, 0, 0) | Percentage | 3.6 (NA; NA)^d |
|  |  |  | Continuous | 33.0 (NA; NA)^d |
| Health care provider received support from non-supervisory staff plus other strategy components | Other strategy components | 2 (0, 2, 0, 0) | Percentage | –7.3 (NA; –16.9, 2.4) |
%-points = percentage-points; IQR = interquartile range; MES = median effect size; NA = not applicable
^a See Boxes 1 and 2 for descriptions of the strategies and the comparisons, respectively. This table includes only comparisons from non-equivalency studies
^b Effect sizes were calculated as the intervention-arm improvement minus the reference-arm improvement
^c Other strategy components include audit with in-person and written feedback
^d Results for the percentage and continuous outcomes in this row are from the same study
^e For six study comparisons for percentage outcomes involving audit with in-person feedback alone or combined with written feedback: median MES = 10.1 %-points; IQR = 6.2, 23.7; range = 2.4, 33.5. For seven study comparisons for percentage outcomes involving audit with in-person feedback alone or combined with either peer review or audit with written feedback: median MES = 11.7 %-points; IQR = 6.2, 23.7; range = 2.4, 33.5
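The effect-size definition in footnote b (intervention-arm improvement minus reference-arm improvement, with a study's median effect size, MES, taken across its outcomes) can be illustrated with a minimal Python sketch. All numeric values below are hypothetical, not taken from the review:

```python
from statistics import median

def effect_size(intv_baseline, intv_followup, ref_baseline, ref_followup):
    """Effect size in percentage-points: intervention-arm improvement
    minus reference-arm improvement (footnote b's definition)."""
    return (intv_followup - intv_baseline) - (ref_followup - ref_baseline)

# Hypothetical study comparison with three percentage outcomes.
outcomes = [
    effect_size(40.0, 62.0, 42.0, 48.0),  # 22.0 - 6.0 = 16.0
    effect_size(55.0, 70.0, 50.0, 58.0),  # 15.0 - 8.0 = 7.0
    effect_size(30.0, 45.0, 33.0, 36.0),  # 15.0 - 3.0 = 12.0
]

# MES for this study comparison: the median across its outcomes.
mes = median(outcomes)
print(mes)  # 12.0
```

The table's summary statistics (median MES, IQR, range) are then computed across the MES values of the individual study comparisons in each row.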
Fig. 1Effectiveness of supervision strategies for professional health care providers in low- and middle-income countries, as assessed with outcomes expressed as percentages. N = number of study comparisons. Red indicates results from a single study, which should be interpreted with caution. The numbers next to each spoke are the median of median effect sizes, in percentage-points, and (in parentheses) the number of study comparisons. For each comparison, the arrow points toward the study group with greater effectiveness. For example, routine supervision was more effective than controls by a median of 10.7 percentage-points. aThese are non-supervision strategy components (e.g., training) that could vary among study comparisons, but are the same for any two arms of a given study comparison (e.g., routine supervision plus training versus training)
[Box 1 item text not captured in this extraction: the top part lists six supervision strategies and the bottom part lists 11 other strategy components]
^a Detailed definitions in Appendix 1 (pages 39–44) of [1]
^b Other strategy components (especially training) often include printed information for HCPs; in these cases, the printed information was not considered a separate component
• Comparison of a supervision strategy^a alone^b versus a (no-intervention) control group
• Comparison of one supervision strategy^a alone^b versus a different supervision strategy^a alone^b (e.g., “audit with in-person feedback” versus “audit with written feedback”)
• Comparison of a supervision strategy^a combined with a specific group of other strategy components^c versus that same specific group of other strategy components^c (e.g., “routine supervision plus training” versus “training”)
• Comparison of one supervision strategy^a combined with a specific group of other strategy components^c versus a different supervision strategy^a combined with that same specific group of other strategy components^c (e.g., “routine supervision plus training” versus “peer review plus training”)
• Comparison in a supervision-related study of a strategy versus a “gold standard” comparison group^d
^a Any of the six supervision strategies listed in the top part of Box 1
^b That is, not combined with other strategy components listed in the bottom part of Box 1
^c One or more of the 11 other strategy components in the bottom part of Box 1
^d Only one equivalency study was included in the analysis. In that study, at baseline, lay HCPs in two study arms received routine supervision plus reminders about making home visits; during the intervention period, the gold standard control arm continued receiving both supervision and reminders, and the intervention arm received only the reminders
• Supervisors participated in a group process with health care providers that involved discussing a problem and collaborating to find a solution
• Supervisors received supervision
• Supervisors received training
• An explicit element of supervisory visits was that supervisors gave feedback to health care providers
• Supervisors used a standard checklist during supervisory visits
• Supervision frequency (i.e., number of visits per year). For studies with a duration that was not a multiple of 12 months, frequency was estimated as the number of visits during the study intervention period, divided by the intervention period (in months), times 12
• Number of supervision visits during the study’s intervention period (i.e., supervision “dose”)
• Baseline performance level
• Time since supervision was conducted (in months)
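The annualization rule above for supervision frequency (visits divided by intervention months, times 12) can be sketched as a small Python helper; the example figures are hypothetical:

```python
def annualized_frequency(visits, intervention_months):
    """Estimated supervision visits per year for studies whose
    intervention period is not a multiple of 12 months:
    visits / months * 12."""
    return visits / intervention_months * 12

# Hypothetical example: 6 supervisory visits over an 8-month
# intervention period works out to 9 visits per year.
print(annualized_frequency(6, 8))  # 9.0
```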
• Replicating studies of promising strategies tested with few studies (e.g., audit with in-person feedback plus peer review)
• Head-to-head comparisons of key supervision strategies (e.g., routine supervision versus audit with feedback), strategy combinations (e.g., audit with feedback plus peer review versus audit with feedback alone), and supervision attributes (e.g., different supervision methods, such as involving supervisors in group problem-solving with HCPs, supervision of supervisors, and frequencies). Understanding the optimal frequency or dose of supervision in different contexts is an especially critical topic
• Rigorous studies of supervision strategies to improve the practices of lay or community health workers
• Better quantitative and qualitative understanding of how context influences strategy effectiveness
• Use standardized methods, especially for outcomes, strategy description, implementation (including dose and fidelity), and characterization of study context
• Prioritize head-to-head studies, which provide stronger evidence for comparing different supervision approaches
• Have rigorous study designs, such as interrupted time series with a randomized comparison group, which reduce bias and show how effectiveness changes over time
• Have follow-up periods that match the timeframe that programs require for improvements to be meaningful (e.g., at least 12 months), and include multiple measures of effect so that changes (reductions or further improvements) in effectiveness over time can be quantified
• Include assessments of strategy cost and cost-effectiveness
• Be designed to better contribute to filling gaps in the evidence base about strategy choice and combinations of components^a
^a Studies directly comparing two supervision approaches without other components are the easiest to interpret. However, given the generally moderate effect of supervision as a sole strategy, studies should include other enhancing components in both study arms (e.g., supervision approach A plus training versus supervision approach B plus training)