Shannon D Scott1, Thomas Rotter2, Rachel Flynn3, Hannah M Brooks3, Tabatha Plesuk3, Katherine H Bannar-Martin3, Thane Chambers4, Lisa Hartling5.
Abstract
BACKGROUND: Experimental designs for evaluating knowledge translation (KT) interventions can provide strong estimates of effectiveness but offer limited insight into how the intervention worked. Consequently, process evaluations have been used to explore the causal mechanisms at work; however, there are limited standards to guide this work. This study synthesizes current evidence of KT process evaluations to provide future methodological recommendations.
Keywords: Health interventions; KT interventions; Knowledge translation; Process evaluation; Research use; Systematic review
Year: 2019 PMID: 31699136 PMCID: PMC6836407 DOI: 10.1186/s13643-019-1161-y
Source DB: PubMed Journal: Syst Rev ISSN: 2046-4053
Process evaluation systematic review inclusion criteria
| Study design | Must be a primary research study. All study designs are eligible, e.g., experimental, quasi-experimental, and non-experimental designs (e.g., case study). Opinion pieces, commentaries, methodological papers, book chapters, books, dissertations, conference abstracts, protocols, and reviews will not be included. |
| Study criteria | Studies must have a clearly defined intervention. A trainee healthcare professional (not yet licensed/registered) either delivering or receiving the intervention will be excluded if: (a) the intervention is mandatory curricula for completing their degree or gaining licensure; or (b) no licensed healthcare professional is involved in the intervention. Process evaluations may be separate. |
| Outcome(s) | The process evaluation must be distinct from the primary outcomes of the KT/research implementation component. Where the paper is only reporting the process evaluation, this will be considered a distinct outcome. |
1Health is defined according to the WHO (1946) conceptualization as a state of complete physical and mental well-being, and not merely the absence of disease or infirmity; it includes prevention components and mental health but not “social health”.
Fig. 1 PRISMA flow diagram (adapted from Moher et al. 2009)
Types of research design and associated quality of included studies (n = 226)
| Study design | Number of studies (%) | MMAT score: 0 | 25 | 50 | 75 | 100 |
|---|---|---|---|---|---|---|
| Mixed methods | 25 (11.1) | 2 | 1 | 8 | 11 | 3 |
| Multi-methods | 55 (24.3) | 5 | 15 | 21 | 10 | 4 |
| Qualitative | 85 (37.6) | – | 1 | 11 | 42 | 31 |
| Quantitative descriptive | 44 (19.5) | – | 8 | 11 | 14 | 11 |
| Quantitative non-randomized | 3 (1.3) | – | 1 | – | 1 | 1 |
| Quantitative RCT | 14 (6.2) | – | 3 | – | 4 | 7 |
RCT randomized controlled trial
Process evaluation research design of included studies (n = 226)
| Process evaluation design | Number of studies (%) |
|---|---|
| Mixed methods | 21 (9.3) |
| Multi-methods | 56 (24.8) |
| Qualitative | 98 (43.4) |
| Quantitative descriptive | 51 (22.6) |
Thematic analysis of process evaluation terms used in included studies (n = 226)
| Process evaluation terms* | Number of studies |
|---|---|
| Acceptability | 46 |
| Adherence and fidelity | 65 |
| Attitudes | 17 |
| Barriers and facilitators | 113 |
| Barriers only | 43 |
| Contextual factors | 25 |
| Experiences and perceptions | 87 |
| Facilitators only | 7 |
| Feasibility | 39 |
| Feedback | 16 |
| Satisfaction | 30 |
| Sustainability and effectiveness | 31 |
*Some studies used multiple terms to describe the process evaluation and its focus
Methods of data collection of included studies (n = 226)
| Data collection methods* | Number of studies |
|---|---|
| Qualitative methods | |
| Individual interviews | 123 |
| Group interviews | 15 |
| Focus groups | 51 |
| Open-ended survey or questionnaires | 14 |
| Other | 35 |
| Quantitative methods | |
| Survey or questionnaire | 100 |
| Record review | 14 |
| Other | 37 |
*Some studies had more than one method of data collection
Timing of data collection of included studies (n = 226)
| Time of data collection | Number of studies (%) |
|---|---|
| Pre-intervention | 3 (1.3) |
| Pre- and during intervention | 5 (2.2) |
| Pre- and post-intervention | 40 (17.7) |
| Pre-, during, and post-intervention | 18 (8.0) |
| During and post-intervention | 29 (12.8) |
| During intervention | 25 (11.1) |
| Post-intervention | 104 (46.0) |
| Unclear | 2 (0.9) |
| Total | 226 (100) |
Intervention details of included studies (n = 226)
| KT intervention type | Number of studies (%) |
|---|---|
| Professional | 218 (96.5) |
| Professional and organizational | 5 (2.2) |
| Professional and financial | 3 (1.3) |
| Total | 226 (100) |
| KT intervention recipient type | |
| HCP | 154 (68.1) |
| HCP and patients | 13 (5.8) |
| HCP and others | 59 (26.1) |
| Total | 226 (100) |
| Target behavior of KT intervention* | |
| General management of a problem | 132 |
| Clinical prevention services | 45 |
| Patient outcome | 35 |
| Procedures | 33 |
| Patient education/advice | 32 |
| Prescribing | 20 |
| Test ordering | 13 |
| Diagnosis | 11 |
| Referrals | 5 |
| Record keeping | 2 |
| Professional-patient communication | 1 |
| Total | 226 |
*Some studies had multiple targeted behaviors
Theories used by theory-guided studies (n = 86)
| Applied theories* | Number of studies |
|---|---|
| Rogers’ diffusion of innovations theory | 13 |
| Normalization process theory | 10 |
| Promoting Action on Research Implementation in Health Services framework | 9 |
| Theory of planned behavior | 9 |
| Plan-Do-Study-Act Framework | 7 |
| Theoretical Domains Framework | 6 |
| Consolidated Framework for Implementation Research | 5 |
| Reach, Effectiveness, Adoption, Implementation, and Maintenance Framework | 4 |
| Behavior Change Theory | 3 |
| Carroll et al. Framework for Intervention Fidelity | 3 |
| Grol and Wensing Theoretical Framework | 3 |
| Hulscher et al. Process Evaluation Framework | 3 |
| Medical Research Council Framework | 3 |
| Braun and Clarke Thematic Analysis in Psychology | 2 |
| Kirkpatrick and Kirkpatrick Training Program Evaluation Model | 2 |
| Ottawa Model of Research Use | 2 |
| Precede/Proceed Implementation Model | 2 |
| Prochaska and DiClemente Stages of Change Model | 2 |
| Other | 19 |
| Total | 86 |
*Some studies had multiple theories guiding the process evaluation
Distribution of MMAT scores (0 = lowest and 100 = highest score)
| MMAT score distribution | Number of studies (%) |
|---|---|
| 0 | 7 (3.1) |
| 25 | 29 (12.8) |
| 50 | 52 (23.1) |
| 75 | 81 (35.8) |
| 100 | 57 (25.2) |
| Total | 226 (100) |
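The frequency tables above can be sanity-checked with a short script (an illustrative sketch, not part of the original review; the counts are transcribed from the MMAT score distribution table, and the recomputed percentages agree with the reported values to within rounding):

```python
# Check that the MMAT score distribution counts sum to the review's
# sample size (n = 226) and recompute each percentage from its count.
mmat_counts = {0: 7, 25: 29, 50: 52, 75: 81, 100: 57}  # transcribed from the table

total = sum(mmat_counts.values())
assert total == 226  # matches the stated n

# Percentages rounded to one decimal place, as reported in the table.
percentages = {score: round(100 * count / total, 1)
               for score, count in mmat_counts.items()}
print(percentages)  # e.g., 81 studies scoring 75 -> 35.8%
```

The same check applies to the other tables whose "Number of studies (%)" columns are computed against n = 226.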