Margaret Cargo1, Ivana Stankov2, James Thomas3, Michael Saini4, Patricia Rogers5, Evan Mayo-Wilson6, Karin Hannes7.
Abstract
BACKGROUND: Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials. Information about implementation is also required in systematic reviews of complex interventions to facilitate the translation and uptake of evidence of provider-based prevention and treatment programs. To capture whether and how implementation is assessed within systematic effectiveness reviews, we developed a checklist for implementation (Ch-IMP) and piloted it in a cohort of reviews on provider-based prevention and treatment interventions for children and young people. This paper reports on the inter-rater reliability, feasibility and reasons for discrepant ratings.
Year: 2015 PMID: 26346461 PMCID: PMC4562191 DOI: 10.1186/s12874-015-0037-7
Source DB: PubMed Journal: BMC Med Res Methodol ISSN: 1471-2288 Impact factor: 4.615
Fig. 1Conceptual framework for developing program theory. Source: Chen H-T. Practical Program Evaluation. Thousand Oaks, CA: Sage Publications, 2005. Reprinted with permission from Sage Publications
Domains and items within each domain for the Checklist for Implementation (Ch-IMP)
| Action model | ||
|---|---|---|
| Intervention & service delivery protocol | Intervention heterogeneity | Whether consideration was given to the range of strategies, elements, activities, types of components for the interventions included within the review. If relevant, please use the comment function to elaborate on the specific aspects of intervention heterogeneity captured in the review. |
| Target population | Age | Specific age or age range of participants |
| Gender | Gender of participants | |
| Grade | Specific grade or grade range of participants | |
| Ethnicity | Ethnic background of participants | |
| Socio-economic status | Income, highest level of education, occupation of caregivers of participants | |
| Implementers | Implementer identifieda | Identify who implements the intervention and interfaces with participants. |
| Qualifications | Whether consideration was given to different types of implementers; e.g., whether reviews considered implementers’ education level, certifications, or relevant past experience to assess their ability to do the job. | |
| Ethnicity | Ethnic background of the implementers. | |
| Age | Specific age or age range of implementers. | |
| Gender | Gender of implementers. | |
| SES | One or more of implementer’s income, highest level of education or occupation. | |
| Role of evaluator | Whether the role of the evaluator was addressed (e.g., roles in program delivery vs. evaluation). | |
| Implementing organization | Leadership | Whether program champions and leaders provide instructions or guidance to staff/ implementers to facilitate the intervention delivery. |
| Resourcing | Resources include having sufficient personnel/staff, financial resources/operational budget, space, buildings or sites (physical resources), and materials/equipment (technological resources) to run the program. | |
| Intervention development | Intervention development can be strengthened through strategic program planning and program design processes including intervention mapping, needs assessment, pilot-testing, formative evaluation, evaluability assessment or other developmental work. | |
| Quality of materials | The quality of materials is commonly assessed in relation to the quality of the intervention materials (e.g., activity materials, curriculum) or the training materials/manual. | |
| Cultural sensitivity | Interventions that consider the language, socio-cultural values and traditions of the cultural groups they are intended to benefit may be considered more culturally appropriate. | |
| Training | Assess whether any consideration has been given to training, the quality of training or any other aspect of training that acts to enhance the skills/ competency of service delivery staff. | |
| Program improvement processes | Information from intervention improvement processes such as performance monitoring, feedback, formative evaluation, intervention monitoring can improve delivery. | |
| Technical or supervisory guidance | Providing implementers with practical or expert support and guidance (unrelated to intervention improvement processes) during their implementation efforts to improve implementation quality. | |
| Associate organizations & community partners | Presence/absence of partnership | Note any formal partnerships or collaborations during intervention planning or implementation |
| Other partnership processes | Note one or more aspects of the collaboration or partnership such as pooling resources, dividing responsibilities for different aspects of complex intervention delivery. | |
| Ecological context | Settings considered | Please specify whether this review formally considered the setting in which the intervention was implemented. |
| Settingsb | Please specify the number of settings in which the interventions were implemented. | |
| Program implementation (process evaluation) | ||
| Recruitment | Refers to specific information on the procedures used to recruit participants into or attract participants to the intervention. | |
| Attrition | Attrition is a measure of drop-out rates, or the proportion of participants lost during the course of an intervention or during follow up. | |
| Minimum attrition | Please determine whether the review considered a minimum attrition/drop-out rate. | |
| Reach | Reach refers to the degree to which the intended audience participates in an intervention by ‘their presence’. | |
| Minimum reach | Please determine whether the review considered implementation of minimum reach. | |
| Dose delivered | Dose delivered is established through the efforts and actions of implementers or the implementing organization. This concept refers to the proportion or amount of an intervention (or the combined strategies) delivered to participants; often measured through frequency (e.g., twice per week), duration (e.g., duration of program in months) and intensity (e.g., total program delivery hours). | |
| Minimum dose delivered | Please determine whether the review considered implementation of minimum dose delivered. | |
| Dose received | Dose received, also referred to as exposure, is a characteristic of the target populations’ engagement and active participation in an intervention. It is an objective measure of the extent to which participants actually utilise and interact with program strategies, materials, or resources. | |
| Minimum dose received | Please determine whether the review considered implementation of minimum dose received. | |
| Fidelity | Intervention fidelity is a commonly used measure in process evaluation. It has been conceptualised and measured in a variety of ways. Its essential definition reflects the extent to which an intervention is implemented as originally intended by program developers. It has been applied at levels ranging from intervention strategies to the integrity of an implementing system (i.e., “the extent to which an intervention has been implemented as intended by those responsible for its development”; “closeness between the program-as-planned and the program-as-delivered”; “faithful replication”; the degree to which “specified procedures are implemented as planned”). Please use the comment function to provide the definition used in the review. | |
| Minimum fidelity | Please determine whether the review considered implementation of minimum fidelity. | |
| Adaptation | The extent to which program content is intentionally or purposefully changed during implementation, from the original standard, to enhance program effectiveness. Programs can be adapted to be situationally responsive to local needs and circumstances. Please note the reasons for adaptation. | |
| Minimum adaptation | Please determine whether the review considered implementation of minimum adaptation. | |
| Participant engagement | Refers to the subjective attributes of participants that define their participation in, interaction with, or receptivity to an intervention. This can include what they think of the program (cognitive orientation), such as satisfaction with the program, commitment, or perceived relevance of the program or its outcomes, or how they feel about the program (affective orientation), such as enthusiasm or enjoyment. | |
| Provider engagement | Implementer engagement refers to the subjective attributes of program staff that can influence their capacity to deliver intervention strategies. This can include: a) what staff think about the program content (cognitive orientation), such as satisfaction with the program, commitment, motivation, perceived importance/buy-in, or perceived relevance of the program or its outcomes; b) how staff feel when implementing the program (affective orientation), such as enthusiasm or enjoyment; or c) staff’s interpersonal style or the methods used to communicate concepts (e.g., warmth, empathy). | |
| Co-intervention | When interventions other than the treatment under study are applied differently to the treatment and control/comparison groups. | |
| Contamination | When an intervention is unintentionally delivered to participants in the control group or inadvertent failure to deliver the intervention to the experimental group. | |
| Change model | ||
| A priori change modelc | The Change Model links intervention strategies to a sequence of short, intermediate and longer-term observable and intended outcomes. This sequencing of outcomes is referred to as an a priori change model. | |
| Logic diagram usedc | Please specify whether the review provides a graphical depiction of how each intervention works to achieve its short, intermediate and long-term outcomes. These diagrams are also referred to as ‘logic models’ or ‘theory of change’ diagrams. The sequence of outcomes (short term to long-term) should be linked to intervention strategies or activities. | |
| Environment (external context) | ||
| Years | Years in which primary studies were published. Can be used as a proxy measure for historical/period effects. | |
| Country | Name of country of program delivery. May act as a proxy for political climate, availability of resources, social norms. | |
| Regions or areas within countries | Areas and regions within countries may be specified and may include remoteness or urbanicity indices (e.g., rural, remote/metropolitan, northern/southern). May act as a proxy for access to resources or some other measure. | |
aStem question: Are the implementers clearly identified? Dichotomous Yes/No response scale
b7 category nominal response scale: 1, 2, 3, 4, 5+, Not specified, Unclear
c4 category nominal response scale: No, Yes (articulation clear), Yes (articulation unclear), Other
Fig. 2Items in the Checklist for Implementation (Ch-IMP) that correspond with Chen's framework for program theory
Seven-category response scale used for 45 of 47 items in the checklist for implementation (Ch-IMP)
| a | No, not considered | The dimension is not formally considered in the review. |
| b | No, intended but unable | No, the review intended to address the dimension but was unable to on the basis of limited information provided in primary studies. |
| c | No, intended but not reported | No, the review intended to report on the measure of interest but no information is provided in the analysis or discussion section. |
| d | Yes, quantitative unsynthesised | Yes, descriptive information is provided on the dimension for one or more studies (e.g., in a narrative summary or table in an appendix) but the information is not synthesised across studies. |
| e | Yes, quantitative synthesised | Yes, descriptive quantitative information is synthesised across studies (e.g., percentage or range provided in a table or narratively). |
| f | Yes, linked to meta-analysis | Yes, the dimension is linked to meta-analysis; effect measures calculated. |
| g | Other | Information on the dimension is provided that is unclear, ambiguous or does not fit the above categories. The measure of interest has been considered but is homogeneous. Please comment on why this response category was selected. |
Inter-rater reliability results for 47 items in the checklist for implementation (Ch-IMP) (n = 27 reviews)
| Item | Percentage agreement | Kappa (95 % CI) | AC1 statistic (95 % CI) |
|---|---|---|---|
| Action model | |||
| Intervention and service delivery protocols | |||
| Intervention Heterogeneity | 82 | 0.74 (0.56–0.93) | 0.79 (0.62–0.96) |
| Target population | |||
| Age | 85 | 0.80 (0.63–0.98) | 0.83 (0.68–0.98) |
| Gender | 89 | 0.85 (0.68–1.00) | 0.87 (0.74–1.00) |
| Grade | 85 | 0.75 (0.57–0.92) | 0.84 (0.69–0.99) |
| Ethnicity | 82 | 0.74 (0.54–0.94) | 0.79 (0.62–0.96) |
| SES | 74 | 0.62 (0.41–0.84) | 0.71 (0.52–0.90) |
| Implementers | |||
| Implementer identified | 100 | 1.00 | |
| Qualifications | 74 | 0.59 (0.37–0.82) | 0.70 (0.50–0.89) |
| Ethnicity | 96 | 0.84 (0.53–1.00) | 0.96 (0.89–1.00) |
| Age | 96 | 0.82 (0.53–1.00) | 0.96 (0.89–1.00) |
| Gender | 96 | 0.78 (0.37–1.00) | 0.96 (0.89–1.00) |
| Socio-economic status | 100 | 1.00 | |
| Role of the evaluator | 96 | 0.90 (0.70–1.00) | 0.96 (0.88–1.00) |
| Implementing organization | |||
| Leadership | 89 | 0.46 (0.03–0.89) | 0.88 (0.76–1.00) |
| Resourcing | 100 | 1.00 | |
| Intervention development | 93 | 0.72 (0.34–1.00) | 0.92 (0.82–1.00) |
| Quality of materials | 93 | 0.79 (0.53–1.00) | 0.92 (0.82–1.00) |
| Cultural sensitivity | 93 | 0.71 (0.34–1.00) | 0.92 (0.82–1.00) |
| Training | 82 | 0.64 (0.40–0.88) | 0.80 (0.63–0.96) |
| Program improvement processes | 93 | 0.68 (0.33–1.00) | 0.92 (0.82–1.00) |
| Technical or supervisory guidance | 78 | 0.54 (0.26–0.81) | 0.76 (0.58–0.93) |
| Associate organizations and community partners | |||
| Presence/absence of partnership | 93 | 0.81 (0.48–1.00) | 0.92 (0.81–1.00) |
| Other partnership processes | 96 | 0.79 (0.45–1.00) | 0.96 (0.89–1.00) |
| Ecological context | |||
| Settings considered | 74 | 0.62 (0.42–0.82) | 0.66 (0.46–0.86) |
| # Settings | 48 | 0.37 (0.15–0.59) | 0.40 (0.18–0.62) |
| Process evaluation | |||
| Recruitment | 82 | 0.68 (0.46–0.91) | 0.80 (0.63–0.96) |
| Attrition | 74 | 0.63 (0.41–0.85) | 0.71 (0.52–0.90) |
| Minimum attrition | 74 | 0.47 (0.20–0.74) | 0.72 (0.53–0.90) |
| Reach | 93 | 0.82 (0.59–1.00) | 0.92 (0.81–1.00) |
| Minimum reach | 96 | 0.65 (0.02–1.00) | 0.96 (0.89–1.00) |
| Dose delivered | 79 | 0.70 (0.50–0.90) | 0.76 (0.58–0.93) |
| Minimum dose delivered | 85 | 0.45 (0.03–0.86) | 0.85 (0.70–0.99) |
| Dose received | 89 | 0.54 (0.10–0.98) | 0.88 (0.76–1.00) |
| Minimum dose received | 100 | 1.00 | |
| Fidelity | 78 | 0.60 (0.33–0.87) | 0.76 (0.58–0.93) |
| Minimum fidelity | 100 | 1.00 | |
| Adaptation | 82 | 0.67 (0.42–0.92) | 0.80 (0.63–0.96) |
| Minimum adaptation | 100 | 1.00 | |
| Participant engagement | 89 | 0.54 (0.10–0.98) | 0.88 (0.76–1.00) |
| Provider engagement | 93 | 0.64 (0.17–1.00) | 0.92 (0.82–1.00) |
| Co-intervention | 78 | 0.38 (0.03–0.73) | 0.76 (0.59–0.94) |
| Contamination | 74 | 0.42 (0.12–0.71) | 0.72 (0.54–0.90) |
| Change model | |||
| A priori change model | 78 | 0.65 (0.40–0.89) | 0.72 (0.52–0.92) |
| Logic diagram used | 100 | 1.00 | |
| Environment |
| Years | 74 | 0.63 (0.41–0.86) | 0.71 (0.52–0.89) |
| Country | 74 | 0.65 (0.45–0.85) | 0.70 (0.51–0.90) |
| Urbanicity | 78 | 0.67 (0.45–0.90) | 0.75 (0.57–0.93) |
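The table above reports three agreement statistics per item: raw percentage agreement, Cohen’s kappa (chance agreement estimated from each rater’s marginal proportions), and Gwet’s AC1 (chance agreement estimated from the mean proportion per category, which behaves better when ratings cluster in one category). A minimal sketch of how these statistics can be computed for two raters’ categorical codes — the function name and example codes below are illustrative, not taken from the paper:

```python
from collections import Counter

def agreement_stats(r1, r2):
    """Percent agreement, Cohen's kappa, and Gwet's AC1 for two raters.

    r1, r2: equal-length sequences of categorical codes (e.g., the
    seven-category a-g responses used by the Ch-IMP).
    """
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    q = len(cats)

    # Observed agreement: proportion of items where the raters match.
    pa = sum(a == b for a, b in zip(r1, r2)) / n

    c1, c2 = Counter(r1), Counter(r2)

    # Cohen's kappa: chance agreement from the product of each rater's
    # marginal proportions, summed over categories.
    pe_kappa = sum((c1[k] / n) * (c2[k] / n) for k in cats)
    kappa = (pa - pe_kappa) / (1 - pe_kappa) if pe_kappa < 1 else 1.0

    # Gwet's AC1: chance agreement from the mean proportion classified
    # into each category; less sensitive to skewed marginals (the
    # "kappa paradox" of high agreement but low kappa).
    pi = {k: (c1[k] + c2[k]) / (2 * n) for k in cats}
    pe_ac1 = sum(p * (1 - p) for p in pi.values()) / (q - 1) if q > 1 else 0.0
    ac1 = (pa - pe_ac1) / (1 - pe_ac1) if pe_ac1 < 1 else 1.0

    return pa, kappa, ac1
```

With perfect agreement all three statistics equal 1; when most items fall in a single response category, AC1 typically exceeds kappa even at the same percentage agreement, which is one reason for reporting both, as the table does.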
Fig. 3Rater (n = 2) scores for 12 measures in the checklist for implementation (Ch-IMP)
Reasons for inter-rater disagreement
| Information missed in extraction | |
| a. | Missed information in text |
| b. | Missed information in in-text tables or summary tables in appendices |
| c. | Missed information in a multi-dimensional measure |
| Information on target variables and processes being unclear | |
| a. | Lack of justification provided for target variables |
| b. | In some instances it was unclear whether a target variable was defined on the basis of information present or absent in primary studies |
| c. | Information provided in the review made it difficult to assign the target variable to a response category |
| d. | In the absence of intervention theories or models linking intervention strategies to process, impacts and outcomes, it was difficult to interpret some variables |
| Limitations of the tool | |
| a. | Definitions in the tool did not adequately capture the heterogeneity in the target variable |
| b. | Target variable has a two-part question which can lead to inconsistent ratings |
| c. | Response category definitions |
| d. | Definition of the term is too narrow |
| e. | Multiple indicators for a single measure |
| f. | Reviews with one primary study |
| Limitations of the review | |
| a. | Inconsistency in the presentation of the target variable in the review |
| b. | Location of information in the review not in expected sections |
| c. | Lack of sub-headings |
| d. | Tables of summary characteristics |