| Literature DB >> 24669765 |
Thomas J Waltz, Byron J Powell, Matthew J Chinman, Jeffrey L Smith, Monica M Matthieu, Enola K Proctor, Laura J Damschroder, JoAnn E Kirchner.
Abstract
BACKGROUND: Identifying feasible and effective implementation strategies that are contextually appropriate is a challenge for researchers and implementers, exacerbated by the lack of conceptual clarity surrounding terms and definitions for implementation strategies, as well as a literature that provides imperfect guidance regarding how one might select strategies for a given healthcare quality improvement effort. In this study, we will engage an Expert Panel comprising implementation scientists and mental health clinical managers to: establish consensus on a common nomenclature for implementation strategy terms, definitions and categories; and develop recommendations to enhance the match between implementation strategies selected to facilitate the use of evidence-based programs and the context of certain service settings, in this case the U.S. Department of Veterans Affairs (VA) mental health services.
METHODS:
Year: 2014 PMID: 24669765 PMCID: PMC3987065 DOI: 10.1186/1748-5908-9-39
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
Overview of the four stages of the ERIC process

| Stage (method) | Input | Process | Output |
| --- | --- | --- | --- |
| Stage 1: Modified Delphi | Refined compilation of discrete implementation strategies | 2 feedback rounds and a consensus meeting | • Expert consensus on key concepts (definitions & ratings) |
| Stage 2: Concept Mapping | Post-consensus compilation of discrete implementation strategies | Sort the strategies into subcategories; rate each strategy in terms of importance and feasibility | • Weighted and unweighted cluster maps<br>• Ladder maps<br>• Go-zone graphs<br>• Importance and feasibility ratings for each strategy |
| Stage 3: Menu-Based Choice | • Discrete implementation strategies<br>• Practice change narrative<br>• Narratives of contextual variations of practice change scenarios | Essentialness ratings are obtained for each strategy for three temporal frames given each scenario | For each practice change:<br>• Relative Essentialness Estimates for each strategy given each scenario<br>• A rank list of the most common strategy recommendation combinations<br>• A summary of strategies that may serve as complements and substitutes for each other |
| Stage 4: Facilitated Consensus Meeting | • Menu-Based Choice data summaries for each scenario<br>• Importance and feasibility ratings from the concept mapping task | Facilitated discussion; live polling of consensus reached during discussion | For each practice change:<br>• Expert consensus regarding which discrete implementation strategies are of high importance<br>• Context-specific recommendations |
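The Stage 2 go-zone graphs plot each strategy's mean importance rating against its mean feasibility rating and highlight the quadrant that is high on both. As a minimal sketch of that classification (assuming a mean split on each axis, the usual go-zone convention; the strategy names and ratings below are invented for illustration):

```python
from statistics import mean

def go_zone(ratings):
    """Classify strategies into go-zone quadrants from concept-mapping data.

    ratings: dict mapping strategy name -> (importance, feasibility),
    e.g. mean expert ratings on a 1-5 scale. A strategy falls in the
    'go zone' when both ratings exceed the respective grand means.
    """
    imp_mean = mean(i for i, _ in ratings.values())
    fea_mean = mean(f for _, f in ratings.values())
    zones = {}
    for name, (imp, fea) in ratings.items():
        if imp > imp_mean and fea > fea_mean:
            zones[name] = "go-zone (high importance, high feasibility)"
        elif imp > imp_mean:
            zones[name] = "high importance, low feasibility"
        elif fea > fea_mean:
            zones[name] = "low importance, high feasibility"
        else:
            zones[name] = "low importance, low feasibility"
    return zones
```

In practice the split point (mean vs. median, per-cluster vs. overall) is a design choice of the concept-mapping analysis; this sketch only shows the quadrant logic.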
Figure 1. Overview of the voting process in the final round of the modified Delphi task. Note: In the third and final round of the modified Delphi task, expert panelists will vote on all strategies for which concerns were raised about the core definition in the first two online survey rounds. For each strategy, the original and proposed alternate definitions will be presented in an approval poll in which participants can vote to approve every definition alternative they find acceptable. In the first round of voting, if one definition receives a supermajority of votes (≥60%) and more votes than all others, that definition will be declared the winner and the poll will move to the next term. If there is no consensus, a five-minute discussion period is opened. When the discussion concludes, a run-off poll is conducted to determine the most acceptable definition alternative.
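The winner-determination rule in this voting protocol can be stated compactly in code. The sketch below is an illustration of the rule as described in the caption (function and variable names are my own, not from the study materials):

```python
def approval_winner(approvals, n_voters, threshold=0.60):
    """Decide whether any definition wins an approval poll outright.

    approvals: dict mapping definition label -> number of approval votes
               (approval voting, so votes need not sum to n_voters).
    A definition wins when it reaches the supermajority threshold
    (>= 60% of voters) AND strictly outpolls every alternative;
    otherwise the poll proceeds to discussion and a run-off.
    """
    best = max(approvals, key=approvals.get)
    best_votes = approvals[best]
    has_supermajority = best_votes / n_voters >= threshold
    is_unique_top = all(v < best_votes for k, v in approvals.items() if k != best)
    if has_supermajority and is_unique_top:
        return best   # declared the winner; move to the next term
    return None       # no consensus: open discussion, then run-off
```

For example, with 20 voters, 13 approvals for the original definition (65%) against 7 for an alternate yields an outright winner, while a 10–10 tie or an 11-vote (55%) plurality does not.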
Figure 2. Screenshot of the MBC task worksheets. Note: Each practice change will have an Excel workbook with a separate worksheet for each of three scenarios (i.e., Scenario A, Scenario B, Scenario C), with each practice context having different barriers and facilitators. Several features support multifaceted decision-making while completing the task. First, all of the discrete implementation strategies developed in ERIC Stage 1 will be listed in the first column, sorted into categories based on ERIC Stage 2 Concept Mapping data. Further, for each strategy, a comment box containing the definition of the term appears when the participant moves the cursor over the strategy's cell. In Figure 2, the 'Conduct local consensus discussions' (cell A15) definition box has been made visible. Second, participant response options are provided in a drop-down menu format to prevent data-entry errors. In Figure 2, cell H6 has been selected so the drop-down menu is visible. Third, participants will be encouraged to complete their recommendations for Scenarios A through C sequentially. After the recommendations have been made for Scenario A, they will remain viewable on the worksheet for Scenario B, and the recommendations for Scenarios A and B remain viewable on the Scenario C worksheet, as seen in Figure 2. This supports participants in efficiently making recommendations for the current context (Scenario C) while comparing and contrasting them with those provided for Scenarios A and B, where different combinations of barriers and facilitators are present. Finally, different hues in the response columns visually separate the recommendations for the three contexts, with 'Pre-implementation' having the lightest shade and 'Sustainment' the darkest.
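The worksheet mechanics described above (one sheet per scenario, drop-down-restricted entries per strategy and temporal phase, and earlier scenarios remaining viewable) can be modeled with plain data structures. This is a library-free sketch of that logic, not the study's actual workbook: the response options are invented placeholders, since the caption does not give the rating scale, and only 'Conduct local consensus discussions' is a strategy name taken from the figure.

```python
from collections import OrderedDict

# Column groups per scenario, as named in the caption's shading description
# (the middle phase label is an assumption).
PHASES = ["Pre-implementation", "Implementation", "Sustainment"]
SCENARIOS = ["Scenario A", "Scenario B", "Scenario C"]

# Illustrative drop-down options; the actual menu is not given in the caption.
RESPONSE_OPTIONS = {"", "Essential", "Not essential"}

def new_workbook(strategies):
    """One worksheet per scenario; each holds one cell per (strategy, phase)."""
    return OrderedDict(
        (s, {strategy: {phase: "" for phase in PHASES} for strategy in strategies})
        for s in SCENARIOS
    )

def record(workbook, scenario, strategy, phase, value):
    """Mimic the drop-down validation: reject values outside the menu."""
    if value not in RESPONSE_OPTIONS:
        raise ValueError(f"{value!r} is not a drop-down option")
    workbook[scenario][strategy][phase] = value

def visible_recommendations(workbook, scenario):
    """While working on a scenario, all earlier scenarios remain viewable."""
    idx = SCENARIOS.index(scenario)
    return {s: workbook[s] for s in SCENARIOS[: idx + 1]}
```

For example, after recording an entry for Scenario A, `visible_recommendations(wb, "Scenario B")` returns Scenarios A and B but not C, mirroring the sequential A-through-C completion the caption describes. A real implementation would add the drop-downs and definition comment boxes via a spreadsheet library's data-validation and comment features.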