Cara C Lewis, Sarah Fischer, Bryan J Weiner, Cameo Stanick, Mimi Kim, Ruben G Martinez.
Abstract
BACKGROUND: High-quality measurement is critical to advancing knowledge in any field. New fields, such as implementation science, are often beset with measurement gaps and poor quality instruments, a weakness that can be more easily addressed in light of systematic review findings. Although several reviews of quantitative instruments used in implementation science have been published, no studies have focused on instruments that measure implementation outcomes. Proctor and colleagues established a core set of implementation outcomes including: acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, sustainability (Adm Policy Ment Health Ment Health Serv Res 36:24-34, 2009). The Society for Implementation Research Collaboration (SIRC) Instrument Review Project employed an enhanced systematic review methodology (Implement Sci 2: 2015) to identify quantitative instruments of implementation outcomes relevant to mental or behavioral health settings.Entities:
Year: 2015 PMID: 26537706 PMCID: PMC4634818 DOI: 10.1186/s13012-015-0342-x
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
Fig. 1 Rating process methodology
Literature search strategies
| Strategy | Definition |
|---|---|
| 1) Search instrument by name | Full instrument name entered into each search engine. |
| 2) Search instrument by acronym | Acronym(s) entered into each search engine. |
| 3) Search by source article identification | Source article name/reference entered into each search engine. |
| 4) Search by source article “cited by” feature | Source article entered into Google Scholar; its “cited by” feature was then used. |
| 5) Search for grey literature | Instrument searched in Google to identify grey literature. |
Fig. 2 PRISMA enhanced systematic review flowchart for all constructs
Fig. 3 Acceptability PRISMA enhanced systematic review flowchart
Fig. 4 Adoption PRISMA enhanced systematic review flowchart
Fig. 5 Appropriateness PRISMA enhanced systematic review flowchart
Fig. 6 Cost PRISMA enhanced systematic review flowchart
Fig. 7 Feasibility PRISMA enhanced systematic review flowchart
Fig. 8 Fidelity PRISMA enhanced systematic review flowchart
Fig. 9 Penetration PRISMA enhanced systematic review flowchart
Fig. 10 Sustainability PRISMA enhanced systematic review flowchart
Number and percentage of instruments with a rating of 1 or more for each construct
| Construct name | Internal consistency (#) | Internal consistency (%) | Structural validity (#) | Structural validity (%) | Predictive validity (#) | Predictive validity (%) | Norms (#) | Norms (%) | Responsiveness (#) | Responsiveness (%) | Usability (#) | Usability (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Acceptability | 36 | 72.0 | 13 | 26.0 | 11 | 22.0 | 46 | 92.0 | 1 | 2.00 | 50 | 100.0 |
| Adoption | 8 | 42.1 | 7 | 36.8 | 5 | 26.3 | 10 | 52.6 | 0 | 0.00 | 19 | 100.0 |
| Appropriateness | 2 | 28.6 | 2 | 28.6 | 1 | 14.3 | 3 | 42.9 | 1 | 14.3 | 7 | 100.0 |
| Cost | 0 | 0.00 | 0 | 0.00 | 0 | 0.00 | 6 | 75.0 | 0 | 0.00 | 6 | 75.0 |
| Feasibility | 1 | 12.5 | 1 | 12.5 | 0 | 0.00 | 4 | 50.0 | 0 | 0.00 | 8 | 100.0 |
| Penetration | 1 | 25.0 | 1 | 25.0 | 1 | 25.0 | 4 | 100.0 | 1 | 25.0 | 4 | 100.0 |
| Sustainability | 3 | 37.5 | 3 | 37.5 | 1 | 12.5 | 2 | 25.0 | 1 | 12.5 | 8 | 100.0 |
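The table above reports, per construct and rating criterion, how many instruments scored 1 or more and what share of that construct's instrument pool this represents. A minimal sketch of that tally, using hypothetical ratings (the review's actual per-instrument scores are not reproduced in this record):

```python
# Hypothetical per-instrument ratings for one construct on a 0-4 scale,
# as in the evidence-based assessment (EBA) criteria. Real values come
# from the SIRC Instrument Review Project, not from this sketch.
ratings = {
    "internal_consistency": [4, 3, 0, 2],
    "norms": [3, 3, 1, 4],
}

def rated_at_least_one(scores):
    """Return (count, percentage) of instruments rated 1 or more."""
    n = sum(1 for s in scores if s >= 1)
    return n, round(100.0 * n / len(scores), 1)

for criterion, scores in ratings.items():
    count, pct = rated_at_least_one(scores)
    print(f"{criterion}: {count} ({pct}%)")
```

The percentages are taken relative to the number of instruments identified for that construct, which is why the "Usability" column, rated for nearly every instrument, sits at or near 100 %.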
Summary statistics of all instrument ratings, including scores of “0”
| Construct name | Internal consistency (M) | Internal consistency (SD) | Structural validity (M) | Structural validity (SD) | Predictive validity (M) | Predictive validity (SD) | Norms (M) | Norms (SD) | Responsiveness (M) | Responsiveness (SD) | Usability (M) | Usability (SD) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Acceptability | 2.66 | 1.77 | 0.90 | 1.57 | 0.51 | 1.14 | 2.88 | 1.32 | 0.08 | 0.57 | 3.30 | 0.51 |
| Adoption | 1.47 | 1.90 | 0.92 | 1.42 | 0.79 | 1.37 | 1.95 | 2.01 | 0.00 | 0.00 | 2.84 | 0.60 |
| Appropriateness | 1.00 | 1.73 | 0.29 | 0.49 | 0.14 | 0.19 | 1.29 | 1.70 | 0.57 | 1.51 | 3.00 | 0.58 |
| Cost | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 2.63 | 1.92 | 0.00 | 0.00 | 2.63 | 1.77 |
| Feasibility | 0.38 | 1.06 | 0.50 | 1.41 | 0.00 | 0.00 | 1.25 | 1.39 | 0.00 | 0.00 | 3.38 | 0.52 |
| Penetration | 1.00 | 2.00 | 1.00 | 2.00 | 0.75 | 1.50 | 3.25 | 0.96 | 0.38 | 0.75 | 3.75 | 0.50 |
| Sustainability | 1.25 | 1.75 | 0.88 | 1.46 | 0.13 | 0.35 | 1.00 | 1.85 | 0.13 | 0.35 | 3.00 | 0.53 |
Summary statistics of all instrument ratings, non-zero ratings only
| Construct name | Internal consistency (M) | Internal consistency (SD) | Structural validity (M) | Structural validity (SD) | Predictive validity (M) | Predictive validity (SD) | Norms (M) | Norms (SD) | Responsiveness (M) | Responsiveness (SD) | Usability (M) | Usability (SD) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Acceptability | 3.71 | 0.70 | 3.46 | 0.66 | 2.32 | 1.31 | 3.13 | 1.05 | N/A | N/A | 3.30 | 0.51 |
| Adoption | 3.50 | 1.07 | 2.50 | 1.90 | 3.00 | 0.35 | 3.70 | 0.95 | N/A | N/A | 2.71 | 0.47 |
| Appropriateness | 3.50 | 0.71 | 0.50 | 0.00 | N/A | N/A | 3.00 | 1.00 | N/A | N/A | 3.00 | 0.58 |
| Cost | N/A | N/A | N/A | N/A | N/A | N/A | 3.50 | 1.22 | N/A | N/A | 3.50 | 0.84 |
| Feasibility | N/A | N/A | N/A | N/A | N/A | N/A | 2.50 | 0.58 | N/A | N/A | 3.38 | 0.52 |
| Penetration | N/A | N/A | N/A | N/A | N/A | N/A | 3.25 | 0.96 | N/A | N/A | 3.75 | 0.50 |
| Sustainability | 3.33 | 0.58 | 2.33 | 1.53 | N/A | N/A | 4.00 | 0.00 | N/A | N/A | 3.00 | 0.53 |
N/A indicates that the given category had no non-zero scores, or only one
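The two summary tables differ only in whether zero ratings enter the mean and SD; the N/A cells mark criteria with fewer than two non-zero scores, where an SD cannot be computed. A minimal sketch of both summaries, again over hypothetical ratings:

```python
import statistics

def summarize(scores):
    """Return ((mean, sd) over all ratings, zeros included) and
    ((mean, sd) over non-zero ratings only), or None for the second
    pair when fewer than two non-zero scores exist (the tables'
    N/A case)."""
    overall = (statistics.mean(scores),
               statistics.stdev(scores) if len(scores) > 1 else 0.0)
    nonzero = [s for s in scores if s != 0]
    if len(nonzero) < 2:
        return overall, None  # reported as N/A
    return overall, (statistics.mean(nonzero), statistics.stdev(nonzero))

# Hypothetical criterion ratings for four instruments of one construct.
overall, nonzero_only = summarize([4, 3, 0, 2])
```

Comparing the two outputs shows why the non-zero-only means run higher: dropping the zeros removes instruments with no evidence for a criterion, so the remaining scores reflect only instruments for which that psychometric property was actually reported.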
Fig. 11 Adoption head-to-head comparison graph
Fig. 12 Packet size and total EBA rating score