Byron J Powell, Cameo F Stanick, Heather M Halko, Caitlin N Dorsey, Bryan J Weiner, Melanie A Barwick, Laura J Damschroder, Michel Wensing, Luke Wolfenden, Cara C Lewis.
Abstract
BACKGROUND: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria's clarity and importance.
Year: 2017 PMID: 28974248 PMCID: PMC5627503 DOI: 10.1186/s13012-017-0649-x
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
Mean clarity and importance ratings for each criterion (n = 24)
| # | Criterion | Clarity | Importance | Quad. |
|---|---|---|---|---|
| | **Acceptable** | | | |
| 4 | Creates a low social desirability bias | 5.21 | 5.88 | III |
| 22 | Transparent | 6.75 | 6.92 | III |
| 24 | Acceptable (to staff and clients) | 7.83 | 8.50 | I |
| 25 | Tied to reimbursement | 8.00 | 5.08 | II |
| 28 | Relevant | 7.21 | 8.71 | IV |
| 30 | Offers relative advantage over ex | 7.33 | 7.54 | IV |
| 43 | Low cost | 8.67 | 8.04 | I |
| | **Compatible** | | | |
| 3 | Applicable | 7.25 | 8.25 | IV |
| 8 | Efficient | 7.79 | 8.21 | I |
| 12 | Focused | 5.92 | 6.92 | III |
| 16 | The output of routine activities | 6.58 | 7.21 | III |
| 37 | Not used for staff punishment | 7.63 | 7.63 | I |
| 40 | Non-duplicative | 7.21 | 7.50 | III |
| | **Easy** | | | |
| 9 | Offers flexible administration time | 6.88 | 6.92 | III |
| 10 | Easy to interpret | 8.88 | 8.38 | I |
| 15 | Creates low assessor burden (ease of training, scoring, administration time) | 8.50 | 7.75 | I |
| 17 | Easy to administer | 8.75 | 8.13 | I |
| 20 | Not wordy | 8.79 | 6.38 | II |
| 21 | Completed with ease | 8.75 | 7.71 | I |
| 23 | Requires no expertise | 7.46 | 4.75 | III |
| 26 | Of low complexity | 7.58 | 6.42 | III |
| 27 | Uses accessible language | 7.75 | 8.13 | I |
| 31 | Accessible by phone | 8.29 | 4.88 | II |
| 32 | Brief | 8.21 | 6.92 | II |
| 34 | Intuitive | 6.29 | 6.25 | III |
| 36 | Feasible | 7.00 | 8.25 | IV |
| 39 | Simple | 7.54 | 7.17 | III |
| 41 | Easy to use | 8.29 | 8.00 | I |
| 42 | Easy to score | 8.88 | 7.75 | I |
| 44 | One that offers automated scoring or can be scored elsewhere | 8.63 | 6.71 | II |
| 45 | Offers a compatible format to setting/user | 5.63 | 7.29 | III |
| 47 | Low burden | 7.33 | 8.21 | IV |
| | **Useful** | | | |
| 1 | Informs decision making | 8.00 | 8.71 | I |
| 2 | Fits organizational activities | 8.21 | 7.96 | I |
| 5 | Provides a cut-off score leading to an intervention or treatment plan | 7.63 | 6.96 | II |
| 6 | Connects to clinical outcomes | 8.38 | 8.83 | I |
| 7 | Important to clinical care | 7.96 | 8.92 | I |
| 11 | Produces reliable and valid results | 9.13 | 9.25 | I |
| 13 | Reveals problems/issues in process or outcomes | 6.79 | 6.67 | IV |
| 14 | Informs adherence of fidelity | 7.54 | 7.42 | III |
| 18 | Assesses organizational progress over time | 7.50 | 7.67 | IV |
| 19 | Sensitive to change | 7.25 | 7.25 | III |
| 29 | Meaningful | 6.79 | 8.71 | IV |
| 33 | Confirms efficacy of interventions | 7.13 | 7.92 | IV |
| 35 | Has a meaningful score distribution | 6.54 | 6.71 | III |
| 38 | Optimizes patient care | 7.46 | 8.83 | IV |
| 46 | Informs clinical intervention selection | 7.92 | 8.29 | I |
Fig. 3 Go-zone graph of mean clarity and importance ratings (n = 24). The ranges of the x- and y-axes reflect the mean values obtained for all 47 of the pragmatic criteria on the clarity and importance rating scales. The plot is divided into quadrants based upon the overall mean values for each rating scale: quadrant I (above the mean for both clarity and importance), quadrant II (above the mean for clarity, below the mean for importance), quadrant III (below the mean for both clarity and importance), and quadrant IV (below the mean for clarity, above the mean for importance)
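The quadrant assignment described in the Fig. 3 caption can be sketched as a small classification rule: each criterion's mean clarity and importance ratings are compared against the overall means across all 47 criteria. A minimal sketch follows; the overall means used here (7.5 for both scales) are illustrative assumptions, not the exact study values, and ties are assigned to the "above" side by assumption since the caption does not specify tie handling.

```python
# Sketch of the "go-zone" quadrant assignment from the Fig. 3 caption.
# Quadrants are defined relative to the overall mean clarity and
# importance ratings across all criteria.

def assign_quadrant(clarity, importance, mean_clarity, mean_importance):
    """Return the go-zone quadrant (I-IV) for one criterion.

    Ties with the mean count as "above" (an assumption; the caption
    only distinguishes above vs. below the mean).
    """
    if clarity >= mean_clarity and importance >= mean_importance:
        return "I"    # above the mean for both clarity and importance
    if clarity >= mean_clarity:
        return "II"   # above the mean for clarity, below for importance
    if importance >= mean_importance:
        return "IV"   # below the mean for clarity, above for importance
    return "III"      # below the mean for both

# A few (clarity, importance) pairs taken from Table 1 above; the
# overall means here are illustrative placeholders.
ratings = {
    "Low cost": (8.67, 8.04),
    "Brief": (8.21, 6.92),
    "Intuitive": (6.29, 6.25),
    "Meaningful": (6.79, 8.71),
}
mean_c, mean_i = 7.5, 7.5  # assumed overall means, for illustration

for name, (c, i) in ratings.items():
    print(f"{name}: quadrant {assign_quadrant(c, i, mean_c, mean_i)}")
```

With these illustrative means, the four sample criteria land in quadrants I, II, III, and IV respectively, matching the "Quad." column in the table above.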
Fig. 2 Mean clarity and importance ratings per cluster (n = 24)
Fig. 1 Point and cluster map of criteria demonstrating spatial relationships (n = 23). This point and cluster map reflects the product of our stakeholders’ (valid response n = 23) sorting the 47 criteria into groups that they deemed conceptually similar. Each criterion is depicted as a dot with a number that corresponds to Table 1. The distances between criteria reflect the frequency with which they were sorted together; thus, criteria that were sorted together frequently are closer together on the map. These spatial relationships are relative to the data in this study and do not reflect an absolute relationship (i.e., a 5-mm distance on this map does not reflect the same relationship as a 5-mm distance on a map from a different dataset) [15]. Items 19 (“sensitive to change”) and 7 (“important to clinical care”) were originally assigned to the “compatible” cluster, but were moved to the “useful” cluster because the investigative team believed they represented a better conceptual fit. The gray dotted lines within the “useful” cluster and between the “useful” and “compatible” clusters represent how the clusters would have been represented if we had not made this change