Steve R Makkar, Anna Williamson, Catherine D'Este, Sally Redman.
Abstract
BACKGROUND: Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers' research use within discrete policy documents and a scoring tool that quantifies the extent of policymakers' research use based on the interview transcript and analysis of the policy document itself. We aimed to conduct a preliminary investigation of the usability, sensitivity, and reliability of the scoring tool in measuring research use by policymakers.
Year: 2017 PMID: 29258601 PMCID: PMC5735943 DOI: 10.1186/s13012-017-0676-7
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
Means, standard deviations, medians, minimum, and maximum scores on each domain averaged across expert raters
| Domain | M | SD | Median | Min | Max |
|---|---|---|---|---|---|
| Searching | 4.27 | 2.31 | 4.25 | 0.00 | 9.00 |
| Research obtained | 4.54 | 2.34 | 4.36 | 0.00 | 9.00 |
| Relevance appraisal | 3.17 | 2.05 | 2.06 | 0.00 | 7.76 |
| Quality appraisal | 2.50 | 2.31 | 2.00 | 0.00 | 9.00 |
| Generating new research | 2.50 | 3.23 | 0.18 | 0.00 | 9.00 |
| Interacting with researchers | 2.51 | 3.11 | 0.49 | 0.00 | 9.00 |
| Conceptual research use | 3.81 | 2.82 | 3.03 | 0.00 | 9.00 |
| Instrumental research use | 4.12 | 3.14 | 4.75 | 0.00 | 9.00 |
| Tactical research use | 5.79 | 3.51 | 6.09 | 0.00 | 9.00 |
| Imposed research use | 3.60 | 2.95 | 5.07 | 0.00 | 9.00 |
| Total | 3.68 | 2.98 | 3.48 | 0.00 | 9.00 |
Single-measure reliability coefficients for independent coders, experts, and between independent coders and experts
| Aim | Searching for research | Research obtained | Relevance appraisal | Quality appraisal | Generating new research | Interacting with researchers | Conceptual research use | Instrumental research use | Tactical research use | Imposed research use |
|---|---|---|---|---|---|---|---|---|---|---|
| (A) Between the two independent coders | 0.74 | 0.78 | 0.58 | 0.78 | 0.86 | 0.73 | 0.62 | 0.62 | 0.74 | 0.76 |
| (B) Between the nine experts | 0.57 | 0.63 | 0.25 | 0.50 | 0.41 | 0.40 | 0.35 | 0.25 | 0.47 | 0.62 |
| (C) Between the independent coders and the average of the experts | 0.75 | 0.69 | 0.48 | 0.76 | 0.73 | 0.64 | 0.55 | 0.57 | 0.70 | 0.76 |
Average-measure reliability coefficients for independent coders and experts
| Aim | Searching for research | Research obtained | Relevance appraisal | Quality appraisal | Generating new research | Interacting with researchers | Conceptual research use | Instrumental research use | Tactical research use | Imposed research use |
|---|---|---|---|---|---|---|---|---|---|---|
| (A) Between the two independent coders | 0.85 | 0.88 | 0.74 | 0.88 | 0.93 | 0.84 | 0.76 | 0.77 | 0.85 | 0.87 |
| (B) Between the nine experts | 0.77 | 0.83 | 0.46 | 0.74 | 0.66 | 0.65 | 0.58 | 0.49 | 0.71 | 0.89 |
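As a sanity check (not part of the original article), the single- and average-measure coefficients reported for the two independent coders are consistent with the standard Spearman-Brown prophecy formula, which projects the reliability of a single rater to the reliability of the mean of k raters. The minimal sketch below verifies this for the two-coder rows only; the values are copied from the tables above, and agreement is checked to within two-decimal rounding:

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Reliability of the mean of k raters, given single-rater
    reliability r_single (Spearman-Brown prophecy formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# Single- and average-measure coefficients for the two independent
# coders, in the domain order of the tables above.
single = [0.74, 0.78, 0.58, 0.78, 0.86, 0.73, 0.62, 0.62, 0.74, 0.76]
average = [0.85, 0.88, 0.74, 0.88, 0.93, 0.84, 0.76, 0.77, 0.85, 0.87]

for r1, r2 in zip(single, average):
    predicted = spearman_brown(r1, k=2)
    # Each reported average-measure value matches the projection
    # within rounding of the two-decimal single-measure inputs.
    assert abs(predicted - r2) <= 0.01
```

Note that the nine-expert rows do not follow a simple k = 9 projection of the reported single-measure values, so they are not checked here; the article's ICC model for that panel is not recoverable from this record.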