Jennifer Petkovic1,2, Vivian Welch3,4, Maria Helena Jacob3, Manosila Yoganathan3, Ana Patricia Ayala5, Heather Cunningham5, Peter Tugwell6,7,8.
Abstract
BACKGROUND: Systematic reviews are important for decision makers. They offer many potential benefits but are often written in technical language, are too long, and lack the contextual details that decision makers need, which makes them hard to use for decision-making. Many organizations develop and disseminate derivative products from systematic reviews, such as evidence summaries, for different populations or subsets of decision makers. This systematic review aimed to (1) assess the effectiveness of evidence summaries on policymakers' use of the evidence and (2) identify the most effective summary components for increasing policymakers' use of the evidence. We present an overview of the available evidence on systematic review derivative products.
Keywords: Evidence summaries; Policymakers; Systematic reviews
Year: 2016 PMID: 27938409 PMCID: PMC5148903 DOI: 10.1186/s13012-016-0530-3
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
Fig. 1 PRISMA flow diagram
Characteristics of included studies
| Study ID | Methods | Participants | Intervention description | Outcomes |
|---|---|---|---|---|
| Brownson 2011 | RCT | Legislative staff members (e.g., committee staff), state legislators, and executive branch administrators (e.g., division directors, program heads) | 4 different policy briefs on mammography screening to reduce breast cancer mortality | Self-reported understandability |
| Carrasco-Labra 2016 | RCT | Health care professionals, guideline developers, and researchers who use and/or develop systematic reviews | An alternative summary of findings table format was compared against the current format | Self-reported understanding assessed with 7 multiple-choice questions (5 response options). Self-reported accessibility of information assessed with 3 self-reported domains (how easy it is to find critical information, how easy it is to understand the information, and whether the information is presented in a useful way for decision-making). Satisfaction measured by asking about satisfaction with the different formatting elements. Preference assessed using a 7-point Likert scale for the 2 tables |
| Dobbins 2009 | RCT | Front line staff, managers, directors, coordinators, and others from public health departments in Canada (those directly responsible for making program decisions related to healthy body weight promotion in children) | 1st group (control) | Self-reported global evidence-informed decision-making (participants were asked to report the extent to which research evidence was considered in a recent program planning decision within the previous 12 months); public health policies and programs related to healthy body weight promotion, measured by the sum of actual strategies, policies, and/or interventions for healthy body weight promotion in children being implemented by the department |
| Masset 2013 | RCT | Individuals who normally read policy briefs related to international development, e.g., those employed in academia, NGOs, and international aid organizations; some reported influence on policy decisions and were therefore considered policymakers | 3 versions of a policy brief summarizing the results of a systematic review | Beliefs about the effectiveness of, and strength of the evidence for, the interventions included in the briefs |
| Opiyo 2013 | RCT | Panel of healthcare professionals with roles in neonatal and pediatric policy and care in Kenya | 3 intervention packages | Self-reported understanding of the summary content, measured by the proportion of correct responses to clinical questions relevant to the effects of the intervention |
| Vandvik 2012 | RCT | All panelists for the American College of Chest Physicians antithrombotic therapy and prevention of thrombosis guidelines | 2 formats of the evidence profile that differed by 4 features | User preferences for specific formatting options and for the overall format of the table, assessed using a 7-point Likert scale |
Characteristics of ongoing studies
| Study ID | Methods | Participants | Intervention description | Outcomes |
|---|---|---|---|---|
| Wilson 2011 | RCT | Decision makers (programs, services, advocacy) from community-based HIV/AIDS organizations in Canada affiliated with the Canadian AIDS Society and from relevant provincial HIV/AIDS networks | At baseline, all participants will receive the “self-serve” evidence service (includes a listing of relevant systematic reviews, links to PubMed records, and worksheets to help find and use research evidence). During the intervention, one group will receive the “full-serve” version of SHARE (Synthesized HIV/AIDS Research Evidence), which includes access to a database of HIV systematic reviews, emailed updates, access to user-friendly summaries, links to scientific abstracts, peer relevance assessments (indicating how useful the information is), an interface for comments in the records, links to the full text, and access to worksheets to help find and use evidence. The control group will continue to receive the self-serve evidence service. During the final 2-month period, both groups will receive the full-serve version of SHARE | The primary outcome measure will be the mean number of logins/month/organization. The secondary outcome will be intention to use research evidence (measured with a survey administered to one key decision maker from each organization) |
| Wilson 2015 | CBA (controlled before-after) | Clinical Commissioning Groups (CCGs): governing body and executive members, clinical leads, and any other individuals deemed to be involved in commissioning decision-making processes | 3 arms: (1) consulting plus responsive push of tailored evidence (access to an evidence briefing service provided by the Centre for Reviews and Dissemination (CRD) plus advice and support via phone, email, and face-to-face; monthly check-in to discuss further evidence needs and issues around use of evidence; alerting the team to new SRs and other synthesized evidence relevant to priorities); (2) consulting plus an unsolicited push of non-tailored evidence (access to intervention 1 without tailored evidence briefings, i.e., evidence briefings without contextual information); or (3) “standard” service (CRD will disseminate evidence briefings generated in intervention 1 and any other non-tailored briefings produced by CRD over the intervention period) | Primary outcome: change at 12 months from baseline in a CCG's ability to acquire, assess, adapt, and apply research evidence to support decision-making. Secondary outcomes will measure individuals’ intentions to use research evidence in decision-making |
Evidence summary formats and results
| Study | Type of evidence summary | Format of summary | Method of delivery | Components | Outcomes |
|---|---|---|---|---|---|
| Brownson 2011 | Policy brief | Printed leaflet/booklet, PDF version for those who prefer online | Mailed, follow-up telephone call, emailed if preferred | Front cover varied according to story- versus data-driven approach, color printed (included data or story); 3rd and 4th pages were the same across all 4 briefs; data-driven briefs contained 2 statements with percentages related to mammography screening, story-driven briefs had 2 personal stories related to mammography; all briefs had data about uninsured women, women not up to date on mammograms, breast cancer mortality compared to other causes, benefits of mammograms, and recommendations | The briefs were considered understandable and credible (mean ratings ranged from 4.3 to 4.5 on a 5.0 Likert scale). Likelihood of using the brief differed by study condition for staff members |
| Carrasco-Labra 2016 | Summary of findings table | Table | Emailed link to online survey | The new format of the summary of findings table moved the number of participants and studies to the outcomes column; quality of evidence was presented with the main reasons for downgrading; “footnotes” was changed to “explanations”; baseline risk and corresponding risk were expressed as percentages; a column presented the absolute risk reduction (risk difference) or mean difference; no comments column; addition of a “what happens” column; no description of the GRADE evidence definitions | Participants with the new summary of findings table format had a higher proportion of correct answers for almost all questions. The new format was more accessible: information about the effects was easier to understand (MD 0.4, SE 0.19), and results were displayed in a way that was more helpful for decision-making (MD 0.5, SE 0.18); overall, participants preferred the new format (MD 2.8, SD 1.6) |
| Dobbins 2009 | Evidence summaries | Text | Targeted, tailored emails | Short summary including key findings and recommendations | The post-intervention change in Global Evidence-Informed Decision-making was 0.74 (95% CI 0.26–1.22) for the group receiving only access to healthevidence.ca; –0.42 (–1.10, 0.26) for the group receiving tailored, targeted emails; and –0.09 (–0.78, 0.60) for the knowledge broker group |
| Masset 2013 | Policy brief | Text, colored leaflet | | Introduction to the problem, description of methodology, conclusions, and policy implications; 2 versions had expert commentary | Respondents with stronger beliefs about the agricultural interventions at baseline rated the policy brief more favourably |
| Opiyo 2013 | Summary of findings table, graded entry summary of evidence | Text, tables | | Summary of findings table | No differences between groups in the odds of correct responses to key clinical questions |
| Vandvik 2012 | Summary of findings table | Table | | Tables presented outcomes, number of participants, summary of findings, and quality assessment using GRADE | Participants preferred presentation of study event rates over no study event rates, absolute risk differences over absolute risks, and additional information in table cells over footnotes. Panelists presented with time frame information in the tables, rather than only in footnotes, were more likely to answer questions about time frame correctly, and those presented with risk differences, rather than absolute risks, were more likely to interpret confidence intervals for absolute effects correctly. Information was considered easy to find and comprehend, and helpful in making recommendations, regardless of table format |
Fig. 2 Risk of bias
Summary of findings table
| Evidence summaries to increase policymakers’ use of systematic review evidence | | | |
| Patient or population: policymakers and health system managers | | | |
| Outcomes | Impact | No. of participants (studies) | Quality of the evidence (GRADE) |
|---|---|---|---|
| Use of systematic review evidence in decision-making | Little to no difference in effect on evidence-informed decision-making when compared to access to a knowledge broker or online registry of research | 399 (2) | ⊕⊕⊕⊝ |
| Understanding, knowledge and/or beliefs | One study found little to no effect on understanding of information when provided in different summary of findings table formats | 676 (4) | ⊕⊕⊕⊝ |
| Perceived credibility of the summaries | Little to no difference in perceived credibility for different versions of the policy brief (data-driven versus story-driven, local- versus state-level data) | 291 (1) | ⊕⊕⊕⊝ |
| Perceived usefulness and usability of systematic review summaries | The graded entry format was rated higher than the systematic review alone, and there was little to no difference between the ratings for the summary of findings table and the systematic review alone | 443 (3) | ⊕⊕⊕⊝ |
| Perceived understandability of the summaries | All formats of the policy brief were reported as easy to understand | 356 (2) | ⊕⊕⊕⊝ |
| Perceived desirability of the summaries | Alternate versions of the summary of findings table were preferred | 378 (2) | ⊕⊕⊕⊕ |
GRADE Working Group grades of evidence. High quality: further research is very unlikely to change our confidence in the estimate of effect. Moderate quality: further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate. Low quality: further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate. Very low quality: we are very uncertain about the estimate
a Unclear risk of bias