Literature DB >> 36071877

Evidence synthesis summary formats for clinical guideline development group members: a mixed-methods systematic review protocol.

Melissa K Sharp1, Barrie Tyner2, Dayang Anis Binti Awang Baki3, Cormac Farrell2, Declan Devane4,5, Kamal R Mahtani6, Susan M Smith1, Michelle O'Neill2, Máirín Ryan2,7, Barbara Clyne1,2.   

Abstract

Introduction: Evidence syntheses, often in the form of systematic reviews, are essential for clinical guideline development and for informing changes to health policies. However, clinical guideline development groups (CGDGs) are multidisciplinary, and participants such as policymakers, healthcare professionals, and patient representatives can face obstacles when trying to understand and use evidence synthesis findings. Summary formats to communicate the results of evidence syntheses have become increasingly common, but it is currently unclear which format is most effective for different stakeholders. This mixed-methods systematic review (MMSR) evaluates the effectiveness and acceptability of different evidence synthesis summary formats for CGDG members.
Methods: This protocol follows guidance from the Joanna Briggs Institute on MMSRs and is reported according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guideline. A comprehensive search of six databases will be performed with no language restrictions. Primary outcomes are those relating to the effectiveness of, preferences for, and attitudes towards the different summary formats. We will include qualitative research and randomised controlled trials. Two reviewers will perform title, abstract, and full-text screening. Independent double-extraction of study characteristics and critical appraisal items will be undertaken using a standardised form. We will use a convergent segregated approach to analyse quantitative and qualitative data separately; results will then be integrated.
Discussion: The results of this systematic review will provide an overview of the effectiveness and acceptability of different summary formats for evidence synthesis findings. These findings can be helpful for those in, or communicating to, guideline development groups. The results can also inform the development and pilot-testing of summary formats for evidence summaries.
Copyright: © 2022 Sharp MK et al.

Keywords:  communication; evidence summaries; mixed-methods systematic review; presentation of findings; summary of findings table

Year:  2022        PMID: 36071877      PMCID: PMC9433911          DOI: 10.12688/hrbopenres.13325.2

Source DB:  PubMed          Journal:  HRB Open Res        ISSN: 2515-4826


Introduction

Clinical guidelines support decision making to improve patient outcomes and quality of care in a cost-effective manner. The development of a clinical guideline involves a rigorous synthesis of the best available evidence on a specific clinical topic. It may involve formal consensus methods with a range of multidisciplinary stakeholders. Guideline development groups comprise a range of decision makers, often including healthcare professionals, methodologists, health policymakers, clinicians, and patient representatives, all of whom have varying levels of expertise in evidence synthesis methods. This complicates the consensus process, as stakeholders may prioritise and understand the findings of evidence syntheses, such as systematic reviews, differently. While the methods and recognition of the importance of systematic reviews have advanced in recent decades, there are still barriers to their creation and use. A meta-analysis of nearly 200 systematic reviews registered on the international Prospective Register of Systematic Reviews (PROSPERO) found that the average systematic review takes 67.3 weeks from registration to publication, involves an average of five authors, and requires the full-text screening of 63 papers (range: 0–4385). The number of academic papers and systematic reviews published has increased rapidly in recent decades, accelerating further during the COVID-19 (coronavirus disease) pandemic. The expanding evidence base, and the acceptance of trade-offs in validity for time-sensitive matters, have resulted in the growing popularity of other evidence synthesis methods, such as rapid reviews. This increase in different types of evidence synthesis methods further complicates matters for guideline development groups, who may interpret different types of systematic reviews differently depending on their familiarity with particular approaches.
For those using different types of evidence synthesis to inform clinical guideline development and health policy, the number of included studies, length, and technical nature of evidence syntheses can make it difficult to find answers about the effectiveness of healthcare interventions. Previous work has highlighted that decision makers understand evidence summaries more easily than complete systematic reviews. These summaries come in a variety of formats, such as policy briefs, one-page reports, abstracts, summary of findings tables, plain language summaries, visual abstracts or infographics, podcasts, and more. While formatting may vary, decision makers have expressed several key preferences, such as succinct summaries highlighting contextual factors like local applicability and costs. Succinctness should be inherent in an evidence summary, but how this distilled information is formatted and presented affects the interpretation and use of systematic reviews. It is currently unclear which evidence summary format is most helpful for decision making for different guideline development group stakeholders. For example, Cochrane recommends a 'summary of findings' table, but testing with users familiar with the Cochrane Library and evidence-based practices raised concerns around the comprehension and presentation of results and the balance between precision and simplicity. Others have tested the presentation of information in different formats, such as an abstract, plain-language summary, podcast, or podcast transcription, with no clear answer regarding which format was best suited to which stakeholder and resulted in the best understanding. Similarly, infographics, plain-language summaries, and traditional abstracts were found to be equally effective in transmitting knowledge to healthcare providers; however, there were differences in measures of acceptability (i.e., user-friendliness and reading experience).
To better support clinical guideline development groups and decision makers, it is important to identify which format works best for which stakeholder. Previous reviews have focused on identifying barriers and facilitators to use, or have been based solely on summary of findings tables. As impacts on decision making and preferences for formats may be evaluated through different study designs, a comprehensive synthesis of the evidence is needed beyond a typical single-method systematic review. Mixed methods systematic reviews (MMSRs) can more easily identify discrepancies within the available evidence, pinpoint how quantitative or qualitative research has focused on particular areas of interest, and offer a deeper understanding of findings. An MMSR is especially useful for this project as it brings together findings on effectiveness and experience, making the findings more useful for decision makers. Guideline developers need to weigh diverse considerations in their work, such as feasibility, priority, cost-effectiveness, equity, acceptability, and patient values and preferences. An MMSR likewise allows us to consider and integrate data from a variety of different questions and synthesise information in a single project.

Objectives

The aim of this mixed methods systematic review is to evaluate the effectiveness of, preferences for, and attitudes towards different communication formats for evidence summary findings amongst guideline development group members, including healthcare providers, policy makers, and patient representatives. To achieve this, the proposed MMSR will answer the following questions: How and to what degree do different summary formats (digital, visual, audio) for presenting evidence synthesis findings impact the end user's understanding of the review findings? What are end users' preferences for and attitudes towards these formats?

Protocol

The proposed systematic review will be conducted in accordance with the Joanna Briggs Institute (JBI) Manual for Evidence Synthesis, which details the methodology for mixed methods systematic reviews (MMSRs).

Eligibility criteria

As this is an MMSR, we will include quantitative (i.e., randomised controlled trials), qualitative, and mixed methods studies evaluating the effectiveness of and/or preferences for and attitudes towards evidence summary formats. We will exclude conference abstracts, case reports, case series, editorials, and letters. Further details regarding eligibility criteria are given within the review-relevant sections below. We are interested in studies involving stakeholders such as policy makers, healthcare providers, and health systems managers, as well as other GDG members such as clinicians, patient representatives, and methodologists such as systematic review authors. We will exclude studies where the sole participants are students, the general population (those not involved in the clinical guideline development process), or journalists, as communication to these populations is more complex given a wide variety of confounding factors. We will also exclude studies related to clinical decision making for individual patients. We have followed the Population, Intervention, Comparison, Outcome (PICO) format for the quantitative review (Table 1) and the Sample, Phenomenon of Interest, Design, Evaluation, Research type (SPiDER) format for the qualitative review (Table 2) and will present unique aspects of each methodological approach within the relevant sections below.
Table 1.

PICO for the quantitative review of effectiveness.

Population: Members of guideline development groups (e.g., policy makers, decision makers, healthcare professionals, methodologists, patient representatives)
Intervention: A summary format which communicates evidence synthesis findings
Comparator: Alternative summary formats
Outcomes: Quantitative estimates of effectiveness and acceptability
Table 2.

SPiDER for the qualitative evidence synthesis.

Sample: Members of guideline development groups (e.g., policy makers, decision makers, clinicians, methodologists, patient representatives)
Phenomenon of interest: How summary formats impact decision-making and understanding of evidence synthesis findings
Design: Focus groups, interviews, questionnaires, open-ended survey responses
Evaluation outcomes: Views, attitudes, opinions, experiences, perceptions, beliefs, feelings, understanding
Research type: Qualitative studies and mixed-methods studies with primary qualitative data collection
Due to the complexity of stakeholders, evidence synthesis types, and summary formats, there is a high potential for extensive confounding. Randomised controlled trials (RCTs), which control for such confounding, are therefore the most appropriate design to evaluate the effectiveness of the interventions in question. We have thus chosen to restrict inclusion to RCTs (e.g., parallel, crossover, cluster, stepped-wedge) in order to focus on the performance and impact of summary formats in optimal settings. We will include studies where the intervention is any summary mode (e.g., visual, audio, text-based) which communicates the findings from an evidence synthesis (e.g., systematic review, qualitative evidence synthesis, rapid review) to policy makers and decision makers, including guideline development groups (GDGs). We anticipate that included summary formats may encompass visual abstracts, summary of findings tables, one-page summaries, podcasts, Graphical Overview of Evidence Reviews (GofER) diagrams, and others. We will not exclude a summary format simply because we did not explicitly list it in our search strategy (Table 3). Studies in which the summaries are one component of a multi-component intervention will be excluded, as will decision aids for direct patient care.
Table 3.

Ovid MEDLINE search strategy.

Ovid MEDLINE(R) and Epub Ahead of Print, In-Process, In-Data-Review & Other Non-Indexed Citations, Daily and Versions(R), 1946 to April 13, 2021. Search results are shown in parentheses.

1. exp Administrative Personnel/ (41,017)
2. ((health OR healthcare OR hospital*) ADJ2 (administrator* OR analyst* OR decisionmak* OR decision-mak* OR manager* OR official* OR policymak* OR policy-mak* OR policy OR policies OR provider)).tw. (73,550)
3. exp Decision Making/ OR exp Policy Making/ OR exp Health Policy/ (332,820)
4. ((decision* OR policy OR policies) ADJ2 (analys* OR analyz* OR maker* OR making OR develop*)).tw. (213,763)
5. (analyst* OR clinician OR decision-mak* OR decisionmak* OR doctor OR guideline development group* OR advisory group OR knowledge user* OR knowledge-user* OR policy-mak* OR policymak* OR stakeholder* OR stake-holder* OR stake holder* OR end user* OR end-user*).tw. (359,621)
6. 1 OR 2 OR 3 OR 4 OR 5 (731,506)
7. exp Evidence-Based Practice/ OR exp "Review Literature as Topic"/ OR Meta-Analysis as Topic/ OR exp Technology Assessment, Biomedical/ (128,395)
8. (knowledge ADJ2 synthes*).tw. (1,051)
9. (meta*) ADJ2 (analysis OR regression OR review OR overview OR synthes*) (251,572)
10. meta-analy* OR meta-regression OR meta-review* OR meta-synthes* OR megasynthes* (231,916)
11. (evidence) ADJ2 (synthes* OR summar*) (20,882)
12. (quantitative OR qualitative OR systematic OR rapid OR scoping OR realist OR Cochrane OR evidence) ADJ2 (review* OR overview*) (270,363)
13. HTA OR health technology assessment (6,744)
14. 7 OR 8 OR 9 OR 10 OR 11 OR 12 OR 13 (530,362)
15. exp Data Visualization/ OR exp Health Communication/ OR exp Implementation Science/ (3,579)
16. summary of findings OR summary-of-findings OR table* OR tabular (156,130)
17. plain-language summar* OR plain language summar* (1,758)
18. infographic* OR podcast* OR visual abstract* OR fact box* OR summary format OR blogshot OR blog shot OR podcast OR video OR GRADE evidence profile OR policy brief OR league table* OR bulletin OR infogram OR 1-page summary OR SUPPORT summary OR brief* OR summar* OR graphic* OR audio (1,011,424)
19. (communicat* OR presentat*) ADJ2 (finding*) (2,500)
20. 15 OR 16 OR 17 OR 18 OR 19 (1,156,243)
21. perceive OR understand OR understanding OR acceptability OR effectiveness OR efficacy OR satisfaction OR usability (2,703,824)
22. usefulness OR credibility OR clarity OR comprehensive OR appeal OR appropriateness OR preference$ (693,195)
23. 21 OR 22 (3,247,866)
24. 6 AND 14 AND 20 AND 23 (3,830)
For studies examining the effectiveness of evidence summary formats, we will include any comparison to an alternative active comparator. Studies where the comparison is no intervention (e.g., the plain full text of a manuscript) will be excluded. We do not anticipate finding evidence syntheses with no form of summary or abstract, as international organisations, journals, and reporting guidelines consider a summary a mandatory component of any report or peer-reviewed manuscript. Our primary outcomes of interest are:

Effectiveness
- User understanding of, knowledge of, and/or beliefs in key findings of the evidence synthesis (e.g., changes in knowledge scores about the topic included in the summary)
- Self-reported impact on decision-making
- Intervention metrics (e.g., the time needed to read the summary, expressed language accessibility issues, or scale scores)

Acceptability
- Preferences and attitudes (e.g., Likert scales reporting user satisfaction, perceptions, readability)

We will not include outcomes related to health literacy, numeracy, or risk communication in patient-centred care. We are aligning our definition of 'health literacy' with a recent systematic review on its meaning, which is complex in nature and composed of '(1) knowledge of health, healthcare and health systems; (2) processing and using information in various formats in relation to health and healthcare; and (3) ability to maintain health through self-management and working in partnerships with health providers.' As impacts on one's individual health or clinical care are not the main focus of this review, we are focusing only on aspect (2): impacts on one's understanding of knowledge constrained to the specific topic that an evidence summary covers. Primary studies investigating the understanding and acceptability of evidence summary formats will include qualitative studies (e.g., interviews or focus groups).
Mixed-methods studies with primary qualitative data collection will be included if they meet the inclusion criteria for a randomised controlled trial and where it is possible to extract the findings derived from the qualitative research. We prioritised the inclusion of qualitative data from primary studies over free text from questionnaire surveys, as we hypothesised that primary data would be richer and thicker and thus more informative. Our primary outcomes of interest relate to participants' views of and experiences with summary formats. This includes their perceptions of the impact of summary formats on their understanding, knowledge, and decision making, and participants' beliefs, attitudes, and feelings towards usability and readability.

Information sources and search strategy

The following databases will be searched from inception to May 2021: Ovid MEDLINE, EMBASE, APA PsycINFO, CINAHL (Cumulative Index to Nursing and Allied Health Literature), Web of Science, and the Cochrane Library. The search strategy for Ovid MEDLINE includes a combination of keywords and medical subject heading (MeSH) terms for GDG members, evidence syntheses, and formats for the communication of findings (see Table 3). As we are looking for primary research on the impacts or effects of interventions and attitudes towards them, we do not anticipate that this literature will be found in grey literature sources such as government or agency websites. Additionally, as we anticipate that controlled trials will have short assessment (and follow-up) time points, we do not believe that searching trial registries will benefit our study. This search strategy has been informed by the strategies of similar reviews in the same topic area. In line with the Peer Review of Electronic Search Strategies (PRESS) statement, we engaged a medical librarian after the MEDLINE search was drafted but before it was translated to the other databases. As we are including a range of study designs, we did not apply study design-specific filters. Although we have used PICO and SPiDER approaches for the quantitative and qualitative reviews respectively, we used the PICO format to inform the search strategy, as previous researchers have found that the SPiDER approach for search strategies may be too restrictive and specific. Language and date restrictions will not be applied. Backwards citation identification on all eligible studies will be performed using the citationchaser Shiny application built in R (version 1.4). This application performs backwards citation screening (reviewing reference lists) and internally de-duplicates results. Each step of the search is summarised for transparency, and references are provided as a downloadable RIS file.

Data management and selection process

All citations will be downloaded and stored in the Zotero reference manager (version 5.0). For ease, rather than using Zotero for screening, title and abstract screening will be managed using Covidence. Two reviewers will independently screen titles and abstracts against the inclusion criteria. Disagreements about inclusion will be resolved through discussion. If it is still unclear whether a paper should be included, both authors will review the full version of the paper and discuss it again. If there is still disagreement, a third review author will be consulted. The screening process will be documented in the final manuscript using the Preferred Reporting Items for Systematic Reviews (PRISMA) flow diagram, and a supplemental file detailing the reason for exclusion of each individual study will be made publicly available.

Data collection

Two review authors will independently extract data from each of the included studies using a standardised data-extraction form. If there are disagreements or discrepancies, the two authors will discuss and consult with a third review author if needed. Where possible, qualitative outcomes such as themes and categories will be extracted into the standardised form. In parallel, articles containing qualitative methods will also be imported into NVivo 12 for line-by-line coding of information related to outcomes. This separate but parallel data extraction is important for our analytical approach to the qualitative data, which is discussed in greater detail in the Qualitative analysis section. The following information will be extracted using the pilot-tested standardised data-extraction form:

- Bibliometric data (first author, title, journal, year of publication, language)
- Study characteristics (setting, participant demographics, country, study design, intervention, comparators, theoretical framework, analytical approach)
- Intervention characteristics, collected following the structure of the Template for Intervention Description and Replication (TIDieR) checklist to provide detailed information on the why, what, who, how, where, and when of the intervention described
- Primary and secondary outcomes (quantitative estimates of effectiveness and acceptability; qualitative expressions of views, attitudes, opinions, experiences, perceptions, beliefs, feelings, and understanding)
- Data from the domains listed within the JBI critical appraisal tools for qualitative and quantitative studies
- Funding sources

If information is missing from the study report, we will contact the authors to inquire about these gaps. We will provide narrative syntheses in lieu of imputing missing data.

Bias and quality assessments

The JBI critical appraisal checklists will be used to assess the individual randomised controlled trials and qualitative studies. Two review authors will independently complete the critical appraisal checklist for each included study. Differences will be resolved through discussion and consultation with a third review author if necessary. These checklists will provide useful contextual information about the included studies, such as information about performance bias. Checklist items cover factors such as intervention assessors and their reflexivity, which are important to consider as participant attitudes towards summary formats may be influenced by external factors such as who created the summary (e.g., their own organisation vs. an external one). An assessment of the overall certainty of evidence using the GRADE or ConQual approach is not recommended for a JBI MMSR. This is due to the complexities of the analysis, wherein the data from separate quantitative and qualitative evidence are transformed and integrated. If the quantitative data allow for a meta-analysis, a forest plot will be generated using R. If we find a low number of studies, large treatment effects, few events per trial, or trials of similar sizes, we will use the Harbord test for publication bias as it reduces the false-positive rate. Egger's test for funnel plot asymmetry will be used to investigate small-study effects and publication bias.
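Egger's test is, in essence, a regression of each study's standardized effect on its precision; an intercept clearly different from zero suggests funnel-plot asymmetry. The following minimal Python sketch illustrates the calculation with invented effect sizes and standard errors (not data from this review); in practice one would use an established implementation such as metabias() in the R package meta.

```python
import math

# Hypothetical effect sizes and standard errors -- illustrative only,
# chosen so that smaller studies (larger SEs) show larger effects.
effects = [0.8, 0.5, 0.45, 0.3, 0.25, 0.2]
ses     = [0.40, 0.30, 0.25, 0.15, 0.12, 0.10]

def egger_intercept(effects, ses):
    """Egger's regression: standardized effect ~ precision.
    Returns the intercept and its standard error; a non-zero
    intercept suggests funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precision
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # residual variance and the intercept's standard error
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx))
    return intercept, se_int

b0, se0 = egger_intercept(effects, ses)
print(f"Egger intercept {b0:.2f} (SE {se0:.2f}); |t| = {abs(b0 / se0):.2f}")
```

A large |t| for the intercept here reflects the deliberately asymmetric toy data; with real trial data the test has low power when few studies are included, which is one reason the Harbord variant is mentioned above.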

Quantitative analysis

A narrative synthesis will be performed; if appropriate, quantitative data from randomised controlled trials will be synthesised using meta-analysis. Heterogeneity will first be explored by assessing pertinent study characteristics that may vary across the included studies (e.g., participant group or summary format type). If sufficient data are available, subgroup analyses (e.g., participant groups such as medical professionals versus policy makers, or intervention type such as visual abstracts versus plain abstracts) will be conducted. Furthermore, statistical heterogeneity will be explored according to statistical guidance on heterogeneity: an estimated I² of 50–90% represents substantial heterogeneity. We will weigh this against a χ² test for heterogeneity (p < 0.10). If our results indicate an I² of 50% or greater together with a low χ² p-value, the heterogeneity may not be due to chance, and we will not pool results into a meta-analysis. If data can be pooled, effect sizes and accompanying 95% confidence intervals will be reported as either relative risks (for dichotomous and dichotomised ordinal data) or standardised mean differences (for continuous data). Where data are available, we will compare and contrast reported findings on preference and whether or not preference aligns with improvement in impact outcomes such as knowledge.
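To make the pooling and heterogeneity statistics above concrete, here is a minimal Python sketch of inverse-variance fixed-effect pooling with Cochran's Q and I². The study names, standardised mean differences (SMDs), and standard errors are invented for illustration; in practice this would be done with an established R package such as meta or metafor.

```python
import math

# Hypothetical per-study SMDs and standard errors -- illustrative
# values only, not data from any real trial.
studies = [
    ("Trial A", 0.42, 0.15),
    ("Trial B", 0.31, 0.20),
    ("Trial C", 0.55, 0.12),
]

def pool_fixed(studies):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2."""
    weights = [1.0 / se ** 2 for _, _, se in studies]
    pooled = sum(w * es for (_, es, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(w * (es - pooled) ** 2 for (_, es, _), w in zip(studies, weights))
    df = len(studies) - 1
    # I^2: percentage of total variation due to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, q, i2

pooled, ci, q, i2 = pool_fixed(studies)
print(f"Pooled SMD {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), "
      f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

With these toy values Q falls below its degrees of freedom, so I² is truncated to 0%, i.e. no heterogeneity beyond chance; an I² in the 50–90% band described above would instead argue against pooling.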

Qualitative analysis

Where possible, qualitative findings will be pooled using the meta-aggregation approach, which allows a reviewer to present the findings of included studies as intended by the original authors. This approach organises and categorises findings based on similarity in meaning and avoids re-interpretation; it therefore does not violate the paradigms and approaches used by the original study authors. It also enables meaningful, generalisable recommendations for practitioners and policy makers. If we are unable to pool findings (i.e., create and present categories), most likely due to an insufficient number of identified studies, a narrative summary will be presented.

Mixed methods synthesis

Following JBI guidance for MMSRs, we will use a convergent segregated approach, conducting the quantitative and qualitative syntheses separately but in parallel and then integrating the findings of each. The segregated design integrates evidence through a method called configuration, which arranges complementary evidence into a single line of reasoning. After the separate quantitative and qualitative analyses are conducted, they will be organised into a coherent whole, as they cannot be directly combined nor can one refute the other. Converging or complementary data assumes that, while the streams of evidence may ask different research questions, they relate to different aspects or dimensions of the same phenomenon of interest. Data will be triangulated during the interpretation stage, comparing quantitative and qualitative findings side by side to identify areas of convergence, inconsistency, or contradiction in the data. We do not aim to transform the qualitative data into quantitative data, nor vice versa. There are several methods for integrating qualitative and quantitative evidence syntheses in a convergent segregated MMSR. We will use a thematic synthesis method for integration, which groups together similar codes and develops descriptive categories (or themes) to create an overall summary of findings. Initial coding will be performed independently by two authors, who will meet to discuss similarities and differences in coding and begin grouping codes into descriptive categories. A drafted summary of findings will be created by one author, reviewed by both, and discussed until a final version is agreed upon. Two authors will discuss the descriptive categories and, as a group, will draft the final analytical categories with accompanying detailed descriptions.
If we have a sufficient number of included studies for meta-analysis (minimum of three), we will report information according to participant subgroups (e.g., clinicians versus policy makers), and outcomes (e.g., understanding, acceptability, etc.).

Registration and amendments

As the focus of this review is not evaluating health-related interventions or outcomes, we will not register the protocol on PROSPERO. However, we will preregister the study on the Open Science Framework. If an amendment to this protocol is necessary, the date of each amendment will be given alongside the rationale and a description of the change(s). This information will be detailed in an appendix accompanying the final systematic review publication. Changes will not be incorporated into the protocol itself.

Dissemination of information

Findings will be disseminated as peer-reviewed publications. Data generated from the work proposed within this protocol will be made available on the aforementioned OSF project page.

Discussion

This review will summarise the evidence on the effectiveness and acceptability of different evidence synthesis summary formats. By including a variety of evidence summary types and stakeholder participants, results can help tease apart the real-world complexity of guideline development groups and provide an overview of what summary formats work for which stakeholders in what circumstances. It is expected that review findings can support decision-making by policy-makers and GDGs, by establishing the best summary formats for presenting evidence synthesis findings.

Data availability

No data are associated with this article.

Reporting guidelines

OSF: PRISMA-P checklist for ‘Evidence synthesis summary formats for clinical guideline development group members: a mixed-methods systematic review protocol’. https://doi.org/10.17605/OSF.IO/SK4NX Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0). I am happy with the responses of the authors. Is the study design appropriate for the research question? Yes Is the rationale for, and objectives of, the study clearly described? Yes Are sufficient details of the methods provided to allow replication by others? No Are the datasets clearly presented in a useable and accessible format? Not applicable Reviewer Expertise: NA I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. Thank you for the opportunity to review this interesting protocol. Overall, the research plan is sound and described in detail. I have only a few minor suggestions to propose to make the protocol more precise. In Methods sections, under Eligibility criteria, I propose that you list all research designs that will be included in the analysis. Currently, only RCT-s are included, but it is not stated will all types of RCT-s will be included (parallel, crossover, etc). On top of that, what about non-randomized trials and control before and after studies? Will those studies be included, too? The sentence: "We will not be including outcomes related to health literacy, numeracy, nor risk communication in patient-centred care." is a bit problematic. Based on my understanding, health literacy is the examination of knowledge and/or understanding of health information, which will be the case of most research studies you include in the review. Maybe you wanted to state that you will not include studies that assessed health literacy with specific tests of health literacy? 
Would you consider including Open Science Framework registry as one of the databases in which you would perform a literature search? Minor points: "If we have a sufficient number of included studies, subgroup analyses will be performed to investigate differences based on participant groups (e.g., clinicians versus policy makers) and outcomes (e.g., understanding, acceptability, etc.)" What would be considered a sufficient number of studies? I hope those comments will help you in the process. Is the study design appropriate for the research question? Yes Is the rationale for, and objectives of, the study clearly described? Yes Are sufficient details of the methods provided to allow replication by others? Partly Are the datasets clearly presented in a useable and accessible format? Not applicable Reviewer Expertise: Public health; Health communication I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard. Thank you for the opportunity to review this interesting protocol. Overall, the research plan is sound and described in detail. I have only a few minor suggestions to propose to make the protocol more precise. Author’s response: Thank you for the time spent reviewing our manuscript and for your valuable feedback. In Methods sections, under Eligibility criteria, I propose that you list all research designs that will be included in the analysis. Currently, only RCT-s are included, but it is not stated will all types of RCT-s will be included (parallel, crossover, etc). On top of that, what about non-randomized trials and control before and after studies? Will those studies be included, too? Author’s response: Thank you for this question. 
We have edited the text to explicitly give examples of the types of RCTs that will be included ‘(e.g., parallel, crossover, cluster, stepped-wedge, etc.)’. The other reviewer had a related question regarding the inclusion of different study designs. We have chosen to restrict study designs for this review to randomized controlled trials and qualitative studies, as we believe these designs are best able to either 1) control for confounding/interacting factors that could influence one’s attitude towards a format, such as who is disseminating the message (RCTs), or 2) fully explore the multiple factors influencing one’s perception (qualitative studies). We chose not to include non-randomised designs or observational studies due to the complexity of factors that influence one’s attitude towards a format. These study designs also relate directly to the objectives of the review (i.e., effectiveness and preferences for/attitudes towards formats).

Reviewer: The sentence "We will not be including outcomes related to health literacy, numeracy, nor risk communication in patient-centred care." is a bit problematic. Based on my understanding, health literacy is the examination of knowledge and/or understanding of health information, which will be the case in most research studies you include in the review. Maybe you wanted to state that you will not include studies that assessed health literacy with specific tests of health literacy?
Author's response: Thank you for this comment. We believe that health literacy outcomes are quite distinct from knowledge/understanding outcomes. We recognize that the definition of health literacy has evolved over time and can often mean different things to different people, so we have added additional information to the end of the outcomes section to clarify the focus of our project and the outcomes we are including.
We have added text to the manuscript to clarify: ‘We are aligning our definition of ‘health literacy’ with a recent systematic review on its meaning, which is complex in nature and composed of ‘(1) knowledge of health, healthcare and health systems; (2) processing and using information in various formats in relation to health and healthcare; and (3) ability to maintain health through self-management and working in partnerships with health providers.’ As impacts on one’s individual health or clinical care are not the main focus of this review, we are focusing only on one aspect (2) -- impacts on one’s understanding of knowledge, which is constrained to the specific topic that an evidence summary is covering.’ [1]

Reviewer: Would you consider including the Open Science Framework registry as one of the databases in which you would perform a literature search?
Author's response: Thank you for this suggestion. We did not search the Open Science Framework, as this registry is more often suited to observational studies, and qualitative study protocols are still rare in the published literature. The search functionality of OSF (using Lucene) is also quite limited; for example, using ‘evidence synthesis summary formats’ would return over 23,000 registrations to screen. We did not search any registries for this study and will add this as a limitation in the discussion section of the final manuscript detailing the MMSR findings.

Reviewer: Minor points: "If we have a sufficient number of included studies, subgroup analyses will be performed to investigate differences based on participant groups (e.g., clinicians versus policy makers) and outcomes (e.g., understanding, acceptability, etc.)" What would be considered a sufficient number of studies?
Author's response: Thank you for this question. To our knowledge, there is no numeric threshold for subgroup analyses in an MMSR.
We edited the text to say ‘If we have a sufficient number of included studies for meta-analysis (minimum of 3), we will report information according to participant subgroups (e.g., clinicians versus policy makers) and outcomes (e.g., understanding, acceptability, etc.)’, as the analysis will be descriptive in any case.

Reviewer report 2 and author responses (Reviewer Expertise: Methods)
Is the study design appropriate for the research question? Yes
Is the rationale for, and objectives of, the study clearly described? Yes
Are sufficient details of the methods provided to allow replication by others? No
Are the datasets clearly presented in a useable and accessible format? Not applicable
I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined in the comments below.
Reviewer: This is a review underpinned by clear and interesting questions that warrant a mixed-methods strategy. There is room for improvement, though, in motivating the choice of concepts and fine-tuning the methods.
Author's response: Thank you for your review and thoughtful suggestions for our manuscript.

Reviewer: The biggest issue I have is the relation between content and form in the choice of the mediums to be compared. The authors tend to compare forms (?), but attitude towards form is not fully independent of the way messages are framed, nor of by whom they are disseminated. How do the authors intend to deal with this in their comparison? (e.g., people might prefer a form such as video, but it might fail to bring the message across, as participants are distracted by the form).
Author's response: Thank you for this comment. This is an interesting point: the studies we include in our review may well demonstrate a disconnect between which formats participants prefer and which formats actually produce improvements in outcomes such as knowledge. If included studies present these data, we will be able to explore this in more detail in our analysis, but we suspect from the studies we have seen that this level of detail will not be available. Based on the importance of this suggestion, we will include this as a point of consideration in the discussion of the main manuscript. In the protocol, we have made the following amendments to clarify our position: we have added additional information to the ‘bias and quality assessments’ section in the methods regarding this point: ‘These checklists will provide useful contextual information about the included studies, such as information about performance bias. Checklist items cover things like intervention assessors and their reflexivity, which are important factors to consider as participant attitudes towards summary formats may be influenced by external factors such as who created the summary (e.g., their own vs.
an external organisation).’ We agree that the issue is complex and that there are many confounders and interacting factors that could influence one’s attitude towards a format. Relatedly, we have chosen to restrict study designs for this review to randomized controlled trials and qualitative studies, as we believe these designs are best able to control for factors such as who is disseminating the message (RCTs) and to more fully explore the multiple factors influencing one’s perception (qualitative studies). For example, these attitudes can be explored more fully with probes during semi-structured interviews. We have included the use of bias-assessment tools to further investigate issues such as the potential influence of the study assessors (e.g., those who designed or created the format) and their recipients (i.e., the readers/participants). We have also kept our outcomes broad so as not to take a one-dimensional view of preference for one format over another. We can describe differences between preferences for formats and quantitative outcomes such as knowledge in the analysis where the data are available in included studies. We have clarified this in the analysis section of the methods as follows: “Where data is available, we will compare and contrast reported findings on preference and whether or not preference is aligned with improvement of outcomes of impact such as knowledge.”

Reviewer: I was not fully sure why things like surveys measuring attitudes and opinions of people were not taken along in this review. (I am aware of the fact that guidance on the inclusion of such designs in reviews is scarce, though; consider it a free offer to work towards a more comprehensive review type.) This is particularly relevant because, in the RCTs, authors tend to rely on self-reported measures with Likert scales. Here, a comparison could sit with time, rather than a comparator. Does people's attitude change when exposed to different formats over time?
Author's response: Thank you for this comment. The aim of this mixed-methods systematic review is to evaluate the effectiveness and acceptability of different evidence synthesis summary formats for CGDG members. For effectiveness, we chose to restrict inclusion to RCTs as the most appropriate design to evaluate the effectiveness of the interventions in question. Therefore, observational studies such as surveys were not included for the effectiveness question, as they would not be appropriate to answer the question posed, as described in the methods section. We did not explicitly exclude RCT designs that had interventions with ‘repeated exposures,’ multiple timepoints for outcome assessment, or longitudinal follow-ups, so such studies will be included if identified. However, these designs seem unlikely in this context. While policy makers, clinicians, and methodologists could potentially be involved in multiple guideline development groups, this would likely be over quite an extended period of time, and such studies would likely have objectives related to the decision-making process and attitude change over time. RCTs exploring this seem unlikely, and repeated surveys would not provide effectiveness data. To explore opinions and attitudes, we included qualitative studies and mixed-methods studies with primary qualitative data collection. In keeping with Cochrane guidance, we do not consider a questionnaire survey to be a qualitative study. We prioritized the inclusion of qualitative data from primary studies over free text from questionnaire surveys, as we hypothesized that primary data would be richer and thicker and thus more informative.
We have clarified this in the methods section as follows: “We prioritized the inclusion of qualitative data from primary studies over free text from questionnaire surveys as we hypothesized primary data would be richer and thicker and thus more informative.”

Reviewer: Sometimes the arguments are not fully clear. In the introduction, attention is paid to problems complicating the conduct of systematic reviews. This is not the core focus of the project. The project is linked to the translation phase of turning review findings into evidence summaries. This is a matter of content, form and channels rather than the SR pre-track issues. It would therefore be more appropriate to highlight the diversity of actors involved in producing guidelines in relation to how communication and dissemination channels have changed because of that. I am a big fan of the authors discussing things like podcasts and multimedia accounts as examples. Naturally, this is linked to a society moving into multimodality and the limitations of numbers and narratives in reaching out to or working with different publics.
Author's response: Thank you for this feedback. We have edited the introduction to truncate the paragraph on systematic reviews and to relate more closely to the core focus of the project. The process of conducting a systematic review is related to the time and resources the team will have to create the summary formats themselves, so we have largely kept the information about the ‘pre-track’ problems. Furthermore, we are focusing on multiple types of evidence syntheses, and the landscape has recently evolved in this regard. This also directly relates to the search strategy used and the inclusion criteria.

Reviewer: The PICO would perhaps work better if clinicians were changed into professionals. It would make it relevant to a broader target group. Also, the comparator could be no summary at all, for example, reading findings from reviews straight into the guideline procedure.
Why are these explicitly excluded from the review?
Author's response: Thank you for this question. We have changed clinicians to healthcare professionals to be more inclusive. We have excluded no summary at all as a comparator because we do not think it is appropriate to compare the reading experience of a summary format versus none. We have added text to the ‘Quantitative systematic review’ section: ‘Studies where the comparison is no intervention (e.g., the full text of an evidence synthesis manuscript) will be excluded.’ Producing the full text of an evidence synthesis without some form of summary is not ‘usual care’. Internationally, organisations, journals and reporting guidelines consider a summary to be a mandatory component of any report or peer-reviewed manuscript (e.g., Cochrane, PRISMA guidance, etc.). Therefore, providing no summary or abstract at all is not an appropriate comparator.

Reviewer: Do the authors only include qualitative studies where an intervention is evaluated by participants in terms of meaningfulness or applicability? Or do they also include qualitative studies where attitudes, viewpoints and options for form/channel/content are discussed from a more theoretical point of view, or in pilot cases? It is not clear where the selection sits on the qualitative side, but the data collection part suggests the first option.
Author's response: Thank you for this question. We have chosen the SPIDER approach for the qualitative evidence synthesis, as we are not restricting qualitative studies to those that evaluate an intervention in terms of meaningfulness or applicability. Our evaluation outcomes include the ‘views, attitudes, opinions, experiences, perceptions, beliefs, feelings, understanding’ of participants. The information in the data collection section regarding the qualitative data is just to provide a summary overview of a study’s main findings (outcomes) to help with descriptive analyses.
We edited the text to emphasize that NVivo12 coding will be conducted ‘in parallel’ (rather than ‘however, included’ as previously worded).

Reviewer: Perhaps the authors may want to bring a rationale for why they want to limit the formats to textual and audio-visual productions, as there are many more forms available through which evidence is and can be communicated (such as installations, theatre, etc.). Perhaps these forms haven't been studied extensively in the context of guidelines, but in a protocol phase we are not sure what we will find yet.
Author's response: Thank you for this suggestion. We did not explicitly exclude any formats, nor did we limit the formats to textual or audio-visual productions. In the list provided, we used ‘e.g., visual, audio, text based’ (for example) to list potential summary modes we may find, rather than ‘i.e.’ (that is) to list summary modes we are restricting our search to. Our search strategy included specific formats that we thought we might find, but we would not exclude other types of communication that were not explicitly listed. We have added text to the ‘quantitative systematic review’ section to clarify this: ‘We will not exclude a summary format if it is one that we did not explicitly list in our search strategy (Table 3).’

Reviewer: I would find it more practical if the authors would discuss their position towards critical appraisal and how to structure this, instead of putting it in the data collection part as an extraction category (it is not clear what will actually be extracted, or for what purpose). Also, critical appraisal checklists for primary studies have a different purpose than GRADE and ConQual, which work at the review level (which is clearly not the focus here when talking about the pool of included studies). I think the authors need to spell the two out in more detail, as they are used in different phases of the review process.
Author's response: Thank you for this suggestion.
In reporting this protocol, we have followed the JBI manual, and as such the ‘bias and quality assessments’ subheading should be its own section and does not fall under the data collection section. The JBI critical appraisal tools are mentioned in the data collection section because they are part of the standard data-extraction form that will be used by the reviewers. These should be two separate sections; apologies if there was a formatting error. We have edited the text within the ‘data collection’ and ‘bias and quality assessments’ sections to clarify what will be extracted and for what purpose, and to address the GRADE/ConQual concern. The text in the data collection section now reads: ‘The following information will be extracted using the pilot-tested standardized data-extraction form’ and ‘Data from the domains listed within the JBI critical appraisal tools for qualitative and quantitative studies.’ We agree that critical appraisal checklists for primary studies have a different purpose than GRADE and ConQual. We have reworded the information in the ‘bias and quality assessments’ section to clarify that the JBI checklists are for individual studies and that there is no checklist recommended for the final assessment of the entirety of the evidence (similar in purpose to GRADE and ConQual) for a mixed-methods systematic review.

Reviewer: Would quality of a study not rather fit a sensitivity analysis than a subgroup analysis? What purpose would it have as a subgroup in the review? It has disappeared in the mixed-methods part as a subgroup analysis, so I wouldn't mention it as a subgroup in other parts of the text.
Author's response: Thank you for this question. We recognize that the text in the quantitative analysis section was unclear. We have edited the text, deleting the ‘for example’ list and restricting it to the subgroups of interest (participant group or summary format type).
We note subgroup analyses for participant groups or intervention types, as these involve comparisons across the different groups. There are no planned sensitivity analyses for study quality, as there will likely be far too much heterogeneity in outcomes (and in the studies in general) to be able to compare the different methods.

Reviewer: What do the authors mean with the phrase "If textual pooling is not available, a narrative summary will be presented."? Is a narrative summary not already a textual pooling? Unclear.
Author's response: Thank you for this question, and apologies for this issue. We have edited the text accordingly: ‘If we are unable to pool findings together (i.e., create and present categories), likely due to an insufficient number of studies identified, a narrative summary will be presented.’

Reviewer: While configuration might have different meanings, it is generally not commonly used as a metaphor for a separate type of synthesis. Configuration would make the different parts unrecognizable in the whole. I don't think meta-aggregation would achieve this level of integration. Perhaps integration is better than configuration here.
Author's response: Thank you for this suggestion. We have reworded the mixed-methods synthesis section to make our approach clearer and have added references to the specific sections of the JBI manual that we are drawing on regarding configuration and the thematic analysis (chapter 8 and section 8.2 for methodology; not section 2, which covers systematic reviews of qualitative evidence and details the meta-aggregation approach). We are following JBI’s guidance for MMSRs and have chosen to use a convergent segregated approach. We have added additional text to clarify that ‘There are several methods for integrating qualitative and quantitative evidence syntheses [2] in a convergent segregated MMSR.
We will use a thematic synthesis method for integration, which groups together similar codes and develops descriptive categories (or themes) to create an overall summary of findings.’

Reviewer: Also, thematic synthesis is not the same as meta-aggregation, and line-by-line coding is certainly not a part of meta-aggregation. I believe the authors need to make an informed choice between qualitative evidence synthesis approaches and then follow the guidance through.
Author's response: Thank you for this comment. Apologies for the confusion, but we are not using a meta-aggregation approach for the mixed-methods synthesis; we use it only for the qualitative analysis, which is done prior to the integration of the two separate streams of evidence. We have edited the mixed-methods synthesis section to further clarify our methodological approach, which is in accordance with chapter 8 of the JBI manual.