Michelle Pollock, Ricardo M Fernandes, Lorne A Becker, Robin Featherstone, Lisa Hartling.
Abstract
BACKGROUND: Overviews of reviews (overviews) compile data from multiple systematic reviews to provide a single synthesis of relevant evidence for decision-making. Despite their increasing popularity, there is limited methodological guidance available for researchers wishing to conduct overviews. The objective of this scoping review is to identify and collate all published and unpublished documents containing guidance for conducting overviews examining the efficacy, effectiveness, and/or safety of healthcare interventions. Our aims were to provide a map of existing guidance documents; identify similarities, differences, and gaps in the guidance contained within these documents; and identify common challenges involved in conducting overviews.
Keywords: Evidence synthesis; Evidence-based medicine; Evidence-based practice; Knowledge synthesis; Metasummary; Overview of reviews; Review methods; Scoping review; Systematic reviews; Umbrella review
Year: 2016 PMID: 27842604 PMCID: PMC5109841 DOI: 10.1186/s13643-016-0367-5
Source DB: PubMed Journal: Syst Rev ISSN: 2046-4053
Key characteristics of overviews of reviews
| 1) Overviews should contain a clearly formulated objective designed to answer a specific clinical research question, typically about a healthcare intervention. |
| 2) Overviews should intend to search for and include only systematic reviews (with or without meta-analyses). |
| 3) Overviews should use explicit and reproducible methods to identify multiple systematic reviews that meet their inclusion criteria and to assess the methodological quality of these systematic reviews. |
| 4) Overviews should intend to collect, analyze, and present the descriptive characteristics of their included systematic reviews (and their primary studies) and the quantitative outcome data contained within the systematic reviews. |
Modified from Becker and Oxman and Hartling et al. [3, 4]
Fig. 1Flow diagram of documents through the scoping review
Characteristics of included guidance documents (52 documents produced by 19 research groups)
| Documents that contain explicit methodological guidance for conducting overviews (41 documents produced by 12 research groups) | Documents that describe an author team’s experience conducting one or more published overviews (11 documents produced by 9 research groups) | |||||
|---|---|---|---|---|---|---|
| Research group | Number of documents (Additional file | Years documents were produced | Document formats | Number of documents (Additional file | Years documents were produced | Document formats |
| Cochrane Child Health Field (CHF) | 11a | 2010–2015 | 6 oral presentationsa, 2 internal documents, 2 posters, 1 journal article | 2 | 2011–2013 | 1 journal article, 1 poster |
| Cochrane Comparing Multiple Interventions Methods Group (CMIMG) | 18a | 2008–2015 | 10 oral presentationsa, 5 internal documents, 1 journal article, 1 book chapter, 1 website | – | – | – |
| Cochrane Consumers and Communication Review Group (CCRG) | – | – | – | 1 | 2009 | 1 journal article |
| Cochrane Effective Practice and Organization of Care Review Group (EPOC) | 1 | 2011 | 1 oral presentation | 2 | 2011–2015 | 1 oral presentation, 1 poster |
| Cochrane Musculoskeletal Review Group (CM) | – | – | – | 1 | 2010 | 1 poster |
| Cochrane Public Health Group (CPHG) | 1 | 2014 | 1 journal article | – | – | – |
| Cochrane Stroke Group (CS) | – | – | – | 1 | 2015 | 1 oral presentation |
| Duke University (DukeU) | 1 | 2012 | 1 journal article | – | – | – |
| Dutch Cochrane Centre (DCC) | – | – | – | 1 | 2009 | 1 poster |
| Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI) | 2 | 2015 | 1 journal article, 1 oral presentation | – | – | – |
| Joanna Briggs Institute Umbrella Reviews Methodology Group (JBI) | 4 | 2007–2015 | 2 internal documents, 1 journal article, 1 book chapter | – | – | – |
| Ludwig Boltzmann Institute for Health Technology Assessment (LBI) | – | – | – | 1 | 2015 | 1 journal article |
| Norwegian Knowledge Centre for the Health Services (NOKC) | 1 | 2013 | 1 book chapter | – | – | – |
| Pontifical Xavierian University (PXU) | – | – | – | 1 | 2011 | 1 poster |
| Trinity College Dublin (TCD) | 1 | 2011 | 1 journal article | – | – | – |
| University of Birmingham (UBirm) | 1 | 2012 | 1 journal article | – | – | – |
| University of Dundee (UDun) | – | – | – | 1 | 2004 | 1 journal article |
| Western Journal of Nursing Research (WJNR) | 1 | 2014 | 1 editorial | – | – | – |
| Witten/Herdecke University (WHU) | 2 | 2014 | 2 journal articles | – | – | – |
a Three documents are counted twice because they were produced by authors affiliated with both of these groups (Additional file 3, references A1, A6, and A7). For these three documents, guidance presented by DC, LAB, and RMF was extracted into the CMIMG category, and guidance presented by DT, LH, and MF was extracted into the CHF category.
Definitions
| Indirect comparison: “A comparison of two interventions via one or more common comparators. For example, the combination of intervention effects from AC studies and intervention effects from BC studies may (in some situations) be used to learn about the intervention effect AB.” |
| Network meta-analysis: “An analysis that synthesizes information over a network of comparisons to assess the comparative effects of more than two alternative interventions for the same condition. A network meta-analysis synthesizes direct and indirect evidence over the entire network, so that estimates of intervention effect are based on all available evidence for that comparison. This evidence may be direct evidence, indirect evidence, or mixed evidence. Typical outputs of a network meta-analysis are a) relative intervention effects for all comparisons; and b) a ranking of the interventions.” |
| Non-Cochrane systematic reviews: Systematic reviews published outside of the Cochrane Database of Systematic Reviews. |
| Overlapping systematic reviews: Two or more systematic reviews examining the same intervention for the same disorder. Overlapping systematic reviews will often contain one or more of the same primary studies, which may lead to including the same study’s outcome data in an overview two or more times. |
| Quality of evidence: The confidence we have in the outcome effect estimates, often assessed using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) tool. | |
| Transitivity assumption: “The situation in which an intervention effect measured using an indirect comparison is valid and equivalent to the intervention effect measured using a direct comparison. Specifically, the transitivity assumption states that (the benefit of A over B) is equal to (the benefit of A over C) plus (the benefit of C over B). Equivalently, this may be written as (the benefit of A over C) minus (the benefit of B over C). In practice, transitivity requires similarity; that is, the sets of studies used to obtain the indirect comparison must be sufficiently similar in characteristics that moderate the intervention effect. Transitivity can be thought of as a network meta-analysis extension of the idea of homogeneity in a standard meta-analysis.” |
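The indirect-comparison and transitivity definitions above can be written as a short worked equation (a sketch using generic effect estimates \(\hat{d}\), not notation from this article). Under transitivity, an AB effect can be estimated indirectly from AC and BC evidence, with the variances adding:

```latex
\hat{d}^{\,\text{ind}}_{AB} \;=\; \hat{d}_{AC} \;-\; \hat{d}_{BC},
\qquad
\operatorname{Var}\bigl(\hat{d}^{\,\text{ind}}_{AB}\bigr)
\;=\; \operatorname{Var}\bigl(\hat{d}_{AC}\bigr) \;+\; \operatorname{Var}\bigl(\hat{d}_{BC}\bigr).
```

The added variance is why an indirect estimate is generally less precise than a direct estimate of the same comparison, which a network meta-analysis weighs when combining direct, indirect, and mixed evidence.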
Map of methodological guidance for conducting overviews
| Topic area | CHF | CMIMG | CPHG | DukeU | EPOC | EPPI | JBI | NOKC | TCD | UBirm | WHU | WJNR | Frequency effect size |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Guidance related to the context for conducting overviews (i.e., when and why should you conduct an overview?) | |||||||||||||
| Choosing between conducting an overview and a SR | ✓ | ✓ | 2/12 | ||||||||||
| What types of questions about healthcare interventions can be answered using the overview format? | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7/12 | |||||
| Questions to consider before deciding to conduct an overview | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 6/12 | ||||||
| Author team composition and roles | ✓ | ✓ | ✓ | ✓ | 4/12 | ||||||||
| Target audience of the overview | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 8/12 | ||||
| Guidance related to the process of conducting overviews (i.e., how do you conduct an overview?) | |||||||||||||
| Specifying the scope | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 10/12 | ||
| Searching for SRs | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 11/12 | |
| Selecting SRs for inclusion | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 6/12 | ||||||
| Should an overview include non-Cochrane SRs? | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 9/12 | |||
| Assessing quality of included SRs | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 12/12 |
| Collecting and presenting data on descriptive characteristics of included SRs (and their primary studies) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 6/12 | ||||||
| Collecting and presenting data on quality of primary studies contained within included SRs | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7/12 | |||||
| Collecting, analyzing, and presenting outcome data | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 7/12 | |||||
| Assessing quality of evidence of outcome data | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 6/12 | ||||||
| Interpreting outcome data and drawing conclusions | ✓ | ✓ | ✓ | 3/12 | |||||||||
| Intensity effect size | 13/15 | 15/15 | 3/15 | 9/15 | 8/15 | 11/15 | 11/15 | 8/15 | 10/15 | 3/15 | 5/15 | 8/15 | |
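The "Frequency effect size" and "Intensity effect size" margins of the map above are simple proportions over a binary topic-by-group matrix: each frequency is the fraction of the 12 guidance-producing groups covering a topic, and each intensity is the fraction of the 15 topics a group covers. A minimal sketch with hypothetical toy data (these topic rows, group columns, and checkmark values are illustrative only, not the article's actual map):

```python
# Binary "guidance map": rows = topic areas, columns = research groups.
# 1 means that group's documents provide guidance on that topic (toy data;
# the real map in the article is 15 topics x 12 groups).
topics = ["Specifying the scope", "Searching for SRs", "Assessing quality of included SRs"]
groups = ["CHF", "CMIMG", "JBI", "WJNR"]
guidance = [
    [1, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 1],
]

def frequency_effect_sizes(matrix):
    """Per topic: fraction of groups whose documents cover it (row sum / number of groups)."""
    n_groups = len(matrix[0])
    return [sum(row) / n_groups for row in matrix]

def intensity_effect_sizes(matrix):
    """Per group: fraction of topics its documents cover (column sum / number of topics)."""
    n_topics = len(matrix)
    n_groups = len(matrix[0])
    return [sum(row[j] for row in matrix) / n_topics for j in range(n_groups)]

print(frequency_effect_sizes(guidance))  # one value per topic, e.g. 0.75 = 3 of 4 groups
print(intensity_effect_sizes(guidance))  # one value per group
```

With this convention, the article's "12/12" row (assessing quality of included SRs) is a frequency of 1.0, and CMIMG's "15/15" column is an intensity of 1.0.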
Common challenges involved in conducting overviews
| Topic area | Number of groups contributing challenges (/19) | Summary of challenges identified |
|---|---|---|
| Challenges related to the context for conducting overviews (i.e., when and why should you conduct an overview?) | ||
| Choosing between conducting an overview and a SR | 1 (CMIMG) | |
| What types of questions about healthcare interventions can be answered using the overview format? | 2 (CCRG, CM) | Methods used to conduct overviews may vary according to the type of question (e.g., scope, clinical characteristics) being posed in the overview. |
| Questions to consider before deciding to conduct an overview | 5 (CHF, CMIMG, DCC, JBI, UDun) | Should authors conduct an overview if there are not enough relevant SRs (e.g., if SRs do not address all important interventions)? |
| Author team composition and roles | 2 (CHF, CMIMG) | Overview authors often have limited time. What skills are required for authors wishing to conduct overviews? |
| Target audience of the overview | 0 | No challenges identified. |
| Challenges related to the process of conducting overviews (i.e., how do you conduct an overview?) | ||
| Specifying the scope | 4 (EPPI, LBI, UBirm, UDun) | Defining the scope, and selecting and prioritizing outcomes, can be difficult. The scope of the overview may have almost complete overlap, or very limited overlap, with the scope of the relevant SRs. |
| Searching for SRs | 5 (CHF, CPHG, EPOC, LBI, UBirm) | Search strategies can be complex. It is unclear whether government reports that include both primary studies and SRs should be included in an overview. It is unclear whether and how overview authors should search for primary studies that are not contained within any included SR. |
| Selecting SRs for inclusion | 8 (CHF, CMIMG, DukeU, EPPI, JBI, UBirm, UDun, WHU) | It is unclear whether lower-quality SRs or older SRs should be included or excluded. Decisions surrounding inclusion and exclusion can affect the efficiency, utility, and breadth of the overview. |
| Should an overview include non-Cochrane SRs? | 9 (CHF, CMIMG, CPHG, DukeU, EPOC, EPPI, TCD, WHU, WJNR) | Including |
| Assessing quality of included SRs | 9 (CCRG, CHF, CMIMG, CPHG, EPOC, EPPI, PXU, UBirm, UDun) | Assessing quality of SRs can be difficult and time-consuming. Many different tools could be used to assess SR quality, and some tools designed to assess quality may also assess reporting. There is also uncertainty surrounding how to interpret and apply the results of quality assessments in the context of overviews. |
| Collecting and presenting data on descriptive characteristics of included SRs (and their primary studies) | 11 (CCRG, CHF, CM, CMIMG, DCC, DukeU, EPOC, JBI, LBI, UDun, NOKC) | Data may be missing, inadequately reported, or reported differently across included SRs, and it is unclear what to do when reporting is incomplete (e.g., should the data be extracted from primary studies?). Additionally, data extraction errors in SRs could lead to errors in the overview. |
| Collecting and presenting data on quality of primary studies contained within included SRs | 7 (CCRG, CHF, CM, DCC, EPOC, EPPI, UDun) | Collecting and presenting quality of primary studies can be difficult and time-intensive. Information about the quality of primary studies included in SRs may be missing, inadequately reported, or reported differently across included SRs. For example, different SRs may use different tools to assess quality of primary studies. |
| Collecting, analyzing, and presenting outcome data | 15 (CCRG, CHF, CM, CMIMG, DCC, DukeU, EPOC, EPPI, JBI, LBI, NOKC, UBirm, UDun, WJNR, WHU) | Collecting, analyzing, and presenting outcome data can be difficult, especially when the scope, methods, or results of the included SRs are heterogeneous. Outcome data may be missing, inadequately reported, or reported inconsistently across included SRs, and it is unclear what to do in these situations (e.g., should the data be extracted from primary studies instead?). It is also unclear how best to summarize and report outcome data that comes from |
| Assessing quality of evidence of outcome data | 9 (CCRG, CHF, CM, CPHG, DCC, EPOC, PXU, UDun, WHU) | It may not be possible to simply extract existing GRADE assessments from SRs. However, it may be challenging to conduct (or re-do) GRADE assessments at the overview level, using data from SRs. For example: data needed to assess |
| Interpreting outcome data and drawing conclusions | 6 (CHF, CMIMG, DukeU, EPOC, LBI, WJNR) | Interpreting outcome data and drawing conclusions can be difficult. There is uncertainty surrounding how to interpret outcome data in overviews. It can be difficult to form a coherent judgment when multiple different comparisons from multiple SRs are included in the same overview, and/or when |
CDSR Cochrane Database of Systematic Reviews, DARE Database of Abstracts of Reviews of Effectiveness, EMBASE Excerpta Medica dataBASE, GRADE Grading of Recommendations Assessment, Development and Evaluation, PICO populations, interventions, comparators, and outcomes, SR systematic review
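The "Overlapping systematic reviews" definition above warns that the same primary study's outcome data may be counted two or more times across reviews. One published way to quantify this overlap (not part of this article's guidance map) is the corrected covered area (CCA) of Pieper and colleagues, computed from a citation matrix of primary studies by reviews. A minimal sketch under that assumption:

```python
def corrected_covered_area(citation_matrix):
    """CCA = (N - r) / (r*c - r), where N = total study inclusions across all
    reviews (count of 1s in the matrix), r = number of distinct primary studies
    (rows), and c = number of reviews (columns). 0 means no study appears in
    more than one review; 1 means every review contains every study."""
    r = len(citation_matrix)
    c = len(citation_matrix[0])
    n = sum(sum(row) for row in citation_matrix)
    return (n - r) / (r * c - r)

# Toy example: 3 primary studies across 2 overlapping systematic reviews.
matrix = [
    [1, 1],  # study 1 appears in both reviews (the source of double counting)
    [1, 0],  # study 2 appears only in review 1
    [0, 1],  # study 3 appears only in review 2
]
# CCA for this matrix: (4 - 3) / (3*2 - 3) = 1/3
print(corrected_covered_area(matrix))
```

A high CCA signals that included reviews draw heavily on the same primary studies, so overview authors should be especially careful not to pool the same outcome data more than once.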