Simon Lewin, Andrew D Oxman, John N Lavis, Atle Fretheim.
Abstract
This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision makers. The reliability of systematic reviews of the effects of health interventions is variable. Consequently, policymakers and others need to assess how much confidence can be placed in such evidence. The use of systematic and transparent processes to make such assessments can help to prevent the introduction of errors and bias into these judgements. In this article, we suggest five questions that can be considered when deciding how much confidence to place in the findings of a systematic review of the effects of an intervention. These are: 1. Did the review explicitly address an appropriate policy or management question? 2. Were appropriate criteria used when considering studies for the review? 3. Was the search for relevant studies detailed and reasonably comprehensive? 4. Were assessments of the studies' relevance to the review topic and of their risk of bias reproducible? 5. Were the results similar from study to study?
Year: 2009 PMID: 20018115 PMCID: PMC3271835 DOI: 10.1186/1478-4505-7-S1-S8
Source DB: PubMed Journal: Health Res Policy Syst ISSN: 1478-4505
Figure 1. Finding and assessing systematic reviews to inform decisions about policy and programme options.
AMSTAR - A MeaSurement Tool to Assess Reviews (from [22])
| Item | Response |
|---|---|
| 1. Was an 'a priori' design provided? | □ Yes □ No □ Can't answer □ Not applicable |
| 2. Was there duplicate study selection and data extraction? | □ Yes □ No □ Can't answer □ Not applicable |
| 3. Was a comprehensive literature search performed? | □ Yes □ No □ Can't answer □ Not applicable |
| 4. Was the status of publication (i.e. grey literature) used as an inclusion criterion? | □ Yes □ No □ Can't answer □ Not applicable |
| 5. Was a list of studies (included and excluded) provided? | □ Yes □ No □ Can't answer □ Not applicable |
| 6. Were the characteristics of the included studies provided? | □ Yes □ No □ Can't answer □ Not applicable |
| 7. Was the scientific quality of the included studies assessed and documented? | □ Yes □ No □ Can't answer □ Not applicable |
| 8. Was the scientific quality of the included studies used appropriately in formulating conclusions? | □ Yes □ No □ Can't answer □ Not applicable |
| 9. Were the methods used to combine the findings of studies appropriate? | □ Yes □ No □ Can't answer □ Not applicable |
| 10. Was the likelihood of publication bias assessed? | □ Yes □ No □ Can't answer □ Not applicable |
| 11. Was the conflict of interest included? | □ Yes □ No □ Can't answer □ Not applicable |
Interpreting the results of systematic reviews of effects
The following questions can help to guide policymakers in interpreting the findings of systematic reviews of effects (adapted from [ ]).*

* There is some overlap between the questions listed here and those intended to guide assessment of the reliability of systematic reviews. This is because reliability is an important element in assessing and understanding the results of a systematic review.
Assessing how much confidence can be placed in the findings of systematic reviews of qualitative studies and systematic reviews of economic studies
An increasing number of systematic reviews of qualitative studies are being undertaken. These use a wide range of approaches, including narrative synthesis, meta-ethnography and realist review. As well as providing important information in their own right, reviews of qualitative studies can also inform and supplement systematic reviews of effects [ ].
Examples of sources searched in systematic reviews
Review 1:
1. Electronic databases of published studies:
   • MEDLINE
   • Cochrane Central Register of Controlled Trials (CENTRAL) and specialised Cochrane Registers (EPOC and Consumers and Communication Review Groups)
   • Science Citations
   • EMBASE
   • CINAHL (Cumulative Index to Nursing and Allied Health Literature)
   • Healthstar
   • AMED (Allied and Complementary Medicine Database)
   • Leeds Health Education Effectiveness Database
2. Bibliographies of studies assessed for inclusion
3. All contacted authors were asked for details of additional studies

Review 2:
1. Electronic databases of published studies:
   • MEDLINE
   • EMBASE
   • Cochrane Central Register of Controlled Trials (CENTRAL)
2. Electronic databases of conference abstracts:
   • AIDSearch Conference databases
3. Electronic databases of ongoing trials:
   • ClinicalTrials.gov
   • Current Controlled Trials
4. Contacted researchers and relevant organisations in the field
5. Checked the reference lists of all studies identified by the above methods and examined any systematic reviews, meta-analyses, or prevention guidelines identified during the search process

Review 3:
1. Electronic databases:
   • The Specialized Register of the Cochrane Dementia and Cognitive Improvement Group
   • Cochrane Central Register of Controlled Trials (CENTRAL)
   • MEDLINE
   • EMBASE
   • PsycINFO (a database of psychological literature)
   • CINAHL
   • SIGLE (System for Information on Grey Literature in Europe)
   • LILACS (Latin American and Caribbean Health Science Literature)
2. Electronic databases of conference abstracts:
   • ISTP (Index to Scientific and Technical Proceedings)
   • INSIDE (British Library Database of Conference Proceedings and Journals)
3. Electronic databases of theses:
   • Index to Theses (formerly ASLIB) (United Kingdom and Ireland theses)
   • Australian Digital Theses Program
   • Canadian Theses and Dissertations
   • DATAD - Database of African Theses and Dissertations
   • Dissertation Abstracts Online (USA)
4. Electronic databases of ongoing trials: a large range of such databases was searched
What should policymakers do when different systematic reviews that address the same question have different results?
When looking for evidence to inform a particular policy decision, it is not uncommon to identify more than one relevant systematic review. Sometimes the results of these reviews may differ, leading review authors to draw different conclusions about the effects of an intervention. This scenario differs from one in which the findings of two or more reviews agree but researchers or others disagree on the interpretation of those findings [ ].

The following series of questions, designed by Jadad and colleagues, can be used to help identify and address the causes of discordance [ ]:

• Do the reviews address the same question? If not, the review chosen should be the one that addresses a question closest to the policy question for which evidence is needed, or that assesses the outcomes most relevant to it.
• If the reviews address the same question, do they include the same trials or primary studies? If they do not, the review that includes the studies most relevant to the policy question being considered should be selected.
• If the reviews include the same studies, are the reviews of the same quality? If not, the higher quality review should be used.

Where both reviews are relevant, for example where they address different aspects of the same question, it may be useful to draw evidence from both.
Figure 2. Ways in which reviews may be unreliable and misleading.