| Literature DB >> 29162130 |
Carole Lunny, Sue E Brennan, Steve McDonald, Joanne E McKenzie.
Abstract
BACKGROUND: Overviews of systematic reviews attempt to systematically retrieve and summarise the results of multiple systematic reviews. Methods for conducting, interpreting and reporting overviews are in their infancy. To date, there has been no evidence map of the methods used in overviews, thus making it difficult to determine the gaps and priorities for methods research. Our objectives were to develop and populate a comprehensive framework of methods for conducting, interpreting and reporting overviews (stage I) and to create an evidence map by mapping studies that have evaluated overview methods to the framework (stage II).Entities:
Keywords: Evaluation of methods; Evidence mapping; Evidence synthesis; Meta-review; Overview; Overview methods; Overviews of systematic reviews; Review of reviews; Systematic review methods; Umbrella review
Year: 2017 PMID: 29162130 PMCID: PMC5698938 DOI: 10.1186/s13643-017-0617-1
Source DB: PubMed Journal: Syst Rev ISSN: 2046-4053
Fig. 1Summary of the research reported in each paper
Fig. 2Stages in the development of an evidence map of overview methods
Data extracted from methods studies evaluating search filters for SRs
| Data extracted | Description |
|---|---|
| Study characteristics | Citation details |
| Primary objective | |
| Search filter evaluation details | Type of search filter evaluation (categorised as single search filter evaluation, comparative search filter evaluation, comparative database evaluation) |
| Health field filter designed for | |
| Number of filters evaluated | |
| Number of filters developed by author | |
| Databases filters tested in and the interface(s) | |
| Technique to identify and/or create gold standard | |
| Sample size of the gold standard set or validation set | |
| Performance measures (e.g. sensitivity/recall, specificity) | |
| Search dates of the gold standard or validation set | |
| Name of filters evaluated | |
| Risk of bias criteria | Existence of a protocol |
| | Validation on a data set distinct from the derivation set |
Fig. 3Flowchart of studies retrieved for both stages I and II. *The 42 stage I studies contributed to multiple steps
Fig. 4Flowchart of stage II studies of search filter evaluations
Characteristics of stage I descriptive studies
| Steps in the conduct of an overview | ||||||
|---|---|---|---|---|---|---|
| Citation | Type of study | Summary description of the article | Purpose, objectives, scope | Eligibility criteria | Search methods | Data extraction |
| | Article describing methods for overviews | • Describes the usefulness of overviews for decision makers and summarises some procedural steps to be undertaken | ✓✓ | ✓✓ | ✓✓ | |
| | Guidance for undertaking overviews | • Early guidance providing the structure and procedural steps for the production of an overview | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Article describing methods for overviews | • Describes criteria for explaining differences in overlapping M-As with discordant conclusions | ✓ | ✓✓ | ✓✓ | |
| | Article describing methods for overviews | • Describes the methodological challenges in the production of overviews that mediate existing synthesised knowledge to policy makers | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Study examining methods used in a cohort of overviews | • Identifies possible aims of an overview as being to detect unintended effects, improve the precision of effect estimates, or explore heterogeneity of effect across disease groups | ✓✓ | ✓✓ | | |
| | Guidance for undertaking overviews | • Provides updated Cochrane guidance on the purpose and conduct of overviews | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Article describing methods for overviews | • Describes steps in the conduct of an overview and methods to address challenges (for example dealing with overlap in primary studies) | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Article describing methods for overviews | • Mentions the issue of missing or inadequately reported data | ✓ | | | |
| | Article describing methods for overviews | • Describes some challenges inherent in the eligibility criteria process (defining AMSTAR scoring as inclusion criteria, inclusion of non-Cochrane reviews alongside Cochrane reviews) | ✓✓ | ✓✓ | | |
| | Study examining methods used in a cohort of overviews | • Describes steps in the conduct of overviews and methods used | ✓ | ✓✓ | ✓✓ | ✓✓ |
| | Study examining methods used in a cohort of overviews | • Mentions challenges relating to the eligibility criteria process in terms of SR quality, search dates, the strength of the evidence to include, etc. | ✓ | ✓ | | |
| | Article describing methods for overviews | • Briefly defines overviews, mentions the purposes in conducting an overview, and discusses some methodological challenges | ✓ | | | |
| | Article describing methods for overviews | • Defines umbrella reviews as a pre-step to network meta-analysis | ✓✓ | | | |
| | Study examining methods used in a cohort of overviews | • Briefly describes several steps in the conduct of overviews including determining the eligibility criteria and search methods | ✓ | ✓ | ✓ | |
| | Guidance for undertaking overviews | • Provides guidance as to what methods should be used at which step in the conduct of an overview | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Commentary or editorial that discusses methods for overviews | • Mentions four methodological shortcomings of one overview on surgical interventions as a letter to the editor | ✓ | | | |
| | Article describing methods for overviews | • Mentions the challenges encountered when the authors conducted three overviews including missing information when extracting data | ✓ | ✓ | | |
| | Article describing methods for overviews | • Presents a pilot reporting/quality checklist | ✓ | ✓ | ✓ | ✓ |
| | Study examining methods used in a cohort of overviews | • Describes the methods used in a cohort of overviews | ✓ | ✓✓ | ✓✓ | ✓ |
| | Article describing methods for overviews | • Describes the methods recommended in 8 HTA guideline documents related to overviews | ✓ | ✓ | | |
| | Study examining methods used in a cohort of overviews | • Describes the process of searching for primary studies in an overview | ✓✓ | | | |
| | Article describing methods for overviews | • Describes the steps to undertake a complex review that includes multiple SRs, which is similar to overviews | ✓✓ | ✓✓ | ✓✓ | ✓✓ |
| | Article describing methods for overviews | • Presents tabular methods to deal with the preparation of overview evidence | ✓✓ | ✓✓ | | |
| | Guidance for undertaking overviews | • Provides Cochrane guidance on the definition of an overview as compared to SRs | ✓✓ | ✓✓ | ✓✓ | ✓ |
| | Study examining methods used in a cohort of overviews | • Examines a cohort of Cochrane reviews for methods used | ✓✓ | | | |
| | Article describing methods for overviews | • Presents a pilot reporting/quality checklist | ✓✓ | ✓✓ | | |
| | Article describing methods for overviews | • Describes some steps and challenges in undertaking an overview, namely search methods, study selection, quality assessment, and presentation of results | ✓✓ | ✓✓ | ✓✓ | ✓ |
| | Article describing methods for overviews | • Describes some steps in undertaking an overview and the challenges inherent in production of overviews | ✓✓ | ✓✓ | | |
| | Study examining methods used in a cohort of overviews | • Describes the process of including trials in overviews | ✓✓ | ✓✓ | ✓✓ | |
AMSTAR A MeaSurement Tool to Assess systematic Reviews, CMIMG Comparing Multiple Interventions Methods Group, JBI Joanna Briggs Institute, PRISMA Preferred Reporting Items for Systematic reviews and Meta-Analyses, HTA health technology assessment, MECIR Methodological Expectations of Cochrane Intervention Reviews, SR systematic review, M-As meta-analyses
aIndicates a poster presentation
✓✓ Indicates a study describing one or more methods
✓ Indicates a study mentioning one or more methods
Specification of purpose, objectives and scope
| Step | Sub-step | Methods/approaches | Sources |
|---|---|---|---|
| 1.0 Determine stakeholder involvement in planning the overview | | | |
| | 1.1 Agree on who is responsible for setting the overall purpose and objectives | | |
| | | 1.1.1 Commissioners of the overview | Whitlock 2008 |
| | | 1.1.2 Researcher or author team | Becker 2008 |
| | | 1.1.3 Multiple/all stakeholders in collaboration | Caird 2015 |
| | 1.2 Determine the extent and approach to stakeholder involvement in defining the purpose, objectives and scope of the overview (i.e. who, on what aspects, at what stage(s), how) | | Caird 2015 |
| 2.0 Define the purpose, objectives and scope | | | |
| | 2.1 Define the purpose of the overview | | |
| | | 2.1.1 Map the type and quantity of available evidence (e.g. types of interventions, outcomes, populations/settings, study designs but not effects) | Becker 2008 |
| | | 2.1.2 Compare multiple interventions with the intent of drawing inferences about the comparative effectiveness of the interventions for the same condition, problem or population | Becker 2008 |
| | | 2.1.3 Summarise the effects of an intervention for the same condition, problem or population where different outcomes are addressed in different SRs | Becker 2008 |
| | | 2.1.4 Summarise the effects of an intervention across conditions, problems or populations (e.g. “borrowing strength” when there is sparse data for a single condition and a similar mechanism of action for the intervention is predicted across conditions) | Becker 2008 |
| | | 2.1.5 Summarise unexpected (including adverse) effects of an intervention across conditions, problems or populations | Becker 2008 |
| | | 2.1.6 Identify and explore reasons for heterogeneity in the effects of an intervention (e.g. by examining reasons for discordant results or conclusions across SRs) | Bolland 2014 |
| | | 2.1.7 Other purposes | CMIMG 2012 |
| | 2.2 Confirm that an overview is the appropriate type of study for addressing the purpose and objectives, as opposed to other types of reviews (i.e. intervention review, network meta-analysis) | | |
| | | 2.2.1 Use a decision algorithm | Becker 2008 |
| | | 2.2.2 Use other reasoning (triggers), for example, a new or updated SR might be more appropriate than an overview when SRs: (i) are not available, or have insufficient overlap with the overview question/PICO, (ii) have methodological shortcomings (including not being up-to-date), (iii) are discordant and the reason for discordance cannot be identified (e.g. by methodological differences), and (iv) need independent confirmation (or disconfirmation) (e.g. where SR authors have conflicts of interest such as industry ties or funding) | Chen 2014 |
| | 2.3 Determine any constraints that will restrict the scope of the overview (e.g. time, staffing, skill set) | | Caird 2015 |
| | 2.4 Define the scope of the overview taking into account 2.1–2.3 | | |
| | | 2.4.1 Narrow scope - based on a well-defined question (specific PICOs) or methodological criteria restrictions (i.e. date range of eligible literature, sources searched, publication types and study designs, extent and quality of data extracted, type of synthesis undertaken) | Baker 2014 |
| | | 2.4.2 Broad scope - based on a broadly defined question with diverse and multiple PICOs elements, or no methodological restrictions | Baker 2014 |
| | 2.5 Define the objectives using PICO elements (or equivalent) to develop an answerable question | | Baker 2014 |
CMIMG Comparing Multiple Interventions Methods Group, JBI Joanna Briggs Institute, NSAIDs nonsteroidal anti-inflammatory drugs, PICOs Population, Intervention, Comparison, Outcome, and Study design, RoB risk of bias, SRs systematic reviews
Specification of eligibility criteria
| Step | Sub-step | Methods/approaches | Sources |
|---|---|---|---|
| 1.0 Plan the eligibility criteria | | | |
| | 1.1 Determine PICO eligibility criteria for the overview (and setting and timing if applicable) | | Becker 2008 |
| | 1.2 Determine PICO eligibility criteria for SRs | | |
| | | 1.2.1 Select only SRs that are similar (or narrower) in scope to the overview PICO elements (i.e. exclude SRs that include out-of-scope interventions/populations in addition to the intervention/population addressed by the overview) | Becker 2008 |
| | | 1.2.2 Select all SRs that address the PICO elements, including those broader in scope than the overview (i.e. SRs that include the intervention/population addressed by the overview, plus other out-of-scope interventions/populations). This may involve selecting: (i) any SR, irrespective of whether separate data are available for the subgroup of interest or (ii) limiting to SRs that present separate data for the subgroup of interest | Becker 2008 |
| | 1.3 Determine criteria (mechanisms) to select outcomes where there are multiple | | |
| | | 1.3.1 Include all outcomes reported in included SRs | Becker 2008 |
| | | 1.3.2 Select one or more outcomes using pre-specified criteria, for example: (i) outcomes judged important by subject specialists (e.g. consumers, policy makers), (ii) primary outcomes, and (iii) outcomes common to more than one SR | Caird 2015 |
| | | 1.3.3 Select one or more outcomes using pre-specified decision rules (e.g. combine selection criteria in an algorithm) | Inferred method |
| | 1.4 Determine methodological eligibility criteria for SRs | | |
| | | 1.4.1 Include all SRs that meet the PICO criteria (i.e. no methodological criteria applied) | Caird 2015 |
| | | 1.4.2 Select SRs that meet minimum quality criteria or take a particular methodological approach | Becker 2008 |
| | 1.5 Determine eligibility criteria to deal with SRs with overlap | | |
| | | 1.5.1 Include all SRs that meet the PICO, irrespective of overlap | Cooper 2012 |
| | | 1.5.2 Select one SR from multiple addressing the same question using pre-specified methodological criteria as outlined in 1.4.2 | Cooper 2012 |
| | | 1.5.3 Select one SR from multiple addressing the same question using pre-specified decision rules (e.g. combine one or more eligibility criteria in an algorithm) | Cooper 2012 |
| | | 1.5.4 Exclude SRs that do not contain any unique primary studies, when there are multiple SRs | Pieper 2014 |
| | 1.6 Determine whether to consider additional primary studies for inclusion | | |
| | | 1.6.1 Do not include primary studies | Becker 2008 |
| | | 1.6.2 Include primary studies if pre-specified eligibility criteria are met, for example: (i) when a SR is not up-to-date, (ii) when a SR is inconclusive (i.e. new studies may overturn the findings of a SR), (iii) when the included SRs provide incomplete coverage of evidence in relation to the overview PICO (e.g. missing one or more interventions, population subgroup, study design), and (iv) when there are concerns about the methods SRs used to identify and select studies | Baker 2014 |
| | | 1.6.3 Include primary studies using pre-specified decision rules to determine eligibility (e.g. combine one or more eligibility criteria in an algorithm for selection) | Pieper 2014 |
| 2.0 Plan the study selection process | | | |
| | 2.1 Determine the number of overview authors required to select studiesa | | |
| | | 2.1.1 Independent screening at all stages by 2 or more authors | Becker 2008 |
| | | 2.1.2 One author screening at all stages | Hartling 2012 |
| | | 2.1.3 One author screening titles/abstracts, 2 or more screening full text | Hartling 2012 |
| | | 2.1.4 One screened at all stages, 2nd confirmed | Hartling 2012 |
| | | 2.1.5 One screened at all stages, 2nd confirms if uncertainty | Hartling 2012 |
AHRQ’s EPC Agency for Healthcare Research and Quality’s Evidence-based Practice Center, AMSTAR A MeaSurement Tool to Assess systematic Reviews, CMIMG Comparing Multiple Interventions Methods Group, JBI Joanna Briggs Institute, PICOs Population, Intervention, Comparison, Outcome, and Study design, RCT randomised controlled trial, SRs systematic reviews
aAdaptation of the step from SRs to overviews. No methods evaluation required, but special consideration needs to be given to unique issues that arise in conducting overviews
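Sub-steps 1.3.3, 1.5.3 and 1.6.3 above describe combining pre-specified criteria into a decision rule (an algorithm) for selecting one SR or outcome from several. A minimal sketch of what such a rule could look like in practice; the specific criteria (AMSTAR score, then recency of the search) and the field names are illustrative assumptions, not prescribed by the framework:

```python
from datetime import date

def select_review(reviews):
    """Pick one SR from several addressing the same question (cf. sub-step
    1.5.3): a hypothetical decision rule preferring the higher AMSTAR score,
    breaking ties by the more recent search date."""
    return max(reviews, key=lambda r: (r["amstar"], r["search_date"]))

# Invented candidate SRs for illustration.
candidates = [
    {"id": "SR-A", "amstar": 9, "search_date": date(2015, 6, 1)},
    {"id": "SR-B", "amstar": 9, "search_date": date(2016, 2, 1)},
    {"id": "SR-C", "amstar": 7, "search_date": date(2017, 1, 1)},
]
best = select_review(candidates)  # SR-A and SR-B tie on AMSTAR; SR-B is newer
```

Pre-specifying the rule in the protocol (rather than choosing ad hoc at extraction time) is what makes the selection reproducible.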
Search methods
| Step | Sub-step | Methods/approaches | Sources |
|---|---|---|---|
| 1.0 Plan the sources to search | | | |
| | 1.1 Determine the type of sources to search | | |
| | | 1.1.1 Select the types of databases to search (e.g. SR databases (e.g. Cochrane, Epistemonikos), prospective SR registers (e.g. PROSPERO), general bibliographic databases (e.g. EMBASE, PubMed), or grey literature databases (e.g. conference databases, government websites)) | Becker 2008 |
| | | 1.1.2 Select other types of sources (e.g. reference checking, forward citation searching, handsearching key journals)a | Cooper 2012 |
| | | 1.1.3 Select a combination of 1.1.1–1.1.2 | Cooper 2012 |
| 2.0 Plan the search strategy for retrieval of SRs | | | |
| | 2.1 Determine the search filter to use in general databases | | |
| | | 2.1.1 Select a published SR filter (e.g. EMBASE, MEDLINE, PubMed) | Cooper 2012 |
| | | 2.1.2 Develop a new search filter based on a conceptual approach or a textual analysis approach | Baker 2014 |
| 3.0 Plan how primary studies will be retrieved, if eligibility criteria determine that primary studies should be included | | | |
| | 3.1 Determine the sequence for searching | | |
| | | 3.1.1 Run a parallel search strategy for both SRs and primary studies simultaneously | Baker 2014 |
| | | 3.1.2 Run a sequential search strategy first for SRs and second for primary studies (i.e. either develop a strategy to search for primary studies, or use the search strategies of the included SRs to search for primary studies) | Pieper 2014 |
| | 3.2 Use pragmatic/expedient approaches to retrieve primary studies | | Caird 2015 |
| | 3.3 Select a combination of 3.1–3.2 | | |
CMIMG Comparing Multiple Interventions Methods Group, JBI Joanna Briggs Institute, PROSPERO International Prospective Register of Systematic Reviews, SRs systematic reviews
aAdaptation of the step from SRs to overviews. No methods evaluation required, but special consideration needs to be given to unique issues that arise in conducting overviews
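To illustrate sub-step 2.1.1 concretely, one published filter is PubMed's Systematic Reviews subset, which is invoked by appending `systematic[sb]` to a topic search (the topic terms below are placeholders, not part of the filter):

```text
("condition terms" AND "intervention terms") AND systematic[sb]
```

Validated filters for other databases (e.g. the Ovid MEDLINE and EMBASE strategies evaluated in the stage II studies below) work the same way: a fixed block of filter terms combined with AND against the overview's topic strategy.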
Data extraction
| Step | Sub-step | Methods/approaches | Sources |
|---|---|---|---|
| 1.0 Plan the data elements to extract | | | |
| | 1.1 Determine the data to extract on the characteristics of SRsa | | Becker 2008 |
| | 1.2 Determine the data required to assess which SRs address the overview question and allow assessment of the overlap across SRsa | | Smith 2011 |
| | 1.3 Determine data to extract about the results from the SRs for each relevant primary outcome | | |
| | | 1.3.1 Extract M-A results | Becker 2008 |
| | | 1.3.2 Extract numeric trial results | Thomson 2013 |
| | | 1.3.3 Extract narrative results | Bolland 2014 |
| | | 1.3.4 Extract a combination of 1.3.1–1.3.3 | |
| | | 1.3.5 Extract risk of bias assessment (overall assessment, or domain/item level data, or both) and certainty of the evidence | Becker 2008 |
| | 1.4 Determine the data to extract from primary studiesa | | |
| | | 1.4.1 Extract numerical trial results | Caird 2015 |
| | | 1.4.2 Extract data required to assess risk of bias for each domain or item | Hartling 2012 |
| | 1.5 Develop a data extraction forma | | Becker 2008 |
| 2.0 Plan the data extraction process | | | |
| | 2.1 Determine the sources where data will be obtained from | | |
| | | 2.1.1 SRs | Becker 2008 |
| | | 2.1.2 Primary studies | Caird 2015 |
| | | 2.1.3 Registry entries (for SRs and/or trials) | Inferred method |
| | | 2.1.4 A combination of the above | Caird 2015 |
| | 2.2 Determine how overlapping information across SRs will be handled | | |
| | | 2.2.1 Extract information from all SRs | Bolland 2014 |
| | | 2.2.2 Extract information from only one SR based on a priori eligibility criteria | Cooper 2012 |
| | 2.3 Determine how discrepant data across SRs will be handled in data extraction | | |
| | | 2.3.1 Extract all data, recording discrepancies | Becker 2008 |
| | | 2.3.2 Extract data from only one SR based on a priori eligibility criteria | Cooper 2012 |
| | | 2.3.3 Extract data element (e.g. effect estimates, quality assessments) from the SR which meets decision rule criteria | Bolland 2014 |
| | | 2.3.4 Reconcile discrepancies through approaches outlined in 2.4 | Bolland 2014 |
| | 2.4 Determine additional steps to deal with missing data from SRs, or when there is variation in information reported across SRs | | |
| | | 2.4.1 Retrieve reports of the primary studies | Bolland 2014 |
| | | 2.4.2 Contact SR or trial authors, or both, for missing info and/or clarification | Bolland 2014 |
| | | 2.4.3 Search SR or trial registry entries for information | Inferred method |
| | | 2.4.4 A combination of the above approaches | Bolland 2014 |
| | | 2.4.5 Do not take additional steps to deal with missing data or discrepancies | Becker 2008 |
| | 2.5 Pilot the data extraction forma | | Cooper 2012 |
| | 2.6 Determine the number of overview authors required to extract dataa | | |
| | | 2.6.1 Single, double, or more | Becker 2008 |
| | | 2.6.2 Data extraction versus data checking | Becker 2008 |
| | 2.7 Determine if authors (co-)authored one or several of the reviews included in the overview, and if yes, plan safeguards to avoid bias in data extraction | Buchter 2015 |
CMIMG Comparing Multiple Interventions Methods Group, JBI Joanna Briggs Institute, M-A meta-analysis, SRs systematic reviews
aAdaptation of the step from SRs to overviews. No methods evaluation required, but special consideration needs to be given to unique issues that arise in conducting overviews
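Sub-step 1.2 above (extracting the data needed to assess overlap across SRs) is often operationalised by building a citation matrix of primary studies against SRs and computing a summary measure; one published measure is the corrected covered area (CCA) described by Pieper 2014. A minimal sketch, assuming the inclusion data are held as a mapping from each primary study to the SRs that include it:

```python
def corrected_covered_area(inclusion):
    """Corrected covered area (CCA) for overlap of primary studies across SRs.

    inclusion: dict mapping primary-study id -> set of SR ids including it.
    CCA = (N - r) / (r*c - r), where N is the total number of inclusions
    (ticks in the citation matrix), r the number of primary studies (rows),
    and c the number of SRs (columns).
    """
    srs = set().union(*inclusion.values())        # all SRs appearing anywhere
    r, c = len(inclusion), len(srs)
    n = sum(len(v) for v in inclusion.values())   # total inclusions
    return (n - r) / (r * c - r)

# Invented example: 3 primary studies across 2 overlapping SRs.
cca = corrected_covered_area({
    "study-1": {"SR-A", "SR-B"},   # included in both SRs
    "study-2": {"SR-A"},
    "study-3": {"SR-B"},
})
```

Here N = 4, r = 3, c = 2, giving CCA = (4 − 3) / (6 − 3) = 1/3, i.e. a third of the potential overlap area is covered.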
Methods and approaches for addressing common scenarios unique to overviews
| | | Methods/approaches proposed in the literaturea ||
|---|---|---|---|
| | Scenario for which authors need to plan | Eligibility criteria | Data extraction |
| 1 | Reviews include | 1.4.2 | 1.2 |
| 2 | Reviews report | 1.4.2 | 2.3 (2.3.1–2.3.4) |
| 3 | Data are | 1.6.2, 1.6.3 | 2.4 (2.4.1–2.4.5) |
| 4 | Reviews provide incomplete coverage of the overview question (e.g. missing comparisons, populations) | 1.6.2, 1.6.3 | 1.2 |
| 5 | Reviews are not up-to-date | 1.4.2 | 2.1.2, 2.1.4 |
| 6 | Review methods raise concerns about bias or quality | 1.4.2 | 1.2 |
aThe methods/approaches could be used in combination
Characteristics of stage II evaluation of methods studies
| First Author Year | Primary objective | Existence of a protocol | Study design | Health field the filter designed for | # of filters evaluated (# filters developed by the author) | Database (interfaces) | Technique to identify and/or create a gold standard | Sample size of the gold standard set or validation set (n) | Validation on a data set distinct from the derivation data | Performance measures used | Search dates for the gold standard or validation set | Name of filters evaluated (number of filters) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | Assess search filters for child health SRs in PubMed | NR | Comparative search filter evaluation | Child health | 9 | PubMed | Handsearching, Developed based on database searches | 387 | Yes | Sensitivity/recall, precision | Handsearch 1994, 1997, 2000, 2002, and 2004; DARE up to 2004, and year 2006 | PubMed filter 2006 |
| | Propose and evaluate a range of search strategies to identify SRs in MEDLINE | NR | Search filter evaluation, Comparative search filter evaluation | Medicine (general and internal) | 15 (11) | MEDLINE (Ovid) | Handsearching | 288 | No | Sensitivity/recall, precision | 1992 and 1995 | Boynton 1998 (eleven) |
| | Evaluate search strategies for finding SRs in PsycINFO | NR | Search filter evaluation | Psych. | N/A | PsycINFO | Handsearching | 58 | No | Sensitivity/recall, precision, specificity, accuracy | 2000 | Eady 2008 |
| | Identify SRs of adverse effects in two major databases | NR | Search filter evaluation | Adverse effects | N/A | DARE (CDSR and CRD) | Developed based on database searches | 270 | No | Sensitivity/recall, precision | 1994 to 2005 | Golder 2006 |
| | Develop and validate the health-evidence.ca SR filter and compare its performance to other filters | NR | Search filter evaluation, Comparative search filter evaluation | Public health | 31 (3) | MEDLINE, EMBASE, and CINAHL | Handsearching, Developed based on database searches | 219 | Yes | Sensitivity/recall, precision, specificity, NNR | 2004/2005 | health-evidence.ca SR filter - Lee 2012 (three) |
| | Develop optimal search strategies in MEDLINE for retrieving SRs | NR | Search filter evaluation, Comparative search filter evaluation | Medicine, family practice, nursing | 10 (4) | MEDLINE | Handsearching | 753 | Yes | Sensitivity/recall, specificity, precision | 2000 | Montori 2005 (four) |
| | Evaluate seven databases to determine their coverage of SRs of hypertension | NR | Comparative database evaluation | Hypertension | N/A | Cochrane, DARE, EMBASE, Epistemonikos, MEDLINE, PubMed, and TRIP | Developed based on database searches | 440 | N/A | Sensitivity/recall, precision | 2003–2015 | SR filters incorporated into the databases; MEDLINE used Montori 2005 |
| | Evaluate a search strategy for identifying SRs | NR | Search filter evaluation | Treatment, diagnosis, prognosis, causation, quality improvement, or economics | N/A | MEDLINE (PubMed) | Handsearching, Developed based on database searches | 104 | No | Sensitivity/recall, precision | 1999–2000 | PubMed n.d. |
| | Improve methods to derive a more objective search strategy to identify SRs in MEDLINE | NR | Search filter evaluation, Comparative search filter evaluation | Treatment, diagnosis, prognosis, causation | 7 (5) | MEDLINE (Ovid) | Handsearching journals | 110 | No | Sensitivity/recall, precision | 1995 and 1997 | White 2001 (five) |
| | Develop search strategies that optimize the retrieval of SRs from EMBASE | NR | Search filter evaluation | Internal medicine, general practice, mental health, nursing practice | N/A | MEDLINE | Handsearching journals | 220 | No | Sensitivity/recall, specificity, precision, accuracy | 2000 | Wilczynski 2007 |
| | Determine the consistency and accuracy of indexing SRs and meta-analyses in MEDLINE | NR | Search filter evaluation | Medicine | N/A | MEDLINE | Developed based on database searches | N/A | No | Sensitivity/recall, specificity, precision, accuracy | 2000 | Wilczynski 2009 |
| | Determine how well the previously validated broad and narrow Clinical Queries retrieve SRs | NR | Search filter evaluation | Therapy, diagnosis, prognosis, etiology | N/A | MEDLINE, EMBASE, CINAHL, and PsycINFO | Developed based on database searches | N/A | No | Sensitivity/recall, specificity, precision | 2000 | Wilczynski 2011 |
| | Compare sensitivity and specificity of search strategies for detecting reviews in MEDLINE and EMBASE | NR | Comparative search filter evaluation | Medicine | 7 | MEDLINE, EMBASE | Handsearching journals | 753 in MEDLINE, 220 in EMBASE | N/A | Sensitivity/recall, specificity, precision | 2000 | Montori 2005 (three) |
| | Design optimal search strategies for locating review articles in CINAHL | NR | Search filter evaluation | Nursing and allied health | N/A | CINAHL | Handsearching journals | 127 | No | Sensitivity/recall, specificity, precision, accuracy | 2000 | Wong 2006 |
| | Determine whether sensitive and specific search strategies exist to select SRs | NR | Search filter evaluation | Etiology, prognosis, therapy, diagnosis | N/A | SWISH v.1.1.1 | Developed based on database searches | 209 | No | Sensitivity/recall, specificity | Not reported | Zacks 1998 |
Sensitivity/recall is defined as the proportion of relevant reports correctly retrieved by the filter; Precision is the number of relevant reports retrieved divided by the total number of records retrieved by the filter; NNR is the inverse of the precision; Specificity is the proportion of irrelevant reports correctly not retrieved by the filter; Accuracy is the proportion of all reports that are correctly classified
CDSR Cochrane Database of Systematic Reviews, CRD Centre for Review and Dissemination, n.d. no date, NNR number needed to read, N/A not applicable, NR not reported, PH public health, SRs systematic reviews, SWISH Simple Web Indexing System for Humans
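The performance measures defined above are all simple functions of the 2×2 retrieval table (relevant/irrelevant × retrieved/not retrieved). A minimal sketch with invented counts, not data from any of the studies in the table:

```python
def filter_performance(tp, fp, fn, tn):
    """Search-filter performance measures from a 2x2 retrieval table.

    tp: relevant reports retrieved by the filter
    fp: irrelevant reports retrieved
    fn: relevant reports missed
    tn: irrelevant reports correctly not retrieved
    """
    sensitivity = tp / (tp + fn)                 # a.k.a. recall
    precision = tp / (tp + fp)
    nnr = 1 / precision                          # number needed to read
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "precision": precision,
            "nnr": nnr, "specificity": specificity, "accuracy": accuracy}

# Illustrative only: a filter retrieves 90 of 100 relevant reports plus
# 210 irrelevant ones from a gold-standard set of 1000 records.
m = filter_performance(tp=90, fp=210, fn=10, tn=690)
```

This makes the trade-off in the table concrete: a more sensitive filter typically retrieves more irrelevant records, lowering precision and raising the number needed to read.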