Siddharth Singh
Abstract
Systematic reviews, with or without meta-analyses, serve a key purpose in critically and objectively synthesizing all available evidence regarding a focused clinical question, and can inform clinical practice and clinical guidelines. Performing a rigorous systematic review is a multi-step process, which includes (a) identifying a well-defined, focused, clinically relevant question; (b) developing a detailed review protocol with strict inclusion and exclusion criteria; (c) conducting a systematic literature search of multiple databases and unpublished data, in consultation with a medical librarian; (d) meticulous study identification; (e) systematic data abstraction by at least two sets of investigators independently; (f) risk of bias assessment; and (g) thoughtful quantitative synthesis through meta-analysis where relevant. Besides informing guidelines, credible systematic reviews and quality-of-evidence assessment can help identify key knowledge gaps for future studies.
Year: 2017 PMID: 28518130 PMCID: PMC5454386 DOI: 10.1038/ctg.2017.20
Source DB: PubMed Journal: Clin Transl Gastroenterol ISSN: 2155-384X Impact factor: 4.488
Steps in designing and conducting systematic reviews and meta-analysis, and common pitfalls in conducting and interpreting meta-analysis
| Step | Stage | Recommended approach | Common pitfalls |
|---|---|---|---|
| 1 | Develop a focused clinical question | Clinically relevant question, often derived from equipoise observed in literature and clinical practice | •Too broad or too narrow a question, which limits the feasibility and relevance of a systematic review •Performing quantitative synthesis without a clear clinical basis (in absence of equipoise about direction or magnitude of benefit) |
| 2 | Develop systematic review protocol | Use PICO format, develop explicit inclusion and exclusion criteria, and identify outcomes of interest a priori | •Lack of clarity in scope and purpose, due to absence of a detailed a priori protocol |
| 3 | Systematic literature review | Engage medical librarian to conduct a sensitive search pertinent to PICO question, of multiple databases, including conference proceedings, clinical trial registries, and gray literature, as well as recursive search of systematic reviews | •Clinician-designed searches of a single database, which are generally too specific rather than sensitive and may miss several potential studies •Failure to search conference proceedings and trial registries, which may exacerbate the file-drawer problem •Search restricted to English-language publications |
| 4 | Study identification | Screening titles and abstracts and full texts based on inclusion/exclusion criteria by two investigators independently | •Single investigator identifying studies, resulting in potentially missed studies or inappropriate inclusion/exclusion of studies |
| 5 | Data abstraction | Abstract all relevant study data, using piloted data abstraction form, by two investigators independently | •Single investigator abstracting data, without confirmation by another investigator, which may introduce bias or errors •Failure to anticipate and abstract data, which may be inconsistently reported across studies (for example, differences in definition of outcome, drug doses/schedules, co-interventions, etc.) |
| 6 | Risk of bias assessment | Critical assessment of study quality, through a combination of standardized tools and investigator-identified factors that may bias results, in duplicate, independently | •Failure to systematically and critically appraise quality of included studies •Strict adherence to elements reported in risk of bias tools, without adaptation to the focused clinical question, frequently resulting in failure to identify potential sources of bias •Use of quantitative scoring to stratify studies as “high” or “low” quality, failing to recognize that different elements are not weighted equally across clinical questions |
| 7 | Quantitative synthesis or meta-analysis | If appropriate (studies conceptually similar), perform meta-analysis, generally using random-effects model, estimating effect estimate and confidence intervals, statistical and conceptual assessment of heterogeneity, subgroup and sensitivity analyses, and small study effects | •Performing meta-analysis, even if studies are conceptually heterogeneous, findings from which are not applicable to clinical practice and may misinform lay audience •Using statistical measures of heterogeneity to determine whether fixed- or random-effects model should be adopted •Overinterpretation and inappropriate interpretation of subgroup analyses and failure to critically analyze and acknowledge causes for heterogeneity •Overinterpreting findings from small studies due to failure to recognize small study effects |
PICO, Patients, Interventions, Comparators and Outcomes.
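Step 7 above recommends a random-effects model for pooling. As a minimal, stdlib-only sketch of what such pooling involves, the following implements DerSimonian-Laird random-effects meta-analysis of log relative risks: fixed-effect weights give Cochran's Q, from which the between-study variance tau² and the I² heterogeneity statistic are derived, and tau² is then folded into the weights. The study-level inputs are fabricated for illustration; this is not a substitute for established packages.

```python
import math

def random_effects_pool(log_rrs, variances):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    Returns (pooled RR, 95% CI on the RR scale, tau^2, I^2 in percent).
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w_fixed = [1.0 / v for v in variances]
    pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_rrs)) / sum(w_fixed)
    # Cochran's Q measures observed heterogeneity beyond chance
    q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_rrs))
    df = len(log_rrs) - 1
    c = sum(w_fixed) - sum(w * w for w in w_fixed) / sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # Random-effects weights add tau^2 to each within-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(w * y for w, y in zip(w_re, log_rrs)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, tau2, i2

# Illustrative (made-up) study-level log RRs and variances
log_rrs = [math.log(0.70), math.log(0.85), math.log(0.60)]
variances = [0.04, 0.02, 0.09]
rr, ci, tau2, i2 = random_effects_pool(log_rrs, variances)
```

Note the design point the table makes: tau² quantifies between-study heterogeneity and widens the pooled confidence interval, but the choice of a random-effects model should rest on conceptual grounds, not on the statistical heterogeneity measures alone.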
Factors to consider when interpreting credibility of claims of significance from subgroup analyses
| # | Question | How to assess |
|---|---|---|
| 1 | Can chance explain the subgroup differences? | Instead of focusing on each subgroup in isolation, compare summary point estimates and confidence intervals across subgroups and apply a statistical test of interaction: if point estimates are similar and confidence intervals overlap, the apparent difference is likely due to chance. For example, if subgroup 1 has effect estimate (RR) 0.78 with 95% CI 0.40–1.05, and subgroup 2 has effect estimate 0.75 with 95% CI 0.38–0.95, the correct interpretation is that there is NO difference between subgroups, rather than that the effect is significant in subgroup 2 but not in subgroup 1; results may not be statistically significant in subgroup 1 due to small sample size or low event rate, rather than true differences in the efficacy of the intervention across subgroups |
| 2 | Is the subgroup difference consistent across studies and suggested by comparisons within rather than between studies? | Findings from subgroup analyses are credible if observed in multiple individual studies, rather than just at summary level |
| 3 | Was the subgroup difference one of a small number of prespecified hypotheses? | Prespecified, hypothesis-driven subgroup analyses to explain heterogeneity across studies are more plausible than post hoc, data-driven analyses |
| 4 | Is there a strong preexisting biological rationale supporting the apparent subgroup effect? | Subgroup claims are more credible if supported by strong external, biological evidence from preclinical studies or studies of surrogate outcomes |
CI, confidence interval; RR, relative risk.
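Item 1 can be made concrete with a z-test of interaction on the log scale, using the subgroup estimates quoted in the table (RR 0.78, 95% CI 0.40–1.05 vs. RR 0.75, 95% CI 0.38–0.95). This is a hedged sketch: the standard error of each log RR is recovered from its 95% CI, and the difference in log RRs is compared against its combined standard error.

```python
import math

def se_from_ci(lower, upper):
    """Recover the standard error of a log RR from its 95% CI bounds."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.959964)

def interaction_p(rr1, ci1, rr2, ci2):
    """Two-sided p-value for the difference between two subgroup log RRs."""
    se1, se2 = se_from_ci(*ci1), se_from_ci(*ci2)
    z = (math.log(rr1) - math.log(rr2)) / math.hypot(se1, se2)
    # standard normal two-sided p-value via the error function
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = interaction_p(0.78, (0.40, 1.05), 0.75, (0.38, 0.95))
# a large p-value here supports the table's reading: no credible
# difference between the subgroups, despite one CI crossing 1.0
```

The test confirms the point in the table: the non-significance in subgroup 1 reflects imprecision, not a genuine interaction.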
Figure 1. Differences between traditional meta-analyses and network meta-analyses. In traditional pairwise meta-analysis, only head-to-head direct comparisons can be analyzed. In contrast, network meta-analyses involve the simultaneous analysis of direct evidence (from randomized controlled trials (RCTs) directly comparing treatments of interest, indicated by solid arrows) and indirect evidence (from RCTs comparing treatments of interest with a common comparator, indicated by dotted arrows) to calculate a mixed effect estimate as the weighted average of the two.
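The "weighted average" in the figure caption can be sketched numerically. Assuming treatments A and B with a common comparator C, the indirect A-vs-B contrast is the difference of the A-vs-C and B-vs-C log effects (with variances adding), and the mixed estimate is the inverse-variance weighted average of direct and indirect evidence. All numeric inputs below are hypothetical.

```python
def indirect(log_ac, var_ac, log_bc, var_bc):
    """Indirect A-vs-B contrast through common comparator C (log scale).

    Variances add, so indirect evidence is less precise than either trial.
    """
    return log_ac - log_bc, var_ac + var_bc

def mixed(log_direct, var_direct, log_indirect, var_indirect):
    """Inverse-variance weighted average of direct and indirect evidence."""
    w_d, w_i = 1.0 / var_direct, 1.0 / var_indirect
    est = (w_d * log_direct + w_i * log_indirect) / (w_d + w_i)
    return est, 1.0 / (w_d + w_i)

# Hypothetical log effects and variances: A vs C, B vs C, and a direct A vs B
log_ind, var_ind = indirect(-0.4, 0.03, -0.1, 0.04)
est, var = mixed(-0.2, 0.05, log_ind, var_ind)
```

Two properties follow directly from the weighting: the mixed estimate always lies between the direct and indirect estimates, and its variance is smaller than either source alone, which is the statistical appeal of network meta-analysis when the consistency assumption holds.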