OBJECTIVE: To describe a systematic approach for identifying, reporting, and synthesizing information to allow consistent and transparent consideration of the applicability of the evidence in a systematic review according to the Population, Intervention, Comparator, Outcome, and Setting domains.

STUDY DESIGN AND SETTING: To be useful to decision makers, comparative effectiveness reviews need to consider whether the available evidence is applicable to specific clinical or policy questions. The authors reviewed the literature and developed guidance for the Effective Health Care program.

RESULTS: Because applicability depends on the specific questions and needs of the users, it is difficult to devise a valid uniform scale for rating the overall applicability of individual studies or of a body of evidence. We recommend consulting stakeholders to identify the factors most relevant to applicability for their decisions. Applicability should be considered separately for benefits and harms. Observational studies can help determine whether trial populations and interventions are representative of "real world" practice. Reviewers should describe differences between the available evidence and the ideally applicable evidence for the question being asked, and offer a qualitative judgment about the importance and potential effect of those differences.

CONCLUSION: Careful consideration of applicability may improve the usefulness of systematic reviews in informing practice and policy. Published by Elsevier Inc.
Authors: Evelyn P Whitlock; Michelle Eder; Jamie H Thompson; Daniel E Jonas; Corinne V Evans; Janelle M Guirguis-Blake; Jennifer S Lin. Journal: Syst Rev. Date: 2017-03-02
Authors: Kathryn M Stadeli; Mariam N Hantouli; Elena G Brewer; Elizabeth Austin; Kemi M Doll; Danielle C Lavallee; Giana H Davidson. Journal: Am J Surg. Date: 2019-07-29
Authors: Salmaan Kanji; Dugald Seely; Fatemeh Yazdi; Jennifer Tetzlaff; Kavita Singh; Alexander Tsertsvadze; Andrea C Tricco; Margaret E Sears; Teik C Ooi; Michele A Turek; Becky Skidmore; Mohammed T Ansari. Journal: Syst Rev. Date: 2012-05-31
Authors: Laura M Gaudet; Kavita Singh; Laura Weeks; Becky Skidmore; Alexander Tsertsvadze; Mohammed T Ansari. Journal: PLoS One. Date: 2012-02-21