
Identifying Research Gaps and Prioritizing Psychological Health Evidence Synthesis Needs.

Susanne Hempel1,2, Kristie Gore3, Bradley Belsher4.   

Abstract

BACKGROUND: Evidence synthesis is key in promoting evidence-based health care, but it is resource-intense. Methods are needed to identify and prioritize evidence synthesis needs within health care systems. We describe a collaboration between an agency charged with facilitating the implementation of evidence-based research and practices across the Military Health System and a research center specializing in evidence synthesis.
METHODS: Scoping searches targeted 15 sources, including the Veterans Affairs/Department of Defense Guidelines and National Defense Authorization Acts. We screened for evidence gaps in psychological health management approaches relevant to the target population. We translated gaps into potential topics for evidence maps and/or systematic reviews. Gaps amenable to an evidence synthesis format provided the basis for stakeholder input. Stakeholders rated topics for their potential to inform psychological health care in the military health system. Feasibility scans determined whether topics were ready to be pursued, that is, whether sufficient literature exists and duplicative efforts would be avoided.
RESULTS: We identified 58 intervention, 9 diagnostic, 12 outcome, 19 population, and 24 health services evidence synthesis gaps. Areas included: posttraumatic stress disorder (PTSD) (19), suicide prevention (14), depression (9), bipolar disorder (9), substance use (24), traumatic brain injury (20), anxiety (1), and cross-cutting (14) synthesis topics. Stakeholder input helped prioritize 19 potential PTSD topics and 22 other psychological health topics. To date, 46 topics have undergone feasibility scans. We document lessons learned across clinical topics and research methods.
CONCLUSION: We describe a transparent and structured approach to evidence synthesis topic selection for a health care system using scoping searches, translation into evidence synthesis format, stakeholder input, and feasibility scans.

Year:  2019        PMID: 31517797      PMCID: PMC6750194          DOI: 10.1097/MLR.0000000000001175

Source DB:  PubMed          Journal:  Med Care        ISSN: 0025-7079            Impact factor:   2.983


Evidence synthesis is an essential step in promoting evidence-based medicine across health systems; it facilitates the translation of research to practice. A systematic review of the research literature on focused review questions is a key evidence synthesis approach that can inform practice and policy decisions.1 However, systematic reviews are resource-intense undertakings. In a resource-constrained environment, before an evidence review is commissioned, both the need for and the feasibility of the review must be established.

Establishing the need for a review can be achieved through a research gap analysis or needs assessment. Identification of a gap serves as the first step in developing a new research question.2 Research gaps in health care do not necessarily align directly with research needs: a research gap is critical only where the knowledge gap substantially inhibits the decision-making ability of stakeholders such as patients, health care providers, and policymakers, thus creating a need to fill it. Evidence synthesis enables the assessment of whether a research gap persists or whether adequate evidence exists to close the knowledge gap.

Furthermore, a gap analysis often identifies multiple competing gaps that are worth pursuing. Given the resource requirements of formal evidence reviews, topic prioritization is needed to allocate resources to the areas deemed most relevant for the health system. Regardless of the topic, the prioritization process is likely to be stakeholder-dependent: priorities for evidence synthesis will vary with the mission of the health care system and the local needs of its stakeholders. A process of stakeholder input is an important mechanism to ensure that the evidence review will meet local needs as well as to identify a receptive audience for the review findings. 
In addition to establishing the need for an evidence review, the feasibility of conducting the review must also be established. In conducting primary research, feasibility is often mainly a question of available resources. For evidence reviews, the resources, the availability of primary research, and the presence of existing evidence reviews on the topic all need to be explored. Not all topics are amenable to a systematic review, which focuses on a specific range of research questions and relies heavily on published literature. Furthermore, an evidence review synthesizes the existing evidence; hence, if there is insufficient evidence in the primary research literature, an evidence review is not useful. Establishing a lack of evidence is a worthwhile exercise since it identifies the need for further research; however, most health care delivery organizations will be keen to prioritize areas that can be synthesized, that is, to invest in synthesizing a body of research sizable enough to derive meaningful results. The presence of existing evidence syntheses is also an important consideration, in particular to determine the incremental value of a new review. Although primary research benefits profoundly from replication, secondary literature, in particular in the context of existing high-quality reviews and/or limited evidence, may not add anything to our knowledge base.3

This work describes a structured and transparent approach to identifying and prioritizing areas of psychological health that are important and that can feasibly be addressed by a synthesis of the research literature. It describes a collaboration between an agency charged with facilitating the implementation of evidence-based research and practices across the Military Health System (MHS) and a research center specializing in evidence synthesis.

METHODS

This project is anchored in the relationship between the Defense Health Agency Psychological Health Center of Excellence (PHCoE) and the RAND Corporation’s National Defense Research Institute (NDRI), one of the Federally Funded Research and Development Centers (FFRDCs) dedicated to providing long-term analytic support to the Defense Health Agency. PHCoE, an agency charged with facilitating the implementation of evidence-based research and practices across the Military Health System, funded a series of systematic reviews and evidence maps synthesizing psychological research. The project draws on the expertise of the Southern California Evidence-based Practice Center (EPC) located at RAND, a center specializing in evidence synthesis. The project included scoping searches, stakeholder input, and feasibility scans. The project is ongoing; this manuscript describes methods and results from June 2016 to September 2018. The project was assessed by our Human Subject Protection staff and determined to be exempt (July 7, 2016, ID ND3621; August 6, 2017, ID ND3714). The following sections describe the process; Figure 1 provides a visual overview.
FIGURE 1

Process of identifying research gaps and prioritizing psychological health evidence synthesis needs.

Scoping Searches to Identify Evidence Synthesis Gaps

Scoping searches targeted pertinent sources for evidence gaps. The searches focused on clinical conditions and interventions relevant to psychological health, including biological psychiatry, health care services research, and mental health comorbidity. Proposed topics and study populations were not limited by deployment status or deployment eligibility, but topic selection considered the prevalence of clinical conditions among Department of Defense active duty military personnel managed by the MHS. The scoping searches excluded evidence gaps addressing children and adolescents and clinical conditions exclusively relevant to veterans managed by the Department of Veterans Affairs.

Scoping Search Sources

We screened 15 sources in total for evidence synthesis gaps. Veterans Affairs/Department of Defense clinical practice guidelines were a key source for documented evidence gaps.4–9 Recently updated guidelines were screened only for evidence gaps that indicated a lack of synthesis of existing research or for content areas that were outside the scope of the guideline (guidelines rely primarily on published systematic reviews and can review only a limited number of topic areas). We consulted the current report of the Committee on Armed Services of the House of Representatives regarding the proposed National Defense Authorization Act (NDAA) and the report for the upcoming fiscal year,10,11 specifically screening for research priorities identified for psychological health. We also screened the published National Research Action Plan designed to improve access to mental health services for veterans, service members, and military families.12 We conducted a literature search for publications dedicated to identifying evidence gaps and research needs for psychological health and traumatic brain injury. We searched for publications published from 2000 through 2016 in the most relevant databases, PubMed and PsycINFO, that had the words research gap, knowledge gap, or research priority in the title and addressed psychological health (Supplemental Digital Content, http://links.lww.com/MLR/B836). The search retrieved 203 citations. Six publications were considered potentially relevant and obtained as full text; 1 source was subsequently excluded because its authors had conducted their literature search within the previous 3 years and a new review was deemed unlikely to identify substantially more eligible studies.13–19 We also used an analysis of the utilization of complementary and alternative medicine in the MHS20 to identify interventions that were popular with patients but for which potentially little evidence-based guidance exists. 
We focused our scoping efforts on complementary approaches such as stress management, hypnotherapy, massage, biofeedback, chiropractic, and music therapy to align with the funding scope. In the next step, we reviewed the existing clinical practice guidelines to determine whether clinicians have guidance regarding these approaches. The Department of Defense Health Related Behaviors Survey of Active Duty Military Personnel21 is an anonymous survey conducted every 3 years on service members with the aim of identifying interventions or health behaviors patients currently use. To address evidence gaps most relevant to patients, we screened the survey results, and then matched the more prevalent needs identified with guidance provided in relevant clinical practice guidelines. We consulted the priority review list assembled by the Cochrane group to identify research needs for systematic reviews. We screened the 2015–2017 lists for mental health topics that are open to new authors, that is, those that do not have an author team currently dedicated to the topic. None of the currently available topics appeared relevant to psychological health and no topics were added to the table. We also consulted with ongoing federally funded projects to identify evidence gaps that were beyond the scope of the other projects. In addition, we screened a list of psychological health research priorities developed at PHCoE for knowledge gaps that could be addressed in systematic reviews or evidence maps. Finally, we screened resources available on MHS web sites for evidence gaps.

Gap Analysis Procedure and Approach to Translating Gaps into Evidence Review Format

We first screened these sources for knowledge gaps, regardless of whether the gap was amenable to evidence review. However, we did not include research gaps where the source explicitly indicated that the knowledge gap was due to a lack of primary research. We distinguished 5 evidence gap domains and abstracted gaps accordingly: interventions, diagnostic questions, treatment outcomes, specific populations, and health services research/health care delivery models. We then translated the evidence gaps into potential topics for evidence maps and/or systematic reviews. Evidence maps provide a broad overview of large research areas using data visualizations to document the presence and absence of evidence.22 Similar to scoping reviews, evidence maps do not necessarily address the effects of interventions and can be broader in scope. Systematic reviews are a standardized research methodology designed to answer clinical and policy questions from published research, using meta-analysis to estimate effect sizes and formal grading of the quality of evidence. We considered systematic reviews for effectiveness and comparative effectiveness questions regarding specific intervention and diagnostic approaches.

Stakeholder Input

Evidence synthesis gaps that were determined to be amenable to systematic review or evidence map methods provided the basis for stakeholder input. Although all topics were reviewed by project personnel, we also identified psychological health service leads for the Army, Navy, Air Force, and Marines within the Defense Health Agency as key stakeholders to be included in the topic selection process. To date, 2 rounds of formal stakeholder ratings have been undertaken. The first round focused on the need for systematic reviews covering issues related to posttraumatic stress disorder (PTSD). The second round focused on other potential psychological health topics determined to be compatible with the MHS mission; represented clinical areas were suicide prevention and aftercare, depressive disorders, anxiety disorders, traumatic brain injury, substance use disorders including alcohol and opioid use disorder, and chronic pain. All of the potential topics addressed either the effects of clinical interventions or health services research questions. Stakeholders rated the topics based on their potential to inform psychological health care in the military health system, using a 5-point rating scale ranging from “No impact” to “Very high impact.” In addition, stakeholders could suggest additional topics for evidence review. We analyzed the mean, the mode, and the number of individual stakeholder ratings indicating “high impact” for each topic.
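To illustrate how such ratings might be summarized, the following is a minimal Python sketch. The topic names, individual ratings, and the threshold treating scores of 3 or above as "high impact" are invented for illustration; they are not the project's actual data or cutoffs.

```python
from statistics import mean, mode, pstdev

# Hypothetical stakeholder ratings on a 0-4 impact scale
# (0 = "No impact" ... 4 = "Very high impact"); topics are invented.
ratings = {
    "PTSD treatment dosing and sequencing": [4, 3, 3, 4, 3],
    "Telehealth for depression follow-up":  [2, 1, 3, 2],
}

HIGH_IMPACT = 3  # assumed threshold: ratings at or above this count as "high impact"

def summarize(scores):
    """Mean, mode, spread, and count of high-impact ratings for one topic."""
    return {
        "mean": round(mean(scores), 2),
        "mode": mode(scores),
        "sd": round(pstdev(scores), 2),
        "n_high_impact": sum(s >= HIGH_IMPACT for s in scores),
    }

for topic, scores in ratings.items():
    print(topic, summarize(scores))
```

Reporting the mode and the count of high-impact votes alongside the mean, as the project did, guards against a single enthusiastic rater inflating a topic's apparent priority.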

Feasibility Scans

Feasibility scans provided an estimate of the volume and type of existing research literature, which is informative for 3 reasons. First, this process determined whether sufficient research was available to inform a systematic review or an evidence map. Second, feasibility scans can provide an estimate of the resources required for an evidence review by establishing whether only a small literature base or a large number of research studies exists. Finally, feasibility scans identify existing high-profile evidence synthesis reports that could make a new synthesis obsolete. Feasibility scans for potential evidence maps concentrated on the size of the body of research that would need to be screened and on the synthesis questions that could inform how this research should be organized in the evidence map. Feasibility scans for systematic reviews aimed to determine the number of relevant studies, existing high-quality reviews, and the number of studies not covered in existing reviews. Randomized controlled trials (RCTs), that is, strong research evidence that could inform clinical practice guideline committees in recommending for or against interventions, were the focus of most systematic review topics. An experienced systematic reviewer developed preliminary search strategies in PubMed, a well-maintained biomedical literature database, and applied database search filters (eg, for RCTs or systematic reviews) in preliminary literature searches to estimate the research volume for each topic. Scans also identified any existing high-quality evidence review published by agencies specializing in unbiased evidence syntheses, such as the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center program, the Cochrane Collaboration, the Campbell Collaboration, the Evidence Synthesis Program of the Department of Veterans Affairs, and the Federal Health Technology Assessment program. 
We used the databases PubMed and PubMed Health to identify reports, and we appraised the scope, relevance, and publication year of the existing high-profile evidence reviews. The research base for psychological health develops rapidly, and evidence syntheses need to ensure that current clinical policies reflect the best available evidence. When determining the feasibility and appropriateness of a new systematic review, we took into account the results of the original review and any new studies on the same topic published after that review.
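The preliminary search strategies described above can be sketched as query assembly for PubMed. The topic terms, the particular combination of field tags, and the date bound below are illustrative assumptions, not the project's actual search strategies; they only show how publication-type filters narrow a topic query to RCTs or systematic reviews.

```python
# Sketch of assembling PubMed query strings for a feasibility scan,
# assuming one wants to estimate RCT volume and spot existing reviews.
RCT_FILTER = "randomized controlled trial[Publication Type]"
SR_FILTER = "systematic review[Publication Type]"

def build_query(topic_terms, filter_clause, since_year=None):
    """OR the topic terms in title/abstract, AND a publication-type filter,
    and optionally restrict to publications from since_year onward."""
    topic = " OR ".join(f'"{t}"[Title/Abstract]' for t in topic_terms)
    query = f"({topic}) AND {filter_clause}"
    if since_year is not None:
        # Open-ended PubMed date range from since_year onward
        query += (f' AND ("{since_year}"[Date - Publication]'
                  f' : "3000"[Date - Publication])')
    return query

# Hypothetical topic: RCTs on PTSD published after an existing 2015 review
print(build_query(["posttraumatic stress disorder", "PTSD"],
                  RCT_FILTER, since_year=2015))
```

Running the same topic terms once with the RCT filter and once with the systematic review filter yields the two counts the feasibility scans relied on: the size of the primary evidence base and the presence of prior syntheses.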

RESULTS

The following results are described: the results of the scoping searches and gap analysis, the translation of gaps into evidence synthesis format, the stakeholder input ratings, and the feasibility scans.

Scoping Searches and Gap Analysis Results

The scoping search and gap analysis identified a large number of evidence gaps, as documented in the gap analysis table in the Appendix (Supplemental Digital Content, http://links.lww.com/MLR/B836). Across sources, we identified 58 intervention, 9 diagnostic, 12 outcome, 19 population, and 24 health services evidence synthesis gaps. The evidence gaps varied considerably in scope and specificity, ranging, for example, from knowledge gaps in recommendations for medications for specific clinical indications or treatment combinations4 to gaps in supporting caregivers.11 The largest group of evidence gaps was documented for interventions. This included open questions for individual interventions (eg, ketamine)12 as well as the best format and modality within an intervention domain (eg, use of telehealth).6 Diagnostic evidence gaps included open questions regarding predictive risk factors that could be used in suicide prevention8 and the need for personalized treatments.12 Outcome evidence gaps often pointed to outcomes that are rarely measured, such as cost-effectiveness, as well as to the lack of knowledge on hypothesized effects, such as increased access or decreased stigma associated with technology-based modalities.23 Population evidence gaps addressed specific patient populations such as complex patients5 and family members of service members.11 The health services evidence gaps addressed care support through technology (eg, videoconferencing23) as well as treatment coordination within health care organizations, such as how treatment for substance use disorder should be coordinated with treatment for co-occurring conditions.4

Potential Evidence Synthesis Topics

The gaps were translated into potential evidence map or systematic review topics. This translation process took into account that some topics cannot easily be operationalized as an evidence review. For example, knowledge gaps regarding prevalence or utilization estimates could not be addressed because of the lack of publicly available data. In addition, we noted that some review questions may require an exhaustive search and a full-text review of the literature because the information cannot be searched for directly, and hence were outside the budget constraints. The clinical areas and numbers of topics were: PTSD (n=19), suicide prevention (n=14), depression (n=9), bipolar disorder (n=9), substance use (n=24), traumatic brain injury (n=20), anxiety (n=1), and cross-cutting (n=14) evidence synthesis topics. All topic areas are documented in the Appendix (Supplemental Digital Content, http://links.lww.com/MLR/B836).

Stakeholder Input Results

Stakeholders rated 19 PTSD-related research gaps and suggested an additional 5 topics for evidence review, addressing both prevention and treatment topics. Mean ratings for topics ranged from 1.75 to 3.5 on a scale from 0 (no impact potential) to 4 (high potential for impact). Thus, although identified as research gaps, the potential of an evidence review to have an important impact on the MHS varied across topics. Only 2 topics received a mean score of ≥3 (high potential): predictors of PTSD treatment retention and response, and PTSD treatment dosing, duration, and sequencing. In addition, raters’ opinions varied considerably on some topics, with SDs ranging from 0.5 to 1.5. The stakeholders rated 22 other psychological health topics, suggested 2 additional topics for evidence review, and revised 2 original topics, indicating which aspect of the research gap would be most important to address. Mean scores for the rated topics ranged from 0.25 to 3.75, with SDs for each item ranging from 0 to 1.4. Six topics received an average score of ≥3, primarily focused on suicide prevention, substance use disorders, and telehealth interventions. Opinions on other topics varied widely across service leads.

Feasibility Scan Results

Evidence review topics that stakeholders rated as having some potential for impact within the MHS (rating cutoff score >1) were selected for formal feasibility scans. To date, 46 topics have been subjected to feasibility scans. Of these, 11 were evaluated as potential evidence maps, 17 as systematic reviews, and 18 as either format at the time of the topic suggestion. The results of the feasibility scans are documented in the table in the Appendix (Supplemental Digital Content, http://links.lww.com/MLR/B836). The feasibility scan result table shows the topic, topic modification suggestions based on the literature reviews, and the mean stakeholder impact rating. The table also shows the search strategy employed to determine feasibility; the estimated number of RCTs in the database PubMed; the number and citations of Cochrane, Evidence Synthesis Program, and Health Technology Assessment reviews, that is, high-quality syntheses; and the estimated number of RCTs published after the latest existing systematic review on the topic. Each potential evidence review topic was discussed in a narrative review report that documented the reasons for determining the topic to be feasible or not feasible. Reasons for determining a topic to be not feasible included the lack of primary research for an evidence map or systematic review, the presence of an ongoing research project that might influence the evidence review scope, and the presence of an existing high-quality evidence review. Some topics were judged feasible upon further modification; this included topics that were partially addressed in existing reviews or topics where the review scope would need to be substantially changed to result in a high-impact evidence review. 
Topics judged feasible met all outlined criteria, that is, the topic could be addressed in a systematic review or evidence map, there were sufficient studies to justify a review, and the review would not merely replicate an existing review but would make a novel contribution to the evidence base.
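The literature-based criteria above can be read as a simple decision rule. The sketch below encodes one hypothetical version covering the study-count and replication criteria (the first criterion, whether a topic is amenable to a review format at all, is a judgment call that precedes this step); the minimum-study threshold is invented for illustration.

```python
def assess_feasibility(n_studies, has_recent_high_quality_review,
                       n_studies_since_review, min_studies=5):
    """Hypothetical decision rule mirroring the feasibility criteria:
    enough primary studies, and no mere replication of an existing review."""
    if n_studies < min_studies:
        return "not feasible: insufficient primary research"
    if has_recent_high_quality_review:
        if n_studies_since_review < min_studies:
            return "not feasible: would largely replicate an existing high-quality review"
        return "feasible with modification: update or extend the existing review"
    return "feasible"

print(assess_feasibility(30, False, 0))   # no prior review: new review justified
print(assess_feasibility(30, True, 2))    # recent review still covers the field
print(assess_feasibility(30, True, 12))   # enough new trials: update warranted
```

In the project itself, these judgments were made narratively per topic rather than by a fixed threshold; the point of the sketch is only that the three outcomes (feasible, not feasible, feasible with modification) follow from the study counts and the presence of prior syntheses.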

DISCUSSION

The project describes a transparent and structured approach to identifying and prioritizing evidence synthesis topics using scoping searches, stakeholder input, and feasibility scans. The work demonstrates an approach to establishing and evaluating evidence synthesis gaps. It has been repeatedly noted that research gap analyses often lack transparency, with little information on analytic criteria and selection processes.24,25 In addition, research need identification may not be informed by systematic literature searches documenting gaps but may rely primarily on often unstructured content expert input.26,27 Evidence synthesis needs assessment is a new field that to date has received very little attention. However, as health care delivery organizations move toward providing evidence-based treatments and the existing research continues to grow, both evidence reviews and evidence review gap identification and prioritization will become more prominent.

One of the lessons learned is that the topic selection process added to the timeline and required additional resources. The scoping searches, translation into evidence synthesis topics, stakeholder input, and feasibility scans each added time, and the project required a longer period of performance compared with previous evidence synthesis projects. The project components were undertaken sequentially and had to be divided into topic areas; for example, it was deemed too burdensome to request stakeholder input on all 122 topics identified as potential evidence review topics. Furthermore, we needed the flexibility to respond to unanticipated congressional requests for evidence reviews. However, our process of identifying synthesis gaps, checking whether topics can be translated into syntheses, obtaining stakeholder input to ensure that the gaps are meaningful and need filling, and estimating feasibility while avoiding duplicative efforts has merit considering the alternative. 
More targeted funding of evidence syntheses ensures relevance, and while resources need to be spent on the steps we describe, these are small investments compared with the resources required for a full systematic review or evidence map. The documented stakeholder engagement approach was useful for many reasons, not just for ensuring that the selection of evidence synthesis topics was transparent and structured. The stakeholders were alerted to the evidence synthesis project and provided input for further topic refinement. This process also supported the identification of a ‘customer’ for the completed review, that is, a stakeholder who is keen to use the evidence review, likely to act on its results, and ready to translate the findings into clinical practice. The research-to-practice gap is substantial, and the challenges of translating research to practice are widely documented.28–30 Inefficient research translation delays the delivery of proven clinical practices and can lead to wasteful research and practice investments.

The project had several strengths and limitations. The project describes a successful, transparent, and structured process to engage stakeholders and identify important and feasible evidence review topics. However, the approach was developed to address the specific needs of the military psychological health care system, and therefore the process may not be generalizable to all other health care delivery organizations. Source selection was tailored to psychological health synthesis needs, and process modifications (ie, the sources used to identify gaps) are needed for organizations aiming to establish a similar procedure. To keep the approach manageable, feasibility scans used only 1 database, and we developed only preliminary, not comprehensive, searches. Hence, some uncertainty about the true evidence base for the different topics remained; feasibility scans can only estimate the available research. 
Furthermore, the selected stakeholders were limited to a small number of service leads; a broader panel of stakeholders would likely have provided additional input. In addition, all evaluations of the literature relied on the expertise of experienced systematic reviewers; any replication of the process will require staff with expertise in evidence review methods. Finally, as outlined, all described processes added to the project timeline, compounding the challenges of providing timely systematic reviews for practitioners and policymakers.31,32

We have described a transparent and structured approach to identify and prioritize areas of evidence synthesis for a health care system. Scoping searches and feasibility scans identified gaps in the literature that would benefit from evidence review. Stakeholder input helped ensure the relevance of review topics and created a receptive audience for targeted evidence synthesis. The approach aims to advance the field of evidence synthesis needs assessment.

Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's website, www.lww-medicalcare.com.
References (17 in total)

1.  Research gaps on methadone harms and comparative harms: findings from a review of the evidence for an American Pain Society and College on Problems of Drug Dependence clinical practice guideline.

Authors:  Melissa B Weimer; Roger Chou
Journal:  J Pain       Date:  2014-04       Impact factor: 5.820

2.  AHRQ series paper 3: identifying, selecting, and refining topics for comparative effectiveness systematic reviews: AHRQ and the effective health-care program.

Authors:  Evelyn P Whitlock; Sarah A Lopez; Stephanie Chang; Mark Helfand; Michelle Eder; Nicole Floyd
Journal:  J Clin Epidemiol       Date:  2009-06-21       Impact factor: 6.437

3.  Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group.

Authors:  L A Bero; R Grilli; J M Grimshaw; E Harvey; A D Oxman; M A Thomson
Journal:  BMJ       Date:  1998-08-15

4.  Development and pilot test of a process to identify research needs from a systematic review.

Authors:  Ian J Saldanha; Lisa M Wilson; Wendy L Bennett; Wanda K Nicholson; Karen A Robinson
Journal:  J Clin Epidemiol       Date:  2012-09-18       Impact factor: 6.437

5.  The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses.

Authors:  John P A Ioannidis
Journal:  Milbank Q       Date:  2016-09       Impact factor: 4.911

6.  Clinical policies and the quality of clinical practice.

Authors:  D M Eddy
Journal:  N Engl J Med       Date:  1982-08-05       Impact factor: 91.245

7.  Expediting systematic reviews: methods and implications of rapid reviews.

Authors:  Rebecca Ganann; Donna Ciliska; Helen Thomas
Journal:  Implement Sci       Date:  2010-07-19       Impact factor: 7.327

8.  The answer is 17 years, what is the question: understanding time lags in translational research.

Authors:  Zoë Slote Morris; Steven Wooding; Jonathan Grant
Journal:  J R Soc Med       Date:  2011-12       Impact factor: 5.344

9.  Incorporating evidence review into quality improvement: meeting the needs of innovators.

Authors:  Margie Sherwood Danz; Susanne Hempel; Yee-Wei Lim; Roberta Shanman; Aneesa Motala; Susan Stockdale; Paul Shekelle; Lisa Rubenstein
Journal:  BMJ Qual Saf       Date:  2013-07-05       Impact factor: 7.035

10.  What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products.

Authors:  Isomi M Miake-Lye; Susanne Hempel; Roberta Shanman; Paul G Shekelle
Journal:  Syst Rev       Date:  2016-02-10
