Literature DB >> 33317593

Cost-effectiveness of the Adaptive Implementation of Effective Programs Trial (ADEPT): approaches to adopting implementation strategies.

Andria B Eisman1, David W Hutton2, Lisa A Prosser2,3, Shawna N Smith2,4, Amy M Kilbourne5,6.   

Abstract

BACKGROUND: Theory-based methods to support the uptake of evidence-based practices (EBPs) are critical to improving mental health outcomes. Implementation strategy costs can be substantial, and few have been rigorously evaluated. The purpose of this study is to conduct a cost-effectiveness analysis to identify the most cost-effective approach to deploying implementation strategies to enhance the uptake of Life Goals, a mental health EBP.
METHODS: We used data from a previously conducted randomized trial to compare the cost-effectiveness of Replicating Effective Programs (REP) combined with external and/or internal facilitation among sites non-responsive to REP. REP is a low-level strategy that includes EBP packaging, training, and technical assistance. External facilitation (EF) involves external expert support, and internal facilitation (IF) augments EF with protected time for internal staff to support EBP implementation. We developed a decision tree to assess 1-year costs and outcomes for four implementation strategies: (1) REP only, (2) REP+EF, (3) REP+EF add IF if needed, (4) REP+EF/IF. The analysis used a 1-year time horizon and assumed a health payer perspective. Our outcome was quality-adjusted life years (QALYs). The economic outcome was the incremental cost-effectiveness ratio (ICER). We conducted deterministic and probabilistic sensitivity analysis (PSA).
RESULTS: Our results indicate that REP+EF, add IF is the most cost-effective option, with an ICER of $593/QALY. The REP+EF/IF and REP+EF only conditions are dominated (i.e., more expensive and less effective than comparators). One-way sensitivity analyses indicate that results are sensitive to the utilities for REP+EF and REP+EF, add IF. The PSA results indicate that REP+EF, add IF is the optimal strategy in 30% of iterations at a threshold of $100,000/QALY.
CONCLUSIONS: Our results suggest that the most cost-effective implementation support begins with a less intensive, less costly strategy and increases intensity as needed to enhance EBP uptake. Using this approach, implementation support resources can be judiciously allocated to those clinics that would most benefit. Our results were not robust to changes in the utility measure. Research is needed that incorporates robust and relevant utilities in implementation studies to determine the most cost-effective strategies. This study advances economic evaluation of implementation by assessing costs and utilities across multiple implementation strategy combinations. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT02151331, registered 05/30/2014.

Entities:  

Keywords:  Cost-effectiveness analysis; Costs and cost analysis; Implementation science; Mental health services, community

Mesh:

Year:  2020        PMID: 33317593      PMCID: PMC7734829          DOI: 10.1186/s13012-020-01069-w

Source DB:  PubMed          Journal:  Implement Sci        ISSN: 1748-5908            Impact factor:   7.327


Researchers to date have focused primarily on quantifying intervention costs; few have focused on implementation strategy costs and cost-effectiveness. This research focuses on advancing approaches for evaluating cost and cost-effectiveness of implementation strategies, which are provider tools/strategies to promote intervention uptake and have been understudied. This study is one of the first to conduct a comparative economic analysis of an adaptive implementation strategy trial, to provide useful, accessible information for communities to make well-informed decisions about resourcing implementation investments.

Background

Evidence-based treatments for mental health conditions, including depression, are essential to improving the public’s health [1]. Mental health conditions frequently co-occur with substance use disorders and other conditions, triggering a cascade of short- and long-term consequences [2]. Mental health conditions also take a significant financial toll: researchers estimated that the annual earnings loss attributable to serious mental illness in 2008 was $193.2 billion [3]. Collaborative care models (CCMs) have demonstrated effectiveness in improving outcomes among patients with mental disorders; CCMs such as Life Goals are designed to improve medical and psychiatric outcomes for persons with mood disorders through personal goal-setting aligned with wellness and symptom-coping strategies and supported through collaborative care [4-6]. Life Goals is an evidence-based CCM that focuses on three components recognized as central to effective CCMs: patient self-management, care management, and provider decision support [7, 8]. Several randomized trials have shown Life Goals to be effective in improving mental and physical health outcomes for patients with unipolar and bipolar depression [4-6, 9]. The Life Goals self-management component comprises six psychosocial sessions for patients, delivered in either individual or group format. While all Life Goals patients complete core introduction and conclusion modules, the four intermediary sessions can be chosen by patients and providers from among several mental health and wellness subjects, including depression, mania, physical activity, or substance abuse. Life Goals also provides manualized support for care management and provider decision support, including templates for tracking patient progress and guides to common medications for patients with unipolar or bipolar depression.
Most individuals suffering from depression and other mental health conditions are not receiving evidence-based practices (EBPs) such as Life Goals in community settings, resulting in poor and costly health outcomes and millions of research dollars wasted when EBPs fail to reach those most in need [10-12]. Researchers increasingly recognize that EBPs must be complemented by effective implementation strategies (i.e., implementation interventions) to achieve desired public health outcomes [13]. Replicating Effective Programs (REP) is an implementation strategy focused on maximizing flexibility and fidelity in EBP delivery [14]. REP, based on the CDC’s research-to-practice framework [15], is guided by Social Learning Theory [16] and Diffusion of Innovations Theory [17]. Standard REP includes three primary components: program packaging, provider training, and technical assistance. Standard REP is a low-intensity, minimal-cost strategy akin to standard implementation for many evidence-based programs and practices; it has improved uptake of brief HIV-focused interventions but has been less successful with more complex behavioral interventions [18]. Researchers have therefore developed enhanced REP for more complex clinical behavioral interventions, which adds customized program packaging and training as well as implementation facilitation [19]. Implementation facilitation (i.e., facilitation) is a promising implementation strategy from the integrated Promoting Action on Research Implementation in Health Services (iPARIHS) framework that provides ongoing, individualized assistance for program delivery and can help enhance uptake of EBPs such as Life Goals in community clinics [19, 20]. Facilitation applies principles of interactive problem solving with practice-based knowledge to support providers as they engage in program delivery [21, 22].
Both individuals within the organization (internal facilitators, IFs) and outside it (external facilitators, EFs) can provide ongoing support for EBP implementation [19]. External facilitators provide expertise, active guidance, and support for intervention delivery. Internal facilitators work in tandem with EFs to support providers in program delivery and communicate with organizational leadership and the external facilitator. The costs associated with implementation strategies, especially multicomponent strategies such as REP+facilitation, can be substantial. Cost is a key consideration from an organizational or system perspective when implementing new innovations [11]. Understanding the resources needed to achieve desired behavioral outcomes (e.g., improved mental health) is essential to implementing and sustaining EBPs in communities [23]. Most economic evaluation of implementation, however, has focused on intervention costs rather than the costs of the implementation strategies required to deploy and sustain interventions [24]. Economic evaluation of implementation refers to the systematic evaluation of what outcomes a specific implementation strategy or set of competing strategies achieves and the costs of achieving them [25]. Economic evaluation provides key information for decision makers regarding implementation strategies to support and sustain EBP delivery. Organizations benefit from evidence that supports (or refutes) investment in specific strategies as an efficient use of resources, and this can help prioritize implementation efforts [11, 24, 26]. Despite this need for practical information on whether deploying an implementation strategy is worth the added cost (versus standard implementation or an alternative strategy), less than 10% of implementation studies include cost information, and even fewer conduct comparative economic analyses [25, 27].
Thus, additional research is needed to advance the economic evaluation of implementation, as this will be instrumental in demonstrating whether investment in implementation strategies is worth the additional cost [28]. Many types of cost evaluation exist, but one well suited to implementation science is cost-effectiveness analysis (CEA). CEA assesses whether the incremental benefits of one strategy versus another are sufficient to justify the additional costs and has been used to support mental health treatment-focused EBPs in clinical settings [29]. CEA can inform decisions about resource allocation for program selection and delivery [30]. The objective of this study is to estimate costs and conduct a CEA as part of an adaptive implementation trial comparing different implementation strategies. The goal of the Adaptive Implementation of Effective Programs Trial (ADEPT) was to use a sequential multiple assignment randomized trial (SMART) design to compare the effectiveness of different augmentations to REP, using EF or a combination of EF + IF, on mental health outcomes among patients diagnosed with depression or bipolar disorder in community-based practices; the ADEPT trial is described in detail elsewhere [19]. A secondary ADEPT aim was to assess the costs of different scenarios for combining REP+facilitation (see Fig. 1 and Fig. 4 in the Appendix) to identify the most cost-effective implementation strategy. We compare four implementation strategy combinations and evaluate their relative cost-effectiveness in achieving program goals: Strategy 0: REP only, Strategy 1: REP+EF, Strategy 2: REP+EF, add IF if needed, and Strategy 3: REP+EF/IF. Clinics responding to their respective implementation strategy (e.g., > 10 patients receiving Life Goals) discontinued the implementation strategy.
Among those that did not respond during the second phase of the trial, for the final phase Strategy 1 continued with EF, Strategy 2 added IF, and Strategy 3 continued with EF/IF.
Fig. 1

Decision tree of the ADEPT trial. aSites classified as non-responsive to the implementation strategy after the initial 6 months of the Trial Phase: either < 10 patients receiving Life Goals or > 50% of patients receiving Life Goals had ≤ 3 sessions, the minimum dose for clinically significant results. Sites that responded to the implementation strategy discontinued the strategy during the second 6 months (Phase III) of the trial


Methods

This study used a simulation modeling approach with data from a previously conducted clinical trial [19, 31]. Our results are reported using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) guidelines [32]. Implementation strategies included in the model reflect strategies that could be constructed using data from the trial. In this study, we focus on the ADEPT community-based mental health or primary care clinics that were non-responsive after 6 months of Replicating Effective Programs (REP) and would receive additional implementation support (i.e., facilitation) to enhance uptake of Life Goals. Non-response to REP was defined as 10 or fewer patients receiving Life Goals, or fewer than 50% of patients receiving a clinically significant dose of Life Goals (at least three of the six sessions), after 6 months [33-35]. Eligible sites had at least 100 unique patients diagnosed with depression and could designate at least 1 mental health provider to administer individual or group collaborative care sessions for patients. The study was approved by local institutional review boards (IRBs) and registered under clinicaltrials.gov (identifier: NCT02151331).
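As a concrete reading of this response rule, the sketch below classifies a site from its patients' session counts. The function name, inputs, and defaults are illustrative assumptions, not code from the study.

```python
# Hypothetical sketch of the site-level non-response rule described above:
# a site is non-responsive if 10 or fewer patients received Life Goals,
# or if fewer than 50% of those patients received a clinically
# significant dose (at least 3 of the 6 sessions).

def non_responsive(sessions_per_patient, min_patients=10, min_sessions=3):
    n = len(sessions_per_patient)
    if n <= min_patients:          # 10 or fewer patients reached
        return True
    dosed = sum(1 for s in sessions_per_patient if s >= min_sessions)
    return dosed / n < 0.5         # under half got a significant dose

# A site with 12 patients, 8 of whom completed 3+ sessions, responded:
print(non_responsive([6, 6, 5, 4, 4, 3, 3, 3, 2, 1, 0, 0]))  # False
```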

Modeling approach

Using data from the ADEPT trial, we designed a cost-effectiveness study to evaluate four strategies that could be implemented to support the uptake and clinical effectiveness of Life Goals. These strategies do not exactly match the arms in the clinical trial because our goal was to evaluate the optimal implementation strategy among non-responders. We developed a decision tree to assess 1-year costs and outcomes for the different implementation strategies following 6 months of REP (baseline) among non-responsive sites (i.e., slow-adopter sites). Implementation strategies included in the model (see Fig. 1) were as follows: Strategy 0: REP only, Strategy 1: REP+EF, Strategy 2: REP+EF, add IF if needed, and Strategy 3: REP+EF/IF. The probability of response to the implementation strategies in the model was based on observed response rates in the study, which remained consistent across each phase at approximately 0.09 (that is, about 9% of sites were responders). Sites that responded to their assigned implementation strategy after 6 months of the trial (Phase II) discontinued the strategy. Sites that did not respond after 6 months proceeded to Phase III as follows: Strategy 1 continued REP+EF, Strategy 2 added IF, and Strategy 3 continued REP+EF/IF. The analysis uses a 1-year time horizon and assumes a health sector perspective. Parameter inputs were derived from primary ADEPT data.
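To make the tree concrete, the sketch below rolls back the REP+EF branch using the Table 1 inputs. The tree structure is our simplified reading of Fig. 1 (responders discontinue facilitation after Phase II; each 6-month phase contributes half a year of utility), not the authors' exact model.

```python
# Sketch of rolling back one branch of the ADEPT decision tree (REP+EF).
# Numbers come from Table 1; the structure is a simplified reading of Fig. 1.

P_RESPOND = 0.095          # probability a site responds after Phase II (REP+EF)

# Per-patient costs (US 2018 dollars)
COST_REP   = 588.95        # Phase I: REP for all sites
COST_EF_P2 = 32.70         # Phase II: added cost of EF
COST_EF_P3 = 17.55         # Phase III: EF continues at non-responding sites only

# Utilities (EQ-5D mapped from SF-12); each phase spans 6 months = 0.5 years
U_EF_P2      = 0.497
U_EF_P3_RESP = 0.721       # responders (facilitation discontinued)
U_EF_P3_NONR = 0.446       # non-responders (EF continues)

def expected_cost_and_qalys():
    # Responders stop facilitation after Phase II, so the Phase III EF cost
    # is incurred only on the non-response branch.
    cost = COST_REP + COST_EF_P2 + (1 - P_RESPOND) * COST_EF_P3
    # QALYs over the 1-year horizon: 0.5 years at the Phase II utility,
    # then 0.5 years at the response-weighted Phase III utility.
    u_p3 = P_RESPOND * U_EF_P3_RESP + (1 - P_RESPOND) * U_EF_P3_NONR
    qalys = 0.5 * U_EF_P2 + 0.5 * u_p3
    return cost, qalys

cost, qalys = expected_cost_and_qalys()
print(f"REP+EF: expected cost ${cost:.2f}, expected QALYs {qalys:.3f}")
```

With these inputs the branch evaluates to roughly $637.53 and 0.48 QALYs per patient, in line with the REP+EF row of Table 2.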

Costs

Implementation strategy costs for baseline REP were the same for all participants and include costs for training program providers, training compensation (e.g., pay during non-work hours), time costs for assessing organizational needs, and pre-implementation meetings. Non-labor costs included the curriculum (manual and materials) and travel [24, 36]. Facilitation costs were based on facilitation logs: the study EF and site IFs logged their tasks, categorizing the mode, personnel interaction, duration, and primary focus of each task. These tasks included coaching, developing an implementation plan, education, linking to outside resources, and consultation. We calculated labor costs as time spent multiplied by hourly wage plus fringe rates for facilitators. As there was one EF employed by the study team, we used that EF's hourly wage plus fringe. For the IFs, training and background (and thus costs) varied; we based the IF salary and fringe rates on current rates for Licensed Master of Social Work (LMSW) professionals using Bureau of Labor Statistics data, as many of the IFs were LMSWs. Because we anticipated differences in uptake (i.e., the number of patients receiving Life Goals) by condition, we calculated the total site-level cost per strategy (the level of randomization) and divided by the number of patients in that implementation strategy condition. The number of patients per condition was obtained from site-level records. Costs were collected in 2014 and adjusted to 2018 US dollars using the Consumer Price Index [37]. A summary of cost parameters is provided in Table 1. We report summary statistics for implementation costs with 95% confidence intervals. We estimated the costs of REP using the available cost data to obtain a comprehensive assessment of total implementation strategy costs, plus the costs of facilitation activities in each condition (EF and EF/IF).
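The cost build-up described above (time logged times wage plus fringe, site-level totals divided by patients, CPI adjustment) can be sketched as follows. The hours, wage, fringe rate, CPI values, and patient count are placeholder assumptions, not figures from the study.

```python
# Illustrative cost build-up for a facilitation condition. The wage, fringe
# rate, hours, patient count, and CPI values below are assumed placeholders.

FRINGE_RATE = 0.30          # fringe benefits as a fraction of wages (assumed)
HOURLY_WAGE = 25.00         # e.g., a BLS rate for an LMSW professional (assumed)

def facilitation_cost(hours_logged):
    """Labor cost = time logged x (hourly wage + fringe)."""
    return hours_logged * HOURLY_WAGE * (1 + FRINGE_RATE)

def per_patient_cost(total_site_cost, n_patients):
    """Site-level strategy cost divided by patients in that condition."""
    return total_site_cost / n_patients

def inflate(cost_2014, cpi_2014=236.736, cpi_2018=251.107):
    """Adjust 2014 dollars to 2018 dollars by the CPI ratio
    (approximate CPI-U annual averages, assumed here)."""
    return cost_2014 * (cpi_2018 / cpi_2014)

site_cost = facilitation_cost(hours_logged=40)   # 40 h at the assumed rates
print(per_patient_cost(inflate(site_cost), n_patients=25))
```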
Table 1

Model inputs

Parameter | Base | Low | High | Distribution^a | Source
Costs
 Cost of REP (Phase I)^b | 588.95 | 0 | 558.95 | Normal | Time and resource tracking, study staff
 Additional cost of EF (Phase II) | 32.70 | 32.39 | 33.01 | Normal | Time logs
 Additional cost of EF (Phase III) | 17.55 | 1.22 | 30.84 | Normal | Time logs
 Additional cost of EF and IF (Phase II) | 31.23 | 28.51 | 30.49 | Normal | Time logs
 Additional cost of EF and IF (Phase III) | 6.35 | 3.15 | 9.27 | Normal | Time logs
Probabilities
 Probability of response after Phase II with REP+EF | 0.095 | 0 | 0.095 | | Site response data
 Probability of response after Phase II with REP+EF/IF | 0.091 | 0 | 0.091 | | Site response data
Utilities^c,d
 REP only | 0.475 | 0.43 | 0.521 | Beta | Patient survey
 EF (Phase II) | 0.497 | 0.42 | 0.573 | Beta | Patient survey
 EF non-responding site (Phase III) | 0.446 | 0.306 | 0.586 | Beta | Patient survey
 EF responding site (Phase III) | 0.721 | 0.533 | 0.909 | Beta | Patient survey
 EF and IF (Phase II) | 0.463 | 0.362 | 0.564 | Beta | Patient survey
 EF add IF (Phase III) | 0.568 | 0.392 | 0.566 | Beta | Patient survey
 EF and IF non-responding site (Phase III) | 0.479 | 0.392 | 0.566 | Beta | Patient survey
 EF and IF responding site (Phase III) | 0.372 | 0.184 | 0.559 | Beta | Patient survey

aDistributions are parameterized such that the mean is the base value and the standard deviation is ¼ of the difference between the low and high values

bPhases refer to values calculated within phases of the original trial: Phase I: baseline/initial 6-month period prior to the trial phase of the study with REP only, Phase II: second 6 months of the study, Phase III: final 6 months of the trial

cInverse probability weighting to account for missing data with weights estimated from a logistic regression model for predicting non-response

dEQ-5D calculated using mapping algorithm from components of the SF-12


Health outcomes

Quality-adjusted life years (QALYs)

To develop a preference-based health utility measure for the current study, we mapped the SF-12 (which was collected as part of the patient-level evaluation in the ADEPT trial) to the EQ-5D, a multi-attribute utility instrument, using an established algorithm developed by Franks and colleagues [38]. The EQ-5D yields interval-level scores ranging from 0 (dead) to 1 (perfect health). This mapping provides a health utility measure for each health state experienced by patients in the study and can be used to calculate quality-adjusted life years, the preferred measure for health benefits used in cost-effectiveness analysis.
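Given the mapped utilities, QALYs over the trial's two 6-month phases reduce to a utility-weighted sum of time. A minimal sketch:

```python
# A QALY weights time spent in a health state by its utility. With the
# 1-year horizon split into two 6-month phases, a patient's QALYs are
# the utility-weighted sum of the two half-years. Utilities here are
# the mapped EQ-5D scores (0 = dead, 1 = perfect health).

def qalys(utilities_by_phase, phase_length_years=0.5):
    return sum(u * phase_length_years for u in utilities_by_phase)

# e.g., a patient at utility 0.497 in Phase II and 0.721 in Phase III:
print(qalys([0.497, 0.721]))  # approximately 0.609
```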

Data analytic approach

We used a decision-tree model to compare cost-effectiveness across different scenarios for combining REP+facilitation for the Life Goals EBP (see Fig. 1 and Fig. 4 in the Appendix). The time horizon for this analysis was 12 months, the duration of the trial phase of the study. In this analysis, we adopted a health system/payer perspective. This narrower perspective stands in contrast to the full societal perspective, which incorporates all relevant costs and benefits and is recommended for most economic evaluations [39]. While the narrower perspective can omit costs or benefits that matter from the broad societal standpoint, it has the practical value of explicitly addressing the budgetary concerns of payers and thus fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained [40]. Assumptions were made about the psychometric properties of the outcome measures, the effectiveness of the Life Goals intervention, and the reliability of time reporting by the facilitators. We tested these assumptions in the sensitivity analyses by varying the costs and outcomes related to each intervention condition at low and high values (95% confidence interval). To address missing data on our utility (outcome) measures, we employed an inverse probability weighting (IPW) approach [41]. We estimated per-patient costs and QALYs for each implementation strategy sequence. We calculated the per-patient cost by dividing the total costs per condition by the number of patients in each condition. To compare strategies, we divided net incremental costs (e.g., the net increase in costs from REP+EF/IF versus REP+EF) by incremental effectiveness (e.g., the net increase in QALYs for REP+EF/IF versus REP+EF) to calculate the incremental cost-effectiveness ratio (ICER) for patient-level outcomes across conditions.
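A minimal sketch of this incremental analysis, using the per-patient costs from Table 2 and QALY values to four decimals implied by the reported NMBs at a $100,000/QALY threshold (the published table rounds effectiveness to two decimals):

```python
# Sketch of the incremental analysis behind Table 2. Costs are per patient;
# QALYs are four-decimal values implied by the reported NMBs (an assumption
# made here so that the ICER arithmetic is visible).

strategies = {
    "REP only":        (588.95, 0.4750),
    "REP+EF only":     (637.53, 0.4846),
    "REP+EF, add IF":  (627.40, 0.5398),
    "REP+EF/IF":       (625.95, 0.4661),
}

WTP = 100_000  # willingness-to-pay threshold ($/QALY)

def dominated(name):
    """Dominated: some comparator is no more costly and at least as
    effective, and strictly better on at least one dimension."""
    c, q = strategies[name]
    return any((c2 < c and q2 >= q) or (c2 <= c and q2 > q)
               for other, (c2, q2) in strategies.items() if other != name)

def icer(a, b):
    """Incremental cost-effectiveness ratio of strategy a versus comparator b."""
    (ca, qa), (cb, qb) = strategies[a], strategies[b]
    return (ca - cb) / (qa - qb)

def nmb(name):
    """Net monetary benefit: QALYs valued at WTP, minus cost."""
    c, q = strategies[name]
    return q * WTP - c

print(dominated("REP+EF only"), dominated("REP+EF/IF"))  # True True
print(round(icer("REP+EF, add IF", "REP only")))         # 593
print(round(nmb("REP+EF, add IF"), 2))
```

Consistent with Table 2, REP+EF only and REP+EF/IF come out dominated, and the ICER of REP+EF, add IF versus REP only is approximately $593/QALY.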
We conducted a one-way sensitivity analysis on all input parameters listed in Table 1 to create a tornado diagram using net monetary benefit (NMB). We used NMB because it facilitates comparisons across multiple strategies, as in the current study, whereas incremental cost-effectiveness ratios (ICERs) are less suitable with more than two comparators [42]. The sensitivity analysis evaluated how costs and incremental cost-effectiveness are affected by variations in key parameters [30]. When available, we based upper/lower bound estimates on the 95% confidence intervals. We also conducted a probabilistic sensitivity analysis (PSA). PSA characterizes uncertainty in all parameters simultaneously, reflecting the likelihood that each model parameter takes on a specific value, and provides information on overall decision uncertainty based on parameter uncertainty [43]. We conducted 1000 model simulations to quantify the probability that each implementation strategy is cost-effective across a range of willingness-to-pay thresholds [44]. We conducted a scenario analysis to evaluate results for longer analytic time horizons, from 2 to 10 years. In this additional analysis, the effects of the intervention were assumed to remain constant over time, consistent with values estimated during the final phase of each trial condition.
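An illustrative PSA loop is sketched below: each iteration draws costs from normal distributions and utilities from beta distributions (converted from mean and sd by the method of moments, with sd = (high - low)/4 per Table 1's footnote) and tallies which strategy maximizes NMB. The two stand-in "arms" collapse each strategy to a single cost and utility, so the percentages printed are not the paper's 30/31/22% split.

```python
# Illustrative probabilistic sensitivity analysis with two stand-in arms,
# not the full ADEPT decision tree. Cost and utility summaries are loosely
# based on Tables 1-2 but simplified for the sketch.

import random

WTP = 100_000      # willingness-to-pay threshold ($/QALY)
N_ITER = 1000

def beta_params(mean, sd):
    """Method-of-moments conversion from (mean, sd) to Beta(a, b)."""
    common = mean * (1 - mean) / sd**2 - 1
    return mean * common, (1 - mean) * common

# Stand-in arms: (cost mean, cost sd, utility mean, utility sd),
# with sd = (high - low) / 4 mirroring Table 1's parameterization footnote.
arms = {
    "REP+EF":         (637.53, (33.01 - 32.39) / 4, 0.485, (0.573 - 0.420) / 4),
    "REP+EF, add IF": (627.40, (30.84 - 1.22) / 4, 0.540, (0.566 - 0.392) / 4),
}

random.seed(0)                      # reproducible draws
wins = {name: 0 for name in arms}
for _ in range(N_ITER):
    nmbs = {}
    for name, (c_mu, c_sd, u_mu, u_sd) in arms.items():
        cost = random.gauss(c_mu, c_sd)             # cost draw (normal)
        a, b = beta_params(u_mu, u_sd)
        utility = random.betavariate(a, b)          # utility draw (beta)
        nmbs[name] = utility * WTP - cost           # net monetary benefit
    wins[max(nmbs, key=nmbs.get)] += 1              # tally the optimal arm

for name, n in wins.items():
    print(f"{name}: optimal in {n / N_ITER:.0%} of iterations")
```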

Results

Results of base case analysis

Base case results are presented in Table 2, and a plot of cost-effectiveness across implementation strategies is depicted in Fig. 2. In the base case, REP only is the least expensive strategy. REP+EF, add IF has an ICER of $593/QALY. REP+EF yielded more QALYs than REP only but fewer than REP+EF, add IF, at a higher cost than REP+EF, add IF; REP+EF/IF had higher costs and lower QALYs than REP only. Both REP+EF and REP+EF/IF are therefore dominated.
Table 2

Base case analysis results

Condition | Cost per patient | Effectiveness | ICER^a | NMB^b
Utility: QALYs
 Baseline (REP only) | 588.95 | 0.470 | | 46911.05
 REP+EF only | 637.53 | 0.48 | Dominated^c | 47822.00
 REP+EF, add IF if needed | 627.40 | 0.54 | 593.42^d | 53351.18
 REP+EF/IF | 625.95 | 0.47 | Dominated^c | 45987.68

aICER: incremental cost-effectiveness ratio

bNMB: net monetary benefit. Willingness-to-pay threshold $100,000/QALY [45]

cDominated: more expensive and less effective than comparators [46]

dSince “REP+EF/IF” is dominated, “REP+EF, add IF if needed” is compared to REP only

Fig. 2

Cost-effectiveness plane, organization/payer perspective


Sensitivity analysis

To test our model assumptions, we conducted sensitivity analyses on all parameters (Appendix Table 3). In the tornado diagram (see Fig. 3), results were most sensitive to the following variables (in order): the utility of individuals in the REP+EF, add IF arm at Phase III; the utility of individuals in the REP+EF arm at Phase II; the utility of individuals in the REP+EF arm at Phase III for responders; and the utility of individuals in the REP+EF only arm at Phase III. We then conducted threshold analyses for each of the most sensitive parameters. We found that at utility values below 0.44 for REP+EF, add IF at Phase III, REP+EF, add IF is no longer cost-effective and REP+EF becomes the most cost-effective choice. Likewise, at utility values above 0.57 for REP+EF at Phase III, REP+EF, add IF is no longer the most cost-effective option and REP+EF becomes the most cost-effective choice.
Fig. 3

Tornado diagram showing one-way sensitivity analyses for the base case with the most sensitive parameters. All parameters were evaluated and data are provided in the appendix. Thick vertical black lines on the ends of the bars indicate values at which the initial preferred option is no longer cost-effective

In addition to the deterministic sensitivity analyses, we conducted a probabilistic sensitivity analysis. The results indicate that the strategy with the best utility results would be preferred regardless of the willingness-to-pay threshold (except at a threshold of $0). REP+EF, add IF is the optimal strategy in about 30% of iterations, REP only in 31% of iterations, and REP+EF/IF in 22% of iterations. We also conducted sensitivity analyses to explore an extended time horizon, extending the utilities observed during the final 6-month period of each condition from the current 12-month horizon out to 10 years. If there are no additional benefits beyond 1 year, the cost-effectiveness ratio of REP+EF, add IF is $593.42/QALY. If benefits continued to 2 years, the ICER was $223.06/QALY; at 3 years, $137.34/QALY; and at 10 years, patients gain 1.14 QALYs and the cost-effectiveness ratio is $33.71/QALY. Full results are provided in Appendix Table 4. REP+EF, add IF remained the most cost-effective option with an extended time horizon.

Discussion

Effective implementation of EBPs for mental health treatment in communities is critical to improving the public’s health. Most individuals suffering from depression and other mental health conditions are not receiving evidence-based practices (EBPs) such as Life Goals in community settings, resulting in poor and costly health outcomes and millions of research dollars wasted when EBPs fail to reach those most in need [10-12]. Implementation strategies are key to improving uptake of EBPs in communities and achieving the public health objectives of evidence-based treatments such as Life Goals. Implementation strategies, however, vary in intensity and cost. More research is needed on applying these strategies with consideration of their economic impact, given that community clinics often have limited, and carefully allocated, resources to promote EBP uptake [47]. This research is vital to bridging the research-to-practice gap, but economic evaluation of implementation strategies remains understudied [47]. This study is one of the first to investigate the cost-effectiveness of implementation strategies as part of an adaptive trial. Adaptive trials are an effective way to accelerate research-to-practice translation by simultaneously evaluating multiple strategies, and combinations of strategies, based on clinics’ needs. We found that, overall, REP+facilitation in its various permutations is a relatively low-cost implementation strategy. Identifying the costs and potential utilities of each REP+facilitation combination can help decision makers allocate resources for implementation. Understanding the resources needed to achieve desired behavioral outcomes (e.g., improved mental health) is essential to implementing and sustaining EBPs [23]. We also found that REP+EF, add IF may be the most cost-effective implementation strategy, although these results remain uncertain given the highly variable quality-of-life assessments among participants.
Although researchers have debated whether a step-up or step-down approach to evidence-based clinical treatment is most effective, the optimal approach for implementation strategies to enhance the uptake of these treatments is also unclear. Our results are consistent with other clinical research suggesting that a step-up strategy is more cost-effective [48]. This information will support organizations in making informed decisions by providing evidence that supports (or refutes) investment in specific implementation strategies as an efficient use of resources and can thus help prioritize implementation efforts. We also found that stepping up non-responsive sites immediately to REP+EF/IF, the most intensive and costly strategy (at the site level), was not cost-effective. This may be for several reasons. First, EF alone may be sufficient for community clinics to effectively implement the Life Goals intervention, and thus in most cases IF may not be necessary [31]. Second, many sites had difficulty identifying an internal facilitator. Subsequent analyses of the time data indicate that the mean time to identify an IF was 69 days, suggesting that many sites assigned to the IF condition did not have one for the first 2 months of the evaluation period. These results also indicate that community clinics may have limited capacity to identify and effectively utilize an IF. Finally, we may have had more favorable results in the REP+EF, add IF condition during Phase II because the EF was able to work with the clinic on its barriers to uptake immediately, and may have been working with several staff members rather than a single one. Our results were highly dependent on the assessment of utility: utilities were variable and uncertain across the different intervention arms, which strongly influenced the overall assessment of cost-effectiveness.
Further research is needed that incorporates robust and relevant utilities in implementation research to identify the most cost-effective strategies. Although the trial evaluated patients for only 1 year, our conclusions did not change when we simulated a longer time horizon of benefits: extending benefits from the current trial horizon (12 months) to 10 years improved the cost-effectiveness of REP+EF, add IF to $33.71/QALY. As the clinical benefit of engaging in evidence-based practices for mental health treatment may extend beyond the time horizon of the trial itself, studies that only observe outcomes over a short time horizon may report artificially high cost-effectiveness ratios [49]; we found that extending the time horizon does reduce the ratio.

Limitations

We adopted a health payer perspective, which may not account for costs relevant to a societal perspective, such as indirect costs of patient time, hospitalization, other treatments, or lost productivity. This narrower perspective, however, has the practical value of explicitly addressing payers' budgetary concerns and fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained [40].

We did not have additional information on REP cost estimates with which to vary parameters, and these estimates relied primarily on research team recall. Costs not included in the estimates may have implications for the CEA results, and additional information for varying specific parameters could help identify those most influential on our estimates. In our CEA, however, REP costs were incorporated into total costs for all groups, so this is unlikely to influence the results across the REP+facilitation permutations.

We did not have a direct measure of QALYs, so our utility estimates may be especially susceptible to measurement error. Considerable research has been done, however, on mapping the SF-12 components to the EQ-5D, reducing the likelihood of error introduced by the mapping process. We also had a notable amount of missing patient-level survey data, increasing the likelihood of biased estimates; we attempted to reduce this bias using inverse probability weighting. The current study would benefit from a mixed methods approach, specifically a sequential design that collects qualitative data after the quantitative data, to better understand challenges in utilizing EF/IF. Finally, our study has a limited time horizon.
Sensitivity analyses using incremental QALYs gained based on the survey results showed that REP+EF, add IF remained highly cost-effective over a longer time horizon. A longer horizon within the RCT, however, would provide additional information on longer-term return on investment and more confidence about which adaptive implementation strategy is best.
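The inverse probability weighting mentioned above can be illustrated with a small sketch. The data here are hypothetical (a response indicator grouped by one baseline stratum); the trial's actual weighting model and covariates are not specified in this section:

```python
import numpy as np

# Illustrative inverse probability weighting (IPW) for survey nonresponse.
# Hypothetical data: whether each patient completed follow-up, by stratum.
stratum = np.array([0, 0, 0, 0, 1, 1, 1, 1])
responded = np.array([1, 1, 0, 0, 1, 1, 1, 0])

# Estimate the response probability within each stratum, then weight each
# respondent by its inverse, so respondents also stand in for similar
# nonrespondents in downstream estimates.
weights = np.zeros(len(responded))
for s in np.unique(stratum):
    in_stratum = stratum == s
    p_respond = responded[in_stratum].mean()
    weights[in_stratum & (responded == 1)] = 1.0 / p_respond

# Stratum 0 (response rate 0.5): responders weighted 2.0;
# stratum 1 (response rate 0.75): responders weighted ~1.33.
```

In practice the response probabilities would come from a model with patient-level covariates rather than a single stratum, but the weighting logic is the same.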

Conclusions

Our study has several practice implications. First, our results support a step-up strategy of implementation support for sites that are slow to implement as a cost-effective approach to enhancing uptake and clinical outcomes. Second, they provide decision makers and community health clinic leadership with information on the costs and relative benefits of various implementation strategies for improving clinical outcomes. Third, they support the need for further cost-effectiveness research incorporating robust utility assessments in community health clinics, to provide evidence that will support or refute investments in specific strategies. Finally, they point to the need for mid-program evaluation of both effectiveness and cost-effectiveness to accurately determine the most efficient implementation strategy.
Table 3

Tornado diagram text report including results for all model input parameters

| Variable | Variable Low | Variable Base | Variable High | Impact | Low | High | Spread | Spread² | Risk % | Cum Risk % | Spread |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Utility REPEFaddIF 12mo | 0.442 | 0.568 | 0.694 | Increase | 47821.99524 | 59051.17619 | 11229.18095 | 126094504.9 | 0.666061436 | 0.666061436 | 11229.18095 |
| Utility REPEFonly 6mo | 0.42 | 0.497 | 0.573 | Increase | 49501.17619 | 57151.17619 | 7650 | 58522500 | 0.30912989 | 0.975191325 | 7650 |
| Utility REPEF responder 12mo | 0.533 | 0.721 | 0.909 | Increase | 52455.9381 | 54246.41429 | 1790.47619 | 3205804.989 | 0.016933831 | 0.992125157 | 1790.47619 |
| Utility REPEFonly 12mo | 0.306 | 0.446 | 0.586 | Increase | 53351.17619 | 54155.32857 | 804.152381 | 646661.0518 | 0.003415819 | 0.995540976 | 804.152381 |
| Probability respond REPEF | 0 | 0.095238095 | 0.095238095 | Increase | 52622 | 53351.17619 | 729.1761905 | 531697.9168 | 0.002808556 | 0.998349532 | 729.1761905 |
| Cost baseline rep | 0 | 588.95 | 558.95 | Decrease | 53381.17619 | 53940.12619 | 558.95 | 312425.1025 | 0.001650304 | 0.999999836 | 558.95 |
| Cost REPEFIF 12mo | 3.15 | 6.35 | 9.27 | Decrease | 53348.53429 | 53354.07143 | 5.537142857 | 30.65995102 | 1.61953E−07 | 0.999999998 | 5.537142857 |
| Cost REPEF 6mo | 32.39 | 32.7 | 33.01 | Decrease | 53350.86619 | 53351.48619 | 0.62 | 0.3844 | 2.03049E−09 | 1 | 0.62 |
| Probability respond REPEFIF | 0 | 0.090909091 | 0.090909091 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Utility REPEFIF 6mo | 0.362 | 0.463 | 0.564 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Cost REPEF 12mo | 1.22 | 17.55 | 30.84 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Utility REPEFIF 12mo | 0.392 | 0.479 | 0.566 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Utility REPbaseline | 0.43 | 0.475 | 0.521 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Utility REPEFIF responder 12mo | 0.184 | 0.372 | 0.559 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |
| Cost REPEFIF 6mo | 28.51 | 31.23 | 32.22 | Increase | 53351.17619 | 53351.17619 | 0 | 0 | 0 | 1 | 0 |

Spread² spread value, squared

Table 4

One-way sensitivity analysis results for extended time horizon

| Year | Strategy | Strategy index | Cost | Incr cost^a | Eff^b | Incr eff^c | ICER^d | NMB^e | C/E^f | Dominance |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Baseline (REP only) | 2 | 588.95 | 0 | 0.475 | 0 | 0 | 46911.05 | 1239.894737 | |
| 0 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 0.466136364 | −0.008863636 | −4174.666667 | 45987.68364 | 1342.853242 | (Dominated) |
| 0 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 0.539785714 | 0.064785714 | 593.4215362 | 53351.17619 | 1162.304265 | |
| 0 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 0.484595238 | −0.055190476 | −183.6065574 | 47821.99524 | 1315.589839 | (Dominated) |
| 1 | Baseline (REP only) | 2 | 588.95 | 0 | 0.95 | 0 | 0 | 94411.05 | 619.9473684 | |
| 1 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 0.935409091 | −0.014590909 | −2536.012461 | 92914.95636 | 669.175373 | (Dominated) |
| 1 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 1.122357143 | 0.172357143 | 223.0556707 | 111608.3195 | 558.9978574 | |
| 1 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 0.956785714 | −0.165571429 | −61.20218579 | 95041.04286 | 666.3232549 | (Dominated) |
| 2 | Baseline (REP only) | 2 | 588.95 | 0 | 1.425 | 0 | 0 | 141911.05 | 413.2982456 | |
| 2 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 1.404681818 | −0.020318182 | −1821.163311 | 139842.2291 | 445.618872 | (Dominated) |
| 2 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 1.704928571 | 0.279928571 | 137.3394573 | 169865.4619 | 367.9891631 | |
| 2 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 1.42897619 | −0.275952381 | −36.72131148 | 142260.0905 | 446.1435926 | (Dominated) |
| 3 | Baseline (REP only) | 2 | 588.95 | 0 | 1.9 | 0 | 0 | 189411.05 | 309.9736842 | |
| 3 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 1.873954545 | −0.026045455 | −1420.698081 | 186769.5018 | 334.0277003 | (Dominated) |
| 3 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 2.2875 | 0.3875 | 99.21351767 | 228122.6048 | 274.2711423 | |
| 3 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 1.901166667 | −0.386333333 | −26.2295082 | 189479.1381 | 335.3354457 | (Dominated) |
| 4 | Baseline (REP only) | 2 | 588.95 | 0 | 2.375 | 0 | 0 | 236911.05 | 247.9789474 | |
| 4 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 2.343227273 | −0.031772727 | −1164.606581 | 233696.7745 | 267.1327423 | (Dominated) |
| 4 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 2.870071429 | 0.495071429 | 77.6559419 | 286379.7476 | 218.5991721 | |
| 4 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 2.373357143 | −0.496714286 | −20.4007286 | 236698.1857 | 268.6188943 | (Dominated) |
| 5 | Baseline (REP only) | 2 | 588.95 | 0 | 2.85 | 0 | 0 | 284411.05 | 206.6491228 | |
| 5 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 2.8125 | −0.0375 | −986.7393939 | 280624.0473 | 222.5609697 | (Dominated) |
| 5 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 3.452642857 | 0.602642857 | 63.79439769 | 344636.8905 | 181.7144906 | |
| 5 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 2.845547619 | −0.607095238 | −16.69150522 | 283917.2333 | 224.0442462 | (Dominated) |
| 6 | Baseline (REP only) | 2 | 588.95 | 0 | 3.325 | 0 | 0 | 331911.05 | 177.1278195 | |
| 6 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 3.281772727 | −0.043227273 | −856.0042061 | 327551.32 | 190.7361598 | (Dominated) |
| 6 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 4.035214286 | 0.710214286 | 54.13188508 | 402894.0333 | 155.48003 | |
| 6 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 3.317738095 | −0.71747619 | −14.12358134 | 331136.281 | 192.1575945 | (Dominated) |
| 7 | Baseline (REP only) | 2 | 588.95 | 0 | 3.8 | 0 | 0 | 379411.05 | 154.9868421 | |
| 7 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 3.751045455 | −0.048954545 | −755.8588672 | 374478.5927 | 166.8742048 | (Dominated) |
| 7 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 4.617785714 | 0.817785714 | 47.01138382 | 461151.1762 | 135.8649528 | |
| 7 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 3.789928571 | −0.827857143 | −12.24043716 | 378355.3286 | 168.2165137 | (Dominated) |
| 8 | Baseline (REP only) | 2 | 588.95 | 0 | 4.275 | 0 | 0 | 426911.05 | 137.7660819 | |
| 8 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 4.220318182 | −0.054681818 | −676.6916043 | 421405.8655 | 148.3188471 | (Dominated) |
| 8 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 5.200357143 | 0.925357143 | 41.54637849 | 519408.319 | 120.6446444 | |
| 8 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 4.262119048 | −0.938238095 | −10.80038573 | 425574.3762 | 149.5801887 | (Dominated) |
| 9 | Baseline (REP only) | 2 | 588.95 | 0 | 4.75 | 0 | 0 | 474411.05 | 123.9894737 | |
| 9 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 4.689590909 | −0.060409091 | −612.5357412 | 468333.1382 | 133.477043 | (Dominated) |
| 9 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 5.782928571 | 1.032928571 | 37.21964825 | 577665.4619 | 108.4909195 | |
| 9 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 4.734309524 | −1.048619048 | −9.66350302 | 472793.4238 | 134.6613626 | (Dominated) |
| 10 | Baseline (REP only) | 2 | 588.95 | 0 | 5.225 | 0 | 0 | 521911.05 | 112.7177033 | |
| 10 | REP+EF/IF | 1 | 625.9527273 | 37.00272727 | 5.158863636 | −0.066136364 | −559.4914089 | 515260.4109 | 121.3353892 | (Dominated) |
| 10 | REP+EF, add IF if needed | 0 | 627.3952381 | 38.4452381 | 6.3655 | 1.1405 | 33.70910837 | 635922.6048 | 98.56181574 | |
| 10 | REP+EF only | 3 | 637.5285714 | 10.13333333 | 5.2065 | −1.159 | −8.743169399 | 520012.4714 | 122.4485876 | (Dominated) |

a Incremental cost
b Effectiveness
c Incremental effectiveness
d Incremental cost-effectiveness ratio
e Net monetary benefit
f Cost-effectiveness
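The NMB and dominance columns of Table 4 follow directly from each strategy's cost and effectiveness at the willingness-to-pay threshold. A minimal sketch using the year-0 values and the $100,000/QALY threshold from the probabilistic sensitivity analysis:

```python
# Sketch reproducing the year-0 net monetary benefit (NMB) and dominance
# flags in Table 4 at a $100,000/QALY willingness-to-pay threshold.
WTP = 100_000  # $ per QALY

strategies = {
    "Baseline (REP only)":      {"cost": 588.95,      "qalys": 0.475},
    "REP+EF/IF":                {"cost": 625.9527273, "qalys": 0.466136364},
    "REP+EF, add IF if needed": {"cost": 627.3952381, "qalys": 0.539785714},
    "REP+EF only":              {"cost": 637.5285714, "qalys": 0.484595238},
}

for s in strategies.values():
    # Net monetary benefit: dollar value of QALYs at the threshold, minus cost.
    s["nmb"] = WTP * s["qalys"] - s["cost"]

for s in strategies.values():
    # Strong dominance: some alternative is both cheaper and more effective.
    s["dominated"] = any(
        o["cost"] < s["cost"] and o["qalys"] > s["qalys"]
        for o in strategies.values()
    )
```

Running this flags REP+EF/IF and REP+EF only as dominated and reproduces the NMB column (e.g., 53351.18 for REP+EF, add IF), matching the table.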

