
Cost-Effectiveness Analysis in Implementation Science: a Research Agenda and Call for Wider Application.

Emanuel Krebs1, Bohdan Nosyk2.   

Abstract

PURPOSE OF REVIEW: Cost-effectiveness analysis (CEA) can help identify the trade-offs decision makers face when confronted with alternative courses of action for the implementation of public health strategies. Application of CEA alongside implementation scientific studies remains limited. We aimed to identify areas for future development in order to enhance the uptake and impact of model-based CEA in implementation scientific research.

RECENT FINDINGS: Important questions remain about how to broadly implement evidence-based public health interventions in routine practice. Establishing population-level implementation strategy components and distinct implementation phases, including planning for implementation, the time required to scale up programs, and sustainment efforts required to maintain them, can help determine the data needed to quantify each of these elements. Model-based CEA can use these data to determine the added value associated with each of these elements across systems, settings, population subgroups, and levels of implementation to provide tailored guidance for evidence-based public health action. There is a need to integrate implementation science explicitly into CEA to adequately capture diverse real-world delivery contexts and make detailed, informed recommendations on the aspects of the implementation process that provide good value. We describe examples of how model-based CEA can integrate implementation scientific concepts and evidence to help tailor evaluations to local context. We also propose six distinct domains for methodological advancement in order to enhance the uptake and impact of model-based cost-effectiveness analysis in implementation scientific research.


Keywords:  Cost-effectiveness analysis; Effectiveness research; Health economic evaluation; Implementation strategies; Public health; Simulation modeling


Year:  2021        PMID: 33743138      PMCID: PMC7980756          DOI: 10.1007/s11904-021-00550-5

Source DB:  PubMed          Journal:  Curr HIV/AIDS Rep        ISSN: 1548-3568            Impact factor:   5.495


Introduction

Now more than ever we need to carefully evaluate the implementation of public health programs, to ensure not only their effectiveness but also the effectiveness of the program’s implementation and ultimately its value, relative to other competing interests. Economic evaluation, and cost-effectiveness analysis (CEA) specifically, can help identify the trade-offs, or opportunity costs, decision makers face when confronted with alternative courses of action for the implementation of public health strategies [1]. By determining the value provided by successful program implementation, CEA can not only help inform resource allocation decisions but also provide evidence for determining optimal scalability and sustainability of implementation strategies addressing the health needs of diverse populations [2]. Furthermore, CEA can help determine the maximum returns on investment that can be obtained from improving implementation activities [3-6]; however, realizing the full public health benefit of systematically incorporating evidence-based interventions (EBIs) into routine practice requires generalizable evidence of how these have been broadly implemented and sustained in practice [3, 7, 8]. For instance, HIV testing provides outstanding value and can even be cost-saving in the long term in high-prevalence populations and settings [9]. Nonetheless, real-world evidence on the scale of implementation of HIV-testing programs is sparse [9]. Levels of implementation documented in the public domain fall short of what would be required, in combination with a wide range of EBIs to treat and protect against infection, to reach the targets of the US ‘Ending the HIV Epidemic’ initiative [10]. Similarly, medication for opioid-use disorder can also provide exceptional value in addition to substantial health benefits [11]. 
However, there remains a pressing need for data to guide evidence-based decision-making in addressing the North American opioid crisis, particularly for assessing the impact of public health strategies over different timescales, for different populations or settings, and in combination with multiple interventions [12]. Of course, the global COVID-19 pandemic poses the most significant and immediate challenge in public health implementation. A sound understanding of what may limit the efficiency of large-scale implementation of testing and contact tracing programs across different contexts could have helped guide the effective delivery of COVID-19 vaccines [13]. While CEA has a 40-year history in population health research [14], its application alongside implementation scientific studies has been limited [1, 15, 16]. A recent systematic review of economic evaluations in the implementation science literature revealed only 14 articles focused on implementation [17], with only four CEAs using quality-adjusted life years (QALYs) [18]. While the review found improvements in the quality of CEAs over time [17, 19], best-practice recommendations, such as reporting the perspective from which the analysis is conducted, investigating parameter uncertainty, or reporting cost calculations, were not adhered to in most studies [14, 20–22]. Likewise, another recent systematic review found a dearth of economic evaluations of interventions designed to improve the implementation of public health initiatives, also noting the mixed quality of the evidence from these studies [23]. The use of model-based economic evaluation has been suggested to help make CEA the standard practice in implementation science [1]. Using mathematical relationships, models such as dynamic transmission models, agent-based models, state-transition cohort models, or individual-based microsimulation models can provide a flexible framework to compare alternative courses of action. 
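To make the trade-offs such models quantify concrete, CEA typically summarizes alternatives with an incremental cost-effectiveness ratio (ICER): the additional cost per additional QALY gained relative to the next-best option. The sketch below is a minimal, hypothetical illustration; the strategy names, costs, and QALY totals are invented for this example and are not drawn from the article:

```python
# Minimal, hypothetical ICER calculation: compare strategies by incremental
# cost per quality-adjusted life year (QALY) gained versus the next-best option.
# All numbers are illustrative placeholders, not estimates from the article.

strategies = [  # (name, total cost, total QALYs), sorted by increasing cost
    ("status quo", 1_000_000, 500.0),
    ("expanded HIV testing", 1_400_000, 520.0),
    ("testing + linkage support", 2_000_000, 545.0),
]

def icers(options):
    """Return (name, ICER vs the previous option) pairs, assuming the
    options are sorted by cost and none is dominated."""
    out = []
    base_cost, base_qalys = options[0][1], options[0][2]
    for name, cost, qalys in options[1:]:
        icer = (cost - base_cost) / (qalys - base_qalys)
        out.append((name, icer))
        base_cost, base_qalys = cost, qalys
    return out

for name, icer in icers(strategies):
    print(f"{name}: ${icer:,.0f} per QALY gained")
```

A decision maker would then compare each ICER against a willingness-to-pay threshold; a full analysis would also screen out dominated and extendedly dominated strategies before computing ratios.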
As the COVID-19 pandemic has revealed, simulation modeling is playing a greater role than ever in public-health decision-making [24-26]. The context in which healthcare services are delivered has been shown to influence the cost-effectiveness of interventions [9, 27]. Simulation models that capture the heterogeneity across settings are uniquely positioned to offer guidance on contextually efficient strategies to implement [12, 28, 29]. Fundamentally, public-health programs aim to expand the implementation of evidence-based practices by increasing their reach, adoption, and sustainment to achieve better health outcomes. We outline areas for future development in CEA that would most complement the field of implementation science, in the interest of providing decision makers with pragmatic and context-specific information on the value of implementation and sustainment strategies. Specifically, in order to enhance the uptake and impact of model-based CEA in implementation scientific research, we suggest the need for advancement in (1) determining the reach of EBIs at the population level and within subgroups; (2) reporting adoption of EBIs at a system level and across settings; (3) improving the documentation of pre-implementation planning activities; (4) generating evidence on scaling up EBIs to the population level; (5) measuring the sustainment of EBIs over time; and (6) generating precise and generalizable costs of each of the above components.

Implementation Science Theories, Study Designs, and Their Application in Cost-Effectiveness Analysis

Increasing the impact of a public health program depends not only on the effectiveness of its individual components but also on the extent and quality of its implementation. While implementation scientists have developed frameworks to characterize implementation theory and practice [30], recommendations on how these may be incorporated into CEA are scarce. Applying these frameworks in CEA can ensure we fully capture the design features of a given EBI and the relevant costs for each of the implementation components. This can provide a foundation from which to improve the evaluation of implementation strategies and population health. While this article is not intended as a complete review of all individual approaches available, we describe examples of how model-based CEA can integrate implementation science to help tailor evaluations to local context and focus decision makers on implementation strategies that may provide the greatest public health impact and value for money.

The RE-AIM Framework

The Reach Effectiveness Adoption Implementation Maintenance (RE-AIM) framework has been broadly applied to evaluate health policies and EBIs since it was initially published more than two decades ago [31, 32]. The framework’s domains are recognized as essential components in evaluating population-level effects [30]. We applied the RE-AIM framework in a recent model-based CEA evaluating combinations of EBIs in HIV/AIDS [33]. We used RE-AIM to define the scale of delivery, the periods over which the implementation of EBIs is scaled up and sustained, and the costs of pre-implementation, implementation, delivery, and sustainment of each intervention [9]. Table 1 describes the definitions and assumptions we used for explicitly integrating intervention and implementation components in simulation modeling.
Table 1

RE-AIM framework definitions and assumptions used for the implementation of EBIs in model-based CEA

Domain: Reach
Definition: Participation rate in the interventions
Assumptions for interventions: (i) Individuals must agree to participate; (ii) participation rates are specific to each intervention.
Assumptions for implementation: (i) Individuals must access services in the setting(s) in which the interventions are delivered; (ii) reach remains constant over the delivery period.

Domain: Effectiveness
Definition: Effect of the interventions
Assumptions for interventions: (i) The effect is equivalent in all population subgroups unless there is evidence to the contrary.
Assumptions for implementation: (i) The effectiveness of each intervention is specific to the setting(s) in which it is delivered.

Domain: Adoption
Definition: Delivery of the interventions
Assumptions for interventions: (i) Staff agree to deliver interventions as implemented.
Assumptions for implementation: (i) The adoption rate is specific to the setting(s) and population(s) in which the interventions are delivered; (ii) adoption remains constant over the delivery period.

Domain: Implementation
Definition: Consistent delivery of the interventions
Assumptions for interventions: (i) Delivery costs are a function of the scale of delivery of the interventions.
Assumptions for implementation: (i) The interventions are adapted to ensure fidelity; (ii) there is a period for scaling up the intervention; (iii) implementation costs are a function of the setting(s) and scale of delivery of the interventions.

Domain: Maintenance
Definition: Sustainment of the interventions
Assumptions for interventions: (i) The effects of the interventions remain constant over time.
Assumptions for implementation: (i) Costs for sustainment activities (e.g., retraining) are a function of the scale of delivery of the interventions; (ii) the duration of the sustainment period is assumed to be fixed for each intervention.

RE-AIM, Reach Effectiveness Adoption Implementation Maintenance; EBIs, evidence-based interventions; CEA, cost-effectiveness analysis

The scale of delivery refers to the extent to which an EBI ultimately reaches its intended population. In a prior application, we defined scale of delivery as the product of reach and adoption (Scale_ij = Reach_ijk × Adoption_ik) for each intervention i, intended population j, and delivery setting k. We used the best evidence available in the public domain for each healthcare setting in which an EBI would be delivered, using similar evidence specific to each population subgroup for which an EBI was intended. Reach was defined as the participation rate in a given intervention, conditional on both the probability an individual would access services in setting k and would accept the intervention. Adoption was defined as the proportion of a delivery setting actually implementing the intervention. Consequently, the population-level impact for each intervention was the product of the resulting scale of delivery and its effectiveness (Population-based Impact_i = Scale_ij × Effectiveness_i). We incorporated a scale-up period to account for the time required to reach the defined scale of delivery, followed by a sustainment period necessary for maintaining the scale of delivery. While we strived to use the best publicly available data to inform each of the RE-AIM components, evidence for population-level implementation of interventions, scale-up, adoption, and sustainment was very limited. Furthermore, as has been previously noted [6, 32], data on costs associated with implementation components other than those specific to the delivery of the intervention itself are mostly lacking. 
Consequently, our analysis required numerous assumptions for deriving costs, particularly for representative patient volumes across healthcare settings, physician practices, or HIV clinics. Nonetheless, it is encouraging that the Standards for Reporting Implementation Studies (StaRI) [34], a checklist for the transparent reporting of studies evaluating implementation strategies, distinguishes between costs specific to the interventions and to the implementation strategy, suggesting more evidence on costs specific to implementation may soon become available.
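The scale-of-delivery and population-level impact calculations described above can be sketched directly. The sketch below uses invented placeholder values, and it assumes that scale is aggregated by summing reach × adoption across delivery settings; that aggregation is an assumption for illustration, not necessarily the article's exact method:

```python
# Sketch of the RE-AIM-based calculation described in the text:
#   Scale_ij = Reach_ijk * Adoption_ik   (here summed over settings k)
#   Population-level Impact_i = Scale_ij * Effectiveness_i
# All numeric values are hypothetical placeholders, not published estimates.

def scale_of_delivery(reach, adoption):
    """Proportion of the intended population j receiving intervention i,
    aggregated (by assumption) as a sum over delivery settings k."""
    return sum(reach[k] * adoption[k] for k in reach)

# Intervention i: routine HIV testing; population j: one subgroup.
reach = {"primary_care": 0.60, "emergency_dept": 0.25}     # P(access k) x P(accept)
adoption = {"primary_care": 0.32, "emergency_dept": 0.50}  # share of settings delivering
effectiveness = 0.80                                       # per-person effect of the EBI

scale = scale_of_delivery(reach, adoption)
impact = scale * effectiveness
print(f"scale of delivery = {scale:.3f}, population-level impact = {impact:.4f}")
```

Sensitivity analyses would then vary reach and adoption independently, by setting and subgroup, to see which implementation component drives population-level impact.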

Hybrid Study Designs

Hybrid designs combine evaluation of both the implementation process and the intervention. The typology proposed by Curran et al. outlines considerations for choosing the appropriate hybrid design, with the Hybrid Type 2 design simultaneously evaluating the effectiveness of an intervention and its implementation [35]. As with the StaRI reporting guidelines, which differentiate cost data for the implementation strategy from those for the intervention, the typology's recommendations on costs are general and do not distinguish among the various elements of the implementation process (e.g., planning, scaling up, and sustainment). CEA conducted alongside Hybrid Type 2 designs provides an opportunity to also evaluate the components of implementation strategies most relevant and useful for decision makers and implementers [35, 36]. Mirroring the recommended conditions for use of a Hybrid Type 2 design [35], CEA-specific considerations may include (1) accounting for the costs of pre-implementation activities that ensure the face validity of the implementation strategies being proposed; (2) emphasizing the need for evidence on individual implementation components (e.g., varying levels of reach and adoption) and their costs, supporting applicability to new settings and populations; (3) accounting for the potential risk of displacement of other services, which may result in indirect costs not captured by an evaluation restricted to the intervention(s) of interest; (4) demonstrating the effectiveness (and value) of large-scale implementation to support momentum of adoption in routine care; (5) investigating hypothetical implementation dynamics to iteratively support the development of adaptive implementation strategies; and (6) conducting value-of-information analysis to estimate the additional value the study may produce by reducing uncertainty in decision-making. 
Simulation modeling and CEA can enhance the relevance of implementation science by measuring the effects of different, sometimes hypothetical, implementation components (e.g., reach, adoption, and sustainment—which are ultimately implementation outcomes themselves), while taking into consideration the impact of context and accounting for variation in costs, ultimately providing value-based recommendations to improve public health.

Setting an Agenda for Methodological Development

The evidence base supporting many public-health interventions is strong and diverse, but important questions remain about how to broadly implement them in routine practice to achieve greater population-level impact [1]. From a methodological standpoint, there is a need to integrate implementation science explicitly into CEA to adequately capture diverse real-world delivery contexts. CEA can produce detailed, informed recommendations on the aspects of the implementation process that provide good value. An effective use of model-based CEA for decision-making thus requires considering the feasibility and costs of individual implementation components, including planning for implementation, the time required to scale up programs, reach, adoption, and sustainment (Fig. 1). We identify areas of advancement needed for model-based CEA to further support public health action by estimating the potential impact of these implementation components, discussing the resulting implications for cost considerations and further methodological development (Table 2).
Fig. 1

Cost components, timing, and scale of implementation strategies

Table 2

Areas of advancement to enhance the impact of CEA in implementation scientific research

RE-AIM domain | Implementation-process data needs for CEA | Implementation-cost data needs for CEA | Examples of approaches for answering data needs
Intended populations (reach)

● Integrating surveillance and reporting systems to derive population-level reach within and across settings

● Emphasizing granular data to distinguish access to services by population subgroups

● Costs as a function of population reach specific to delivery settings and key population subgroups

● Functional form of the cost function capturing economies of scale (or decreasing returns as reach increases)

● Publicly available surveillance data to determine baseline service utilization levels and feasible population reach

● Reporting of costs for implementation strategies increasing reach across different systems, settings, population subgroups, and levels of implementation

Service delivery (adoption)

● Existing levels of implementation for EBIs, specific to delivery settings (e.g., formal healthcare sector, community based, schools) and by payer

● Impact of interventions improving system-level adoption

● Costs as a function of increasing adoption accounting for heterogeneity across settings and geographical location

● Functional form of the cost function to capture economies of scale and scope

● Estimate system capacity and public-health-department intended targets for adoption levels, see MISII for measurement framework example [76]

● Increased reporting (and development) of quantitative measures of determinants of adoption and other implementation phases [77]

Planning (effectiveness)

● Human resources needed for pre-implementation planning

● Real-world effectiveness of EBIs, overall and within key subgroups

● Costs of pre-implementation planning

● Direct and indirect costs attributable to different funding agencies

● Increased use of hybrid effectiveness-implementation (type 2) study designs [35] to determine the real-world scalability and generalizability of EBIs

● Systematic and harmonized reporting of human resource (e.g., FTEs), see Cidav et al. [52] and Saldana et al. [54] for pragmatic approaches

Scaling up (implementation)

● Timing of adoption across delivery settings

● Evidence on changing population characteristics at increasing scales of delivery

● Flexible cost functions accounting for scale and scope when implementation timing changes

● Budget impact of speeding up implementation to determine if feasible or affordable under current budget constraints

● Establish reporting guidelines and standardized instruments for quantitative population-level measures of different implementation phases [78]

● Estimate statistical models (e.g., multiple linear regression) to determine cost functions accounting for system and implementation components [70]

Sustaining impact (maintenance)

● Longitudinal measurement of scale of delivery

● Funding mechanisms for EBIs over time

● Impact of interventions improving population-level sustainment

● Costs for the maintenance of EBI implementation over period of sustainment

● Costs for retraining to maintain impact of EBIs

● Establish a standardized and explicit definition of maintenance/sustainment implementation phase for generalizability of reporting [62]

● Yearly reporting of reach and adoption levels

● Tracking of increasing or decreasing delivery costs

RE-AIM, Reach Effectiveness Adoption Implementation Maintenance; EBIs, evidence-based interventions; CEA, cost-effectiveness analysis; MISII, measure of innovation-specific implementation intentions; FTE, full-time equivalency


Scale of Implementation: Intended Populations and Service Delivery

In our prior work evaluating combinations of EBIs in HIV/AIDS, we derived the scale for individual interventions by determining their potential reach based on how population-level utilization of different healthcare settings (e.g., primary care, emergency departments) varied by geographic region, racial/ethnic group, and sex [9, 33]. For example, to determine the potential reach of an HIV testing intervention in primary care [9], we used regional US estimates of persons having seen a doctor in the past year (e.g., 59.3% of Hispanic/Latino men in the West, 93.9% of Black/African American women in the Northeast) [37]. CEAs can thus provide a realistic assessment of the potential value of implementing public-health programs by explicitly accounting for underlying structural barriers to healthcare access. As adaptation of EBIs can increase population reach within settings, helping to achieve more equitable service provision [38], model-based CEA can also allow for determining the additional value (and health benefits) that can result from efforts to reduce existing disparities [39]. Implementation scientists have argued for health equity to serve as a guiding principle for the discipline [40]. Simulation models can help us understand the extent and deployment of resources required to overcome these disparities [39, 40]. The recent extension of the RE-AIM framework now incorporates health-equity considerations for each of its domains [41]. Specifically, it suggests the importance of determining reach in terms of population characteristics and social determinants of health [41]. However, reporting of real-world implementation data such as these is very limited [9, 42]. 
The emerging ‘Distributional Cost-Effectiveness Analysis’ framework is designed to explicate the tradeoffs in strategies designed to reduce health inequities [43], as opposed to simply maximizing total health, though of course explicit evidence to this end must be available to execute such analyses [40, 44, 45]. Capturing the population-level impact of EBIs requires information on adoption rates within service-delivery settings, as well as across delivery settings and geographic areas [9]. For example, we estimated the potential adoption in primary care of an HIV testing intervention using the proportion of physicians who offer routine HIV tests (e.g., 31.7% of physicians accepting new patients following Affordable Care Act expansion in California, 54.0% of physicians responding to the United States Centers for Disease Control and Prevention’s Medical Monitoring Project Provider Survey) [9, 46, 47]. Consequently, information on current levels of implementation (when the objective is to improve access to existing EBIs) or on feasible levels of implementation (for EBIs not previously implemented) is essential. Recently, Aarons et al. proposed a ranking of the evidence required to evaluate the transportability of EBIs [7]. Extensions to the value of a perfect implementation framework have similarly explored the influence of relaxing assumptions of immediate and perfect adoption [4, 5] to determine whether it is worthwhile to collect more data to reduce decision uncertainty. Another recent extension to this framework has investigated the influence of heterogeneity in adoption across geographical areas [48]. Overly optimistic expectations of the transportability of findings from one setting to another are one of the major causes of unsuccessful policy implementation [49]. Model-based CEA can help the design of implementation strategies by determining the long-term benefits from increasing adoption, reach, or both (Fig. 1 [A]). 
This can be particularly informative when promoting uptake outside of formal healthcare settings to reach different populations via adoption of EBIs in low-barrier, community-based settings. Addressing determinants of health such as income and employment, housing, geography and availability of transportation, stigma or discrimination, and health literacy offers great public health promise by delivering services to people who might not seek treatment through formal care settings or who belong to historically underserved populations. While the Consolidated Framework for Implementation Research can help identify contextual determinants for increasing reach and adoption [50], adapting interventions to different delivery settings or target populations is likely to be more complex and prolonged. Model-based CEA can help estimate the costs and benefits of prevention interventions that promote long-term improvements in public health, whose effects are more difficult to measure than those of short-term clinical interventions.

Planning for Implementation, Scaling Up, and Sustaining Impact

While calls to explicitly account for the costs attributable to implementation strategies are not novel [51], reporting remains uncommon [23]. Proposals of pragmatic approaches are still recent [52, 53], particularly when considering pre-implementation activities that may have a direct impact on scale of implementation [54] or program feasibility and sustainability [55]. In estimating costs for implementing combinations of EBIs in HIV/AIDS for six US cities, we derived public health department costs attributable to planning activities to coordinate implementation in local healthcare facilities [9]. We derived personnel needs and used full-time equivalent salaries based on work we have done with local health departments in determining programmatic costs associated with planning for the adoption of EBIs [56]. Additionally, we scaled personnel needs according to population size in each city to account for the number of healthcare facilities that would need to be targeted by such an initiative [9]. We propose these to be distinct from costs explicitly attributable to implementation activities such as EBI adaptation or ensuring implementation fidelity [6]. Accounting for planning costs in CEA is essential when considering the efforts required to improve adoption of EBIs across different service-delivery settings (e.g., HIV testing in clinical settings versus sexual health clinics or community-based organizations) [51]. By working in close collaboration with decision makers and forecasting the long-term costs and benefits of alternative implementation strategies, CEA can provide valuable information for making implementation decisions, thus helping determine the best fit for a given context.
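The planning-cost derivation described above, in which personnel needs are scaled to city population size, can be sketched as follows. The FTE salary, baseline staffing level, and reference population below are hypothetical placeholders, not the values used in the cited analyses:

```python
# Hypothetical sketch of scaling pre-implementation planning costs by city
# population, as described in the text: planning personnel (FTEs) grow with
# the number of healthcare facilities to coordinate, proxied here by
# population size. All parameters are invented for illustration.

FTE_SALARY = 95_000        # assumed annual fully loaded cost per FTE
BASE_FTES = 2.0            # assumed planning staff for a reference city
REFERENCE_POP = 1_000_000  # reference city population

def planning_cost(population):
    """Annual planning cost, scaling FTEs linearly with population size."""
    ftes = BASE_FTES * population / REFERENCE_POP
    return ftes * FTE_SALARY

for city, pop in [("City A", 650_000), ("City B", 2_300_000)]:
    print(f"{city}: ${planning_cost(pop):,.0f} per year in planning costs")
```

In practice the scaling need not be linear; a cost function estimated from health-department data could capture fixed costs and economies of scale across cities.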

Scale-up Period

Model-based CEA can also extrapolate the effects of shortening the scale-up period to reach a targeted level of implementation. Using dynamic transmission models to capture the impact of achieving herd-immunity levels from mass vaccination serves as an example [57]. While constant, linear scale-up over the implementation period can be used as a simplifying assumption [9, 57–59], model-based CEA can also provide implementation scientists with a sense of the relative value of rapid or staggered patterns of scale-up (Fig. 1 [B]). For instance, implementing physician alerts in electronic medical records to promote adherence to HIV antiretroviral therapy can be rapidly scaled up within a healthcare system, whereas a multi-component intervention requiring training for medical personnel and changes to existing processes may require a more time-consuming, sequential implementation if there are constraints on human resources [9]. If available, evidence on the cost and effectiveness of interventions designed to improve implementation could also be incorporated explicitly [5]. Recent retrospective work by Johannesen et al. has explored the relative gains of eliminating slow or delayed scale-up of prevention in cardiovascular disease [48]. Furthermore, evidence on the uptake of specific health technologies based on diffusion theory could provide a methodological foundation to prospectively model the impact on value provided by different patterns of adoption (and therefore scale-up) in the implementation of EBIs [60].
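The trade-off between rapid, linear, and staggered scale-up patterns can be illustrated by comparing the cumulative coverage each trajectory delivers over the implementation horizon. This is a minimal sketch with invented parameters, ignoring costs and epidemic dynamics:

```python
# Compare hypothetical scale-up trajectories: cumulative coverage (person-time
# at the EBI, relative to target scale) delivered over the implementation
# horizon. Parameters are illustrative only.

HORIZON = 10       # years of implementation
SCALEUP_YEARS = 4  # time to reach target scale under linear scale-up

def coverage(trajectory):
    """Cumulative coverage as a fraction of (target scale x horizon)."""
    return sum(trajectory) / HORIZON

rapid = [1.0] * HORIZON  # immediate, perfect scale-up (an idealized bound)
linear = [min(1.0, (t + 1) / SCALEUP_YEARS) for t in range(HORIZON)]
staggered = [0.0, 0.0, 0.5, 0.5] + [1.0] * (HORIZON - 4)  # phased roll-out

for name, traj in [("rapid", rapid), ("linear", linear), ("staggered", staggered)]:
    print(f"{name}: {coverage(traj):.2f} of maximum achievable coverage")
```

In a full model-based CEA, each trajectory would also carry different implementation costs, and in a dynamic transmission model earlier coverage can yield disproportionate benefits by averting onward transmission.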

Sustaining Impact Over Time

Determining the costs and health benefits of improved sustainment over time can allow decision makers to plan for the necessary effort levels for maintenance of an implementation strategy (Fig. 1 [C]). The concept of sustainment, however, lacks a consensus definition in implementation science, and there is limited research on how best to sustain an effective intervention after implementation [61, 62]. There is also a need for better reporting of strategies facilitating sustainment to reduce this large research-to-practice gap [62, 63]. There has been a growing focus on long-term evaluation for the sustainment of EBIs [41], with the recognition that ongoing interventions may be required to sustain the impact of EBIs [63]. CEA can be used to assess dynamic scenarios, including the timing and frequency of retraining or other sustainment activities, ultimately determining how much it may be worth investing to sustain implementation strategies.
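As a hedged illustration of how a model might weigh retraining frequency against sustainment cost, the sketch below assumes program effectiveness erodes geometrically each month and that retraining fully restores it. The decay rate and retraining cost are invented parameters; the point is only that average effectiveness and sustainment spending can be traded off explicitly.

```python
# Illustrative sketch (not from the paper): effectiveness decays after
# initial implementation unless periodic retraining restores it.
# The decay rate and cost per retraining event are hypothetical.

def avg_effect(months, retrain_every, decay=0.03, cost_per_retrain=10_000):
    """Mean monthly effectiveness and total sustainment cost when
    effectiveness erodes by `decay` per month and retraining resets it
    to full strength. retrain_every=0 means no retraining."""
    eff, total_eff, cost = 1.0, 0.0, 0.0
    for m in range(1, months + 1):
        if retrain_every and m % retrain_every == 0:
            eff, cost = 1.0, cost + cost_per_retrain
        else:
            eff *= 1 - decay
        total_eff += eff
    return total_eff / months, cost

for k in (0, 24, 12, 6):   # 0 = never retrain
    e, c = avg_effect(60, k)
    print(f"retrain every {k or 'inf'} months: avg effect {e:.2f}, cost ${c:,.0f}")
```

Pairing each frequency's cost with its modeled health gains would then yield the incremental cost-effectiveness of more intensive sustainment.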

Interactions between Implementation Components

Of course, capturing the interactions among the above-noted implementation components is another advantage provided by simulation modeling. Working in close collaboration with decision makers, simulation models can be used to assess the long-term impact and value of strategies varying any combination of the populations reached; adoption within and across settings; planning capacity; the scale-up period; and the duration of sustainment (Table 3). Such collaborations can happen during planning for implementation or concurrently with any stage of executing an implementation strategy, supporting adaptive implementation strategies in near real time.
Table 3. Examples of how model-based CEA can support implementation scientific research
Intended populations

● Determining the value of expanding reach of EBIs in the intended population and among different populations or subgroups

● Determining the impact of implementation strategies that aim to reduce health inequities

Service delivery

● Determining the value of expanding adoption in the target delivery setting or across different delivery settings

● Determining the impact of implementing EBIs outside of formal healthcare settings to deliver services to underserved individuals

Planning

● Working with decision makers during pre-implementation to project costs and benefits of different implementation strategies

● Prospectively determining the potential budgetary impact of implementation strategy alternatives

Scaling up

● Determining the impact and value of speeding up implementation within and across delivery settings

● Exploring uncertainty resulting from different patterns in the timing of implementation

Sustaining impact

● Determining the impact of imperfect sustainment and how much it may be worth investing in sustainment efforts, both in terms of required frequency and the extent of required effort

● Exploring the impact of de-implementing existing EBIs

EBIs, evidence-based interventions; CEA, cost-effectiveness analysis


Costs of Implementation Components

Uncertainty surrounding the costs of implementation is a fundamental challenge to the broad implementation of EBIs, particularly in public health [64]. While there is a breadth of frameworks to guide adaptation, there has been little emphasis on how best to estimate the costs of implementation strategies according to the scale of delivery and across diverse settings offering different scopes of services. For instance, the Consolidated Framework for Implementation Research can help identify barriers to implementation and includes cost considerations for EBIs [50]; however, evidence on the costs of implementation components is necessary for CEA to be further incorporated into implementation scientific research (Table 2) [1]. Consistency in adherence to best practices and reporting guidelines such as the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) [65] has been important in helping establish model-based CEA as standard practice for decision-making in healthcare [14]. Similar guidelines for reporting implementation costs could further help promote the use of implementation science with CEA. There are a number of high-quality microcosting examples in both the public health [66] and the implementation science literature [54]. Similarly, some studies have proposed approaches for the standardized reporting of pre-implementation and implementation component costs [52, 54, 55, 67], but these have not yet been widely used or applied to population-level EBIs. Moreover, CEA typically assumes constant average costs (i.e., linear cost functions), yet the functional form of costs can matter, particularly for widespread implementation of EBIs [68]. Assessing economies of scale and scope in delivering individual interventions and combinations of interventions will no doubt be an area of intense inquiry and scrutiny in the coming years as health systems adapt to our new reality.
A study using a regression-based approach to investigate factors determining costs for a large HIV prevention program delivered through non-governmental organizations in India incorporated a diverse set of population and setting characteristics and found decreasing average costs as the scale of delivery increased [69]. A similar approach in local public health agencies across the state of Colorado found evidence suggesting economies of scale in the surveillance of communicable diseases, with average costs one-third lower for high-volume agencies compared to low-volume agencies [70]. Economies of scope offer the potential for improving the impact of public health programs [71–73]. The costs of implementation components for delivering more than one intervention at once may differ and require careful consideration [71]. Determining the value of integrated programs targeting multiple disease areas with CEA can be enhanced by incorporating budget-impact analysis, which can also help build consensus for financing arrangements across different sectors of the health system [74, 75]. As the paradigm for CEA in implementation science shifts to systematically account for the costs of the different components necessary for achieving population-level implementation of EBIs, the potential for model-based CEA to support and complement implementation scientific research in public health will continue to increase.
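The regression-based approach can be sketched with a log-log (Cobb-Douglas style) cost function, where a fitted scale coefficient below one implies economies of scale: average cost falls as delivery volume rises. The data below are synthetic stand-ins, not figures from the cited Indian or Colorado studies.

```python
import numpy as np

# Illustrative sketch: fit total cost C = a * Q^b on synthetic program
# data. A scale coefficient b < 1 implies economies of scale (average
# cost C/Q declines with volume). All data below are invented.
rng = np.random.default_rng(0)
volume = np.array([200, 500, 1_000, 2_000, 5_000, 10_000], dtype=float)
total_cost = 150 * volume ** 0.8 * rng.lognormal(0, 0.05, volume.size)

# Ordinary least squares on the log-log form: log C = log a + b * log Q
b, log_a = np.polyfit(np.log(volume), np.log(total_cost), 1)
avg_cost = np.exp(log_a) * volume ** (b - 1)   # fitted average-cost curve

print(f"scale coefficient b = {b:.2f}")        # b < 1 -> economies of scale
print("fitted avg cost, low vs high volume:",
      round(avg_cost[0]), round(avg_cost[-1]))
```

In applied work the regression would also include the population and setting characteristics the cited studies describe, so that scale effects are not confounded with differences across delivery sites.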

Conclusion

We believe implementation science and cost-effectiveness analysis are vital tools that should be applied in evaluating every meaningful public health initiative. As the current pandemic has made painfully clear, it is imperative for public health scientists to advance a range of new approaches to determine which implementation strategy may provide the greatest public health value and thus promote sustainable and efficient use of limited resources. We conclude by acknowledging that our proposed agenda will require health researchers to continue strengthening partnerships with decision makers to advance how implementation science methods can be incorporated in CEA. Decision makers must continuously account for courses of action shaped by social, political, and economic considerations. Model-based CEA is ideally positioned to play a central role in providing a sound understanding of how well diverse implementation strategies could work, and how we can strive to further advance public health.
References (10 of 73 shown)

1. Fenwick E, Claxton K, Sculpher M. The value of implementation and the value of information: combined and uneven development. Med Decis Making. 2008.
2. McNulty M, Smith JD, Villamar J, Burnett-Zeigler I, Vermeer W, Benbow N, Gallo C, Wilensky U, Hjorth A, Mustanski B, Schneider J, Brown CH. Implementation Research Methodologies for Achieving Scientific Equity and Health Equity. Ethn Dis. 2019.
3. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, Augustovski F, Briggs AH, Mauskopf J, Loder E. Consolidated Health Economic Evaluation Reporting Standards (CHEERS): explanation and elaboration: a report of the ISPOR Health Economic Evaluation Publication Guidelines Good Reporting Practices Task Force. Value Health. 2013.
4. Ibuka Y, Paltiel AD, Galvani AP. Impact of program scale and indirect effects on the cost-effectiveness of vaccination programs. Med Decis Making. 2012.
5. Stuart RM, Wilson DP. Sharing the costs of structural interventions: what can models tell us? Int J Drug Policy. 2020.
6. Schaffer DeRoo S, Pudalov NJ, Fu LY. Planning for a COVID-19 Vaccination Program. JAMA. 2020.
7. Foy R, Sales A, Wensing M, Aarons GA, Flottorp S, Kent B, Michie S, O'Connor D, Rogers A, Sevdalis N, Straus S, Wilson P. Implementation science: a reappraisal of our journal mission and scope. Implement Sci. 2015.
8. Cookson R, Mirelman AJ, Griffin S, Asaria M, Dawkins B, Norheim OF, Verguet S, Culyer AJ. Using Cost-Effectiveness Analysis to Address Health Equity Concerns. Value Health. 2017.
9. McNaghten AD, Valverde EE, Blair JM, Johnson CH, Freedman MS, Sullivan PS. Routine HIV testing among providers of HIV care in the United States, 2009. PLoS One. 2013.
10. Allen P, Pilar M, Walsh-Bailey C, Hooley C, Mazzucca S, Lewis CC, Mettert KD, Dorsey CN, Purtle J, Kepper MM, Baumann AA, Brownson RC. Quantitative measures of health policy implementation determinants and outcomes: a systematic review. Implement Sci. 2020.
