
FAST: A Framework to Assess Speed of Translation of Health Innovations to Practice and Policy.

Enola Proctor1, Alex T Ramsey2, Lisa Saldana3, Thomas M Maddox4,5, David A Chambers6, Ross C Brownson7,8,9.   

Abstract

The 17-year time span between discovery and application of evidence in practice has become a unifying challenge for implementation science and translational science more broadly. Further, global pandemics and social crises demand timely implementation of rapidly accruing evidence to reduce morbidity and mortality. Yet speed remains an understudied metric in implementation science. Prevailing evaluations of implementation lack a temporal aspect, and current approaches have not yielded rapid implementation. In this paper, we address speed as an important conceptual and methodological gap in implementation science. We aim to untangle the complexities of studying implementation speed, offer a framework to assess speed of translation (FAST), and provide guidance to measure speed in evaluating implementation. To facilitate specification and reporting on metrics of speed, we encourage consideration of stakeholder perspectives (e.g., comparison of varying priorities), referents (e.g., speed in attaining outcomes, transitioning between implementation phases), and observation windows (e.g., time from intervention development to first patient treated) in its measurement. The FAST framework identifies factors that may influence speed of implementation and potential effects of implementation speed. We propose a research agenda to advance understanding of the pace of implementation, including identifying accelerators and inhibitors to speed.
© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022.


Keywords:  Implementation science; Metrics; Rapid cycle research; Speed; Translational science

Year:  2022        PMID: 35669171      PMCID: PMC9161655          DOI: 10.1007/s43477-022-00045-4

Source DB:  PubMed          Journal:  Glob Implement Res Appl        ISSN: 2662-9275


Introduction

The protracted time from discovery to practical application (Lenfant, 2003), known as the 17-year gap, is a unifying challenge for translational research and the "raison d'être" of implementation science. This gap persists: a 2021 study of cancer control research found that time to translation averaged 15 years (Khan et al., 2021). The span between generating and applying evidence prolongs human suffering, thwarts the public health benefit of scientific discoveries (Sung et al., 2003), and prompts stakeholders to question implementation success (Smith et al., 2020). The discovery-to-practice gap is strikingly evident, and untenable, in times of public health crisis such as the COVID-19 pandemic, when speed is all-important. Coordinated effort and funding led to rapid, population-scale implementation of vaccine development, ventilation protocols, personal protective equipment production, and home monitoring programs (Ball, 2021). Yet science-based public health measures such as mandates for physical distancing, masking, and vaccine uptake continue to be debated vigorously, thwarting the timeliness of policy implementation (Chernozhukov et al., 2021). As global pandemics illustrate, public health crises demand timely implementation of rapidly accruing evidence to reduce morbidity and mortality. Other social interventions, such as those addressing the opioid crisis, child maltreatment, and refugee relief, require fast implementation as well; the COVID-19 pandemic illustrates that speed can be achieved when multiple stakeholders agree on the need for it. Proctor and Geng (2021) call for a lane of science that studies rapid uptake of proven interventions.
There is now national recognition of the need to move faster: the National Academies recently formed a Standing Committee for the CDC Center for Preparedness and Response, a forum to accelerate research translation in public health emergencies (The National Academies of Sciences, Engineering, n.d.). Despite this increased prioritization, speed remains an understudied metric in implementation science. Prevailing definitions of implementation lack a temporal aspect (Smith et al., 2020), and, increased use of hybrid effectiveness-implementation trials notwithstanding, current research approaches have not accelerated implementation (Glasgow & Chambers, 2012; Kilbourne et al., 2016; Riley et al., 2013). The Dissemination and Implementation Research in Health funding opportunity announcements (National Institutes of Health, 2019a, b, c) highlight potential "to speed the translation of research into practice" as an "innovation" review criterion. The National Cancer Institute-supported Implementation Science Centers for Cancer Control emphasize the need to study "rapid cycle" implementation (Oh et al., 2021), and funding opportunity announcements for the Clinical and Translational Science Award (CTSA) Program call for "clear and meaningful metrics and outcomes" to assess the speed of research translation and training (National Institutes of Health, 2018). Yet the implementation science literature reflects scant attention to speed (Kessler & Glasgow, 2011), leaving us ill-equipped to answer the question: What are the variables and conditions most likely to impact speed of implementation? With dedicated investigation and measurement of speed across a range of innovations and contexts we could, in time, begin to understand how long implementation should take, from discovery to incorporation into routine public health practice or clinical care. Implementation speed is a thorny issue.
Its investigation could involve (1) speed of translational processes (e.g., from landmark paper to guideline, guideline to implementation), (2) pace of a given implementation effort (e.g., progression through implementation stages), (3) timeliness of prevention efforts and health care delivery (e.g., to stem more severe illness), or (4) speed of achieving clinical outcomes (e.g., improved functioning after receiving evidence-based care). These orientations range from a largely individual focus (e.g., identifying a critical care need and intervening faster) to a population focus (e.g., translating and scaling up evidence to inform assessment and management practices in critical care). This paper merges the first two orientations, focusing on the speed of moving from synthesized, actionable evidence-based recommendations (e.g., a guideline) warranting implementation to the point at which that evidence is identifiable as being used in standard practice, where contextually appropriate. Cognizant of potential trade-offs in prioritizing implementation speed, we present both sides of an argument about the optimal pace of the implementation process: implementation occurs too slowly, versus implementation should not be rushed. We discuss the balance of risks and benefits of slow or quick implementation. Of course, certain interventions should not be implemented given uncertain evidence or lack of evidence of clinical or population health benefit (Norton et al., 2017). Our work rests on the assumption that the interventions to be accelerated are those with established benefit for the general population or an identified subpopulation.

The Debate: Fast or Slow?

The Case for Speed

Several factors compel expeditious rollout of new discoveries, three of which are among the most frequently debated issues in research translation: (1) rapid health and social crises, (2) the typically reactive nature of systems, and (3) healthcare and social inequities. First, and evident globally, new epidemics and rapid disease spread require fast response. Traditional research paradigms accommodate slow-moving, incremental improvements (Sverdlov, 2018), but crippling pandemics, epidemics, and other pressing societal needs demand rapid advances. Adoption of new innovations carries perceived and actual risk, but failure to act, or acting too slowly, risks significant harm. Research shows that governmental delays in implementing evidence-based virus mitigation policies are linked to disease spread, with attendant massive health, economic, and social tolls (Chernozhukov et al., 2021; Walker et al., 2020). In contrast, coordinated efforts, ingenuity, and "supercharged" funding yield clear payoffs, as with the development of novel messenger RNA-based vaccines and their unprecedented rate of distribution (Ball, 2021). Importantly, though, speed of vaccine distribution varies dramatically across communities, highlighting that health equity must be a priority in accelerated translation (Jean-Jacques & Bauchner, 2021). Moreover, a mutual desire across systems to reach a solution, along with provision of funding, is not sufficient for rapid change, as evidenced by the ongoing efforts to address the opioid epidemic and its consequences across multiple service sectors, including criminal justice, child welfare, mental health, and addiction (Morrow et al., 2019). Second, as reflected in the challenge of making prevention common practice, social services, healthcare, and public health remain largely reactive. Optimally, societies would have armamentaria of new innovations, practice guidelines, and solutions in design and testing queues, ready for quick deployment.
However, new solutions tend to be prioritized in crises, such as the opioid epidemic or dementia among a rapidly aging population (Khanassov et al., 2014). Moreover, efforts to understand implementation often wait until innovations have a strong evidence base (Colditz & Emmons, 2018; Proctor et al., 2013). A more anticipatory approach—designing solutions for future implementation—may narrow substantially the research-to-practice gap (Brown et al., 2017). Third, long-persisting racial, economic, and geographical inequities in prevention and care exact an unnecessary toll on human health and well-being. Notably, the “time to implementation” of public health programs and clinical sciences is prolonged among populations from disadvantaged communities. Many individuals from these groups lag in receipt of COVID-19 vaccines, clinician advice against smoking, breast and colorectal care, and lung cancer screening eligibility, utilization, and follow-up care (Khan et al., 2021; Levinson, 2017; Ndugga et al., 2021; Sosa et al., 2021; Ward et al., 2004). Similarly, although the poor social determinants of health associated with child maltreatment and subsequent involvement in the child welfare system have been documented for decades, receipt of evidence-based practice for families demonstrating these risk-factors is lacking (Hunter & Flores, 2021). Designing innovations for implementation in communities where the need is greatest, rather than adapting innovations originally designed for higher-resourced contexts and settings, may accelerate scale-up and the reach of prompt diagnosis and treatment among minoritized populations (Mohr et al., 2017, 2018).

The Case for Pause

Despite recognized urgency to accelerate research translation, several questions signal caution when weighing fast-versus-slow. Does haste make waste? Is fast inherently risky? Can rapid research yield strong evidence, absent replication? Can moving fast co-exist with sustainment, or does this represent a direct trade-off? Does rapid research increase inequities? Does the “life lesson” of Tony award winner Andre De Shields (2019)—slowly is the fastest way to get where you are going—apply to implementation in health and other global domains? The health mandate to “do no harm” cautions against risk, prompting the question, “What is the minimum level of evidence needed for implementation?” (Ramsey et al., 2019). Once again, lessons from the COVID-19 pandemic are useful: rapid development, testing, and approval of COVID-19 vaccines triggered an onslaught of questions about safety and effectiveness (Khuroo et al., 2020). For implementation of health innovations to be successful, safety and effectiveness need to pass the scrutiny of both researchers and the population to be served. Yet generally speaking, “evidence-in-progress” is all we have, reflected in the U.S. Food and Drug Administration’s system of evidence determination: diffusion proceeds while evidence continues to accrue. All researchers believe that better evidence is on the horizon, awaiting the next trial. But real-world problems demand action. The adage “More research is needed…”, while true, can slow science’s capacity to improve the human condition. How soon should we act on evidence, even as it continues to evolve? (Ramsey et al., 2019). And how can we speed the process of translating evidence to practice, fostering change amid complex health and human service settings that are not “built for speed”?

Speed Limit Guidance

Innovative designs promise research efficiencies and faster implementation (Glasgow et al., 2012). Frameworks such as Designing for Accelerated Translation (DART) (Ramsey et al., 2019) and methodological advances such as hybrid designs, user-centered designs, rapid ethnography, and market viability assessment (Curran et al., 2012; Hamilton & Finley, 2019; Proctor, McKay, et al., 2021; Proctor, Toker, et al., 2021) can shorten time from idea to implementation (Vindrola-Padros et al., 2021). We view iterative and concurrent approaches to evidence generation and implementation as practical, representative of real-world needs, and useful in accelerating time-to-benefit. Even when faster translation is neither prudent nor safe, we argue for the importance of systematically measuring the pace of research translation and understanding the influences on and impact of implementation speed. Such research will inform, if not resolve, the debate about fast-versus-slow. Accordingly, we aim to untangle complexities in the study of implementation speed and offer a framework to assess speed of translation (FAST) as a key metric in implementation science. FAST offers potential referents and observation windows to structure assessments of speed and identifies potential influences on and effects of implementation pace. Finally, we propose a research agenda to advance understanding of the pace of implementation, including research to identify accelerators and inhibitors of speed. As noted throughout the paper, the concept of speed cuts across the care continuum (prevention to palliation) and settings. Given its visibility and global urgency, several of our illustrations highlight the COVID pandemic response, although we anticipate that the key concepts and proposed research agenda for studying implementation speed will apply broadly across many other domains and service settings. 
As a general principle, innovations with sufficient evidence of effectiveness and community or end-user demand, combined with mitigated costs and risks of acting versus not acting on the evidence—as theorized in the DART framework (Ramsey et al., 2019)—are appropriate for prioritizing implementation speed, regardless of the domain.

Complexities of Speed: Who, What, and How?

Several complexities beset studying speed of research translation. We discuss three that are essential for conceptual and methodological advancement: stakeholders’ priorities for speed, referents for measurement, and metrics.

Who Cares About Speed and Why Do They Care?

To whom is the pace of research translation relevant? To whom is it a priority? Table 1 contrasts several stakeholder groups.
Table 1

Stakeholder perspectives and selected priorities on the speed of research translation

Stakeholders | Perspectives and priorities (sample questions)

Intervention developers, trainers, and purveyors: How long until the innovation is adopted?

Clinicians: How long will the innovation take to learn? How long to reach competence? When can the innovation be used?

Clients and patients: How long until the innovation is available? How long until improvement is seen?

Administrators: How long is the change process? How quickly will the new innovation become routine?

Payers: How long until return on investment?

Policy makers: Is the innovation ready at the time when decisions are being made? Can we implement the innovation and demonstrate improvement in time for the re-election cycle?

Communities: How long until users of the innovation are reached? How long until coverage rates are adequate?

Advocates: Does rapid research affect health equity? How long until equity is realized?

Researchers (current): How long does it take to translate evidence to practice?

Researchers (proposed): How long will each stage of research translation take for this innovation? How can we optimize the speed of intervention delivery upon identifying effectiveness? How can we better measure the speed of change? What factors will impact speed? What strategies will enhance speed? How do we increase speed for disadvantaged groups? What effects did speed at both the translational research and applied implementation levels have on the overall impact of the innovation?

Note: Sample questions are not necessarily mutually exclusive to a stakeholder group.

Stakeholders' perspectives, preferences, and priorities for speed are likely to differ, perhaps even within a given stakeholder group. For instance, community-based stakeholders often move "at the speed of trust" (Covey & Merrill, 2006), justifiably reluctant to partner with unfamiliar research teams to adopt innovative programs that may not meet an identified need in the community. At the same time, community health providers often cite the slow pace of research agendas and grant timelines as a key barrier to collaboration (Carter-Edwards et al., 2021). When meaningful research-practice partnerships coalesce around community needs, an increased priority on implementation speed may enhance engagement and the relevance of research for community partners. As another example, healthcare consumers and advocates may view drug approval as too slow, while regulators focused on safety may work to slow the pace. Timing and timeliness may also be viewed in unique ways by policy makers, who are often most concerned about research being available at the time when decisions are being made (Smith et al., 2022). Indeed, Bullock et al. (2021) identified "timing/sequencing" as one of eight categories of policy-related determinants of implementation. Researchers should explicate the stakeholder perspectives captured and interpret data in the context of those perspectives.

On What Referent is Speed Being Measured and Analyzed?

The referent of speed (what is being measured) also varies, as demonstrated in Table 2. One referent is the phase of implementation. The plethora of phase models within implementation science reflects a prevailing assumption that practice change occurs in sequenced, dynamic patterns. Hence, studies may focus on the pace of completing stages of implementation (e.g., from Engagement through Competency in the Stages of Implementation Completion model; Saldana et al., 2020), progressing through the phases of an implementation process (e.g., from Exploration through Sustainment in the EPIS framework; Moullin et al., 2019), or moving through cyclical models such as Plan-Do-Study-Act (PDSA) phases (Taylor et al., 2014).
Table 2

Potential referents of speed, to be measured per intervention

Speed of what? | Examples

Completing phases of the implementation process: How long should the planning-for-change phase last? What are the optimal lengths of the exploration phases? How long does it take to attain organizational readiness to adopt new interventions or programs?

Attainment of implementation outcomes: How quickly is provider or system adoption of innovations attained? How much training time is needed to attain fidelity to intervention protocols? How quickly can innovations penetrate groups of health care users or community populations to achieve targeted reach? How quickly does the innovation become sustained or institutionalized?
Speed may capture "time-to-outcome attainment." Time to achieving implementation outcomes (e.g., adoption), the most proximal outcomes (Proctor et al., 2009), may function as rate-determining factors in the ultimate speed of implementation. Thus, studies may measure time from an intervention's availability to its acceptance by providers and patients, time required to train providers to deliver interventions with fidelity, or time to scale-up for penetration within a health system or a community. Some implementation outcomes take longer to attain, and some may be necessary pre-conditions for other outcomes; e.g., implementation feasibility may depend on funding acquisition or new pricing policies. We have scant data on the typical length of time from an intervention's availability to achievement of specific implementation outcomes, such as providers' awareness and decision to adopt, but recent unpublished analyses suggest a high degree of variation depending on the program being implemented (Alley et al., 2022). In early implementation (e.g., engagement and planning), we need to assess which implementation outcomes take longer to achieve (e.g., obtaining resources to offset the cost barriers to implementation) than others (e.g., reaching acceptability through use of a champion or communication campaign).
Akin to intervention mapping, an implementation team can assess speed, anticipate timing, and prioritize earlier efforts on slower-moving challenges. A common example from healthcare settings is the adoption of a new innovation that requires modifications to the electronic health record system. Because of the effort needed to program these changes, this implementation activity might be prioritized up front, before other seemingly more essential implementation activities such as hiring and training. Similarly, because funding can be challenging to obtain, implementers often wait to learn if they have obtained funding prior to connecting with key stakeholders for implementation, recognizing that delays between engaging with stakeholders and moving to action often undercut engagement and motivation.

Observation Periods: From When-to-When?

Specifying observation periods for data collection is a standard expectation in reporting research methods. Similarly, in reporting the rate or time of research translation, studies need to explicate their observation windows. Khan et al. (2021) used the observation period "time from publication to implementation of practice guidelines," derived from Balas and Boren (2000). We also encourage reporting of more "downstream" observation periods, including "time from actionable evidence-based recommendation to completion of an implementation effort making use of that evidence." Hence, implementation pace could be reported as time from intervention development to first provider training, to first patient treated, or to agency-level sustainment. Brookman-Frazee et al. (2016) measured the slope of adoption quarterly, within pre-determined time increments. Quantitative measurement of adoption rates over time, particularly in relation to other sites or prior adoption efforts, can prove helpful. Table 3 below highlights various domains of measurement, with sample metrics.
Table 3

Measurement of speed

Domains for measuring speed | Example metrics

Speed in the implementation process

Time elapsed to achieve a predefined implementation milestone: number of days from starting provider training to the first person receiving the intervention.

Time elapsed to attain a predefined outcome (implementation, service system, or clinical outcome): number of months to attain 60% of eligible providers delivering the intervention following clinic adoption.

Implementation progress between predefined time periods: number of implementation steps completed or outcomes attained in 6 months.

Rate of progress (or changes in slope) over time or between milestones: % increase in sites adopted in the first 6-month period vs. the second 6-month period; visual depiction (i.e., curve) of % increase in providers engaged in the 6 months prior to vs. the 6 months subsequent to a readiness assessment.

Pace of iterative development or improvement: time elapsed (in days) from start to end of the 1st PDSA cycle, 2nd PDSA cycle.

Speed in the translation of research

Time spent within a translational stage (and time saved in subsequent iterations within that stage): number of months to develop the first versus second iteration of the intervention.

Time to advance from one translational stage to another: number of months from intervention development to efficacy testing in real-world settings (e.g., from Stage I to Stage III in the NIH Stage Model for Behavioral Intervention Development).
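Two of the Table 3 metrics (time elapsed to a milestone, and rate of progress between periods) reduce to simple date arithmetic once event dates and adoption counts are recorded. The following is a minimal sketch of how such metrics might be computed; the function names, dates, and counts are invented for illustration and are not from the paper.

```python
from datetime import date

def days_to_milestone(start: date, milestone: date) -> int:
    """Time elapsed (in days) from one implementation event to another,
    e.g., from the start of provider training to the first person treated."""
    return (milestone - start).days

def period_growth(adopted_counts: list) -> list:
    """Percent increase in cumulative sites adopted between consecutive
    observation periods (e.g., successive 6-month windows)."""
    return [
        100.0 * (later - earlier) / earlier
        for earlier, later in zip(adopted_counts, adopted_counts[1:])
    ]

# Hypothetical example: training began Jan 10, 2022; first patient treated Mar 1, 2022.
print(days_to_milestone(date(2022, 1, 10), date(2022, 3, 1)))  # 50

# Hypothetical cumulative sites adopted at the end of each 6-month period.
print(period_growth([10, 25, 30]))  # [150.0, 20.0]
```

A slowing growth rate across periods (here, 150% followed by 20%) is the kind of change in slope that Table 3 suggests reporting, ideally alongside a visual adoption curve.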

A Proposed Framework to Assess Speed of Translation (FAST)

Implementation science enjoys a wealth of frameworks and conceptual models (Tabak et al., 2012), some focusing on implementation processes as key to sustainment (Shelton & Lee, 2019). Yet very few (the Stages of Implementation Completion being a notable exception; Chamberlain et al., 2011) explicitly guide assessment of sequential progress toward implementation. Moreover, extant taxonomies of implementation outcomes do not address speed of attainment. The challenges and uncertainties associated with accelerating implementation efforts warrant a guiding framework. Figure 1 below depicts our proposed Framework to Assess Speed of Translation (FAST). FAST can guide research to inform: (1) an initial set of parameters and metrics for capturing and reporting speed, including identification of the referent (i.e., speed of what?), endpoints (i.e., speed from when to when?), and outcomes to assess speed of implementation and its change over time; (2) factors that affect speed, including those of the innovation, context (e.g., demand, need), adopters, and implementation strategies; and (3) the effects of speeding implementation.
Fig. 1

Depiction of the determinants of implementation pace in the framework to assess speed of translation (FAST)

Factors in our proposed FAST framework align well with other widely used frameworks that do not focus as explicitly on speed, including the Interactive Systems Framework (Wandersman et al., 2008), the Consolidated Framework for Implementation Research (Damschroder et al., 2009), and the Active Implementation Frameworks (Fixsen et al., 2005). The influence of these existing frameworks on FAST is evident in its focus on speed with reference to multiple stakeholder groups (i.e., varying perspectives on speed), the specific innovation and the context in which it is to be implemented (i.e., innovation and adopter factors manifesting as accelerators and inhibitors of speed), and the importance of implementation capacity and other implementation drivers (i.e., determinants and strategies for building speed). The FAST framework aims to build on these and other implementation science frameworks to promote and facilitate the study of factors that influence speed of implementation and the effects of speed on important outcomes across diverse contexts.

A Research Agenda on Speed

We propose a five-pronged research agenda to advance understanding of implementation speed: (1) describe speed and develop metrics, (2) examine innovation, adopter, and contextual influences on speed, (3) identify and test how specific implementation strategies can accelerate speed, (4) assess the effect of implementation speed on key outcomes, and (5) develop designs for testing speed.

Describe Speed and Develop Metrics

The literature currently provides no systematic review of the speed of implementation (Smith et al., 2020). Debates about the merits of speed notwithstanding, the field would benefit, at minimum, from better specification and reporting of implementation speed. A first priority is detailing how fast phases, outcomes (at intermediate and late stages), and implementation processes are achieved. This requires the field to better specify and report the perspectives (i.e., speed from whose viewpoint), referents (i.e., speed in terms of what), and measurement (i.e., speed defined how) of the speed of translation. This descriptive research will yield valuable information but, more importantly, will set the stage for the exploratory and analytic studies required to test influences on speed and effects of speed. An excellent example is provided by Huebner and colleagues, who calculated time to achieving fidelity in delivering Sobriety Treatment and Recovery Team (START) programs for families with co-occurring child maltreatment and substance use disorders (Huebner et al., 2015).

Examine Influences on Speed

What factors influence implementation speed? Similar to a prior call for sustainment research (Proctor et al., 2015), studies should explore such correlates of speed as features of the innovation, characteristics of adopters, and features of the implementation context, as follows.

Features of the Innovation

Diffusion theory has long suggested that properties of innovations affect their rate of uptake, with Rogers (2003) positing that compatibility, relative advantage, complexity, and cost affect adoption. Thus, we can form testable hypotheses that some interventions, policies, and programs may be inherently faster to adopt and deliver. Questions to be addressed about innovation features include: Are interventions with certain features (e.g., designed and packaged with the end-user in mind) adopted more quickly than others? Would modifying certain features of the intervention affect the pace of implementation? Emerging evidence suggests that patterns of adoption and penetration vary across evidence-based interventions (Brookman-Frazee et al., 2016). Moreover, an innovation's developmental state appears to influence the pace of implementation, with formative implementations being less efficient, though not necessarily less effective (Saldana et al., 2020). Researchers should map and report innovation-specific adoption curves, reflecting rollout pace across various phases, benchmarks, and iterations. User-centered design and stakeholder engagement may produce innovations that are faster to implement. There are also likely to be differences in speed of implementation for highly incentivized or mandated versus voluntary innovations. For example, a service that is covered by Medicare and serves as a quality indicator for a health system (e.g., mammography screening) is more likely to be rapidly adopted than a voluntary innovation (e.g., encouraging employees to follow new healthy lifestyle guidelines).

Adopter Characteristics: Organizational, Provider, and Patient

Studies can identify the speed with which specific systems, organizations, or communities implement innovations, applying Rogers’ (2003) characterizations of early and late adopters, or quick versus slow implementers. Risk tolerance is dynamic and likely varies across stakeholders of any human service system undergoing change. While personality traits of providers and system leaders can remain stable, an entrepreneurial lens suggests that risk tolerance can increase with experience and support (Proctor, McKay, et al., 2021; Proctor, Toker, et al., 2021). Likewise, organizational climate, culture, infrastructure, and experience with innovation adoption vary and are malleable (Glisson, 2007), although individuals may adopt change faster than systems (Klein & Kozlowski, 2000). Systems with embedded implementation staff and resources likely adopt new interventions more quickly than systems that lack such internal resources or that rely on external consultation. For example, the Department of Veterans Affairs initiated the Quality Enhancement Research Initiative (QUERI) to support implementation and later adopted implementation facilitators to expedite and enhance implementation of evidence-based practices (Ritchie et al., 2021). Research should capture the risk-reward balance in organizations’ willingness to adopt versus maintain the status quo. The effects of implementation capacity/capital (Neal & Neal, 2019; Wandersman et al., 2008), along with a range of other key implementation drivers (Kaye et al., 2012; Torrey et al., 2012), also warrant further study to understand how individuals and organizations leverage prior experience to improve the efficiency of subsequent implementation efforts.
Further, the Dynamic Sustainability Framework (Chambers et al., 2013) and, more specifically, the concept of bridging factors (Lengnick-Hall et al., 2021) highlight the strength of connections across delivery system levels, emphasizing the importance of specific relational structures across levels and the role of formal arrangements. Implementation supports via bridging factors may include financial investment, contract requirements, or legislative mandates. The unit of implementation is also likely to influence the speed of translation. For example, a set of evidence-based interventions to promote vaccination uptake is likely to be implemented more quickly in a small number of clinics than across a larger, more diverse set of clinics statewide (Dang et al., 2020; Dearing & Cox, 2018). Does prior success in adopting an innovation lead to a more successful second attempt? Conversely, does prior implementation failure lead to later caution? By assessing adopter characteristics as they relate to implementation speed, investigators can compare the rates at which different provider groups or agencies accrue sufficient capability, opportunity, and motivation for change.

Features of Context: Pull, Capacity, and Urgency

Contexts and circumstances for innovation implementation may also affect pace. Demand for innovations, and for their swift implementation, intensifies in public health crises; accordingly, necessary resources and support may become available to speed intervention deployment. In contrast, implementation may be slower when demand is not apparent or when implementing an innovation carries clinical or financial risk (Ramsey et al., 2019). Implementation researchers must better understand the demand for interventions. Treatment and program developers are strong in “pushing out” discoveries but weaker in cultivating “pull” (i.e., the market demand for an innovation) (Proctor, McKay, et al., 2021; Proctor, Toker, et al., 2021) or recognizing current pull (Orleans, 2007). Push-pull tension may be related to “capacity”, or systems’ potential to deliver value (Curry, 2000; Orleans et al., 1999). Quality improvement efforts often seek to build capacity, with lessons for implementation science to prioritize end users’ needs, motives, and preferences. Consistent with resource allocation theory (Bower, 2018), capacity is influenced by the number of other interventions and projects already being implemented within a given context, which may prompt de-implementation considerations. Fiscal resources are another influential feature of context. Importantly, the total costs of implementation include providing the intervention itself (e.g., services billable to insurance, equipment, materials) as well as maintaining an effective implementation infrastructure (e.g., ongoing training and coaching, data systems, contracts with developers). Costs of intervention implementation vary and often are obscure (Proctor et al., 2019; Saldana et al., 2022). Adoption pace may be unaffected when costs are low, may slow when implementation costs are high or funding streams are sporadic, and may accelerate when intervention delivery is incentivized. Brookman-Frazee et al. (2016) found accelerated ramp-up in response to fiscal mandates to deliver evidence-based interventions. Understanding the demand for and costs of an innovation may lay groundwork for long-term implementation success without compromising quality or sustainability (Chambers et al., 2013; Luke et al., 2014). Finally, policy mandates provide context for implementation but often do not come with the infrastructure necessary for rapid on-the-ground implementation. For example, the Family First Prevention Services Act (FFPSA; P.L. 115-123), enacted in 2018, authorized federal funding for mental health and substance abuse prevention services and in-home parent skills programs in an effort to reduce entries into foster care. Yet three years into the process, only a limited number of states have approved FFPSA plans (Healthy Families America, 2021), and the number of evidence-based practices reviewed and approved for funding, while growing, remains too small to meet the scale and scope of a federal mandate.

Active Efforts to Generate Speed: Can Implementation Strategies Accelerate Implementation?

Implementation strategies are the “how to” for moving guidelines, policies, and practices into adoption and use (Curran, 2020; Proctor et al., 2009). While published compilations (Powell et al., 2012, 2015) list arrays of implementation strategies, ranging from provider training and data dashboards to checklists, protocols, policy initiatives, and organizational change methods, few studies describe the time required to use such strategies or the time between starting to use a particular strategy and ultimate implementation success. Calculating the time to impact of implementation strategies is a research priority. Studies should examine pace as a function of specific implementation strategies such as various training approaches, stakeholder engagement, task shifting, and champion messages. Clear implementation outcomes need to be defined to identify the duration between implementation strategies and those outcomes (e.g., the time from training to serving the first client versus the time from training to achieving fidelity in program delivery). Although partnerships have been shown to be essential facilitators of implementation efforts, they require time to shape and solidify (Proctor, McKay, et al., 2021; Proctor, Toker, et al., 2021), raising important questions: Does collaboration slow things down? Or does stakeholder engagement enhance relevance, thus preventing downstream stagnancy and lag? Answers to these questions may shift the goal from “going faster” to “going farther, faster”.
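As an illustration of interval reporting with a common start event but different outcome referents, the sketch below computes, for three hypothetical sites with invented milestone dates, both the time from training to first client served and the time from training to achieving fidelity:

```python
from datetime import date
from statistics import median

# Per-site milestone dates for a hypothetical rollout:
# (training completed, first client served, fidelity benchmark met).
sites = {
    "site_a": (date(2021, 1, 4), date(2021, 2, 1), date(2021, 6, 14)),
    "site_b": (date(2021, 1, 4), date(2021, 3, 15), date(2021, 9, 1)),
    "site_c": (date(2021, 2, 8), date(2021, 2, 22), date(2021, 7, 5)),
}

# Two durations per site, sharing a start event (training) but
# differing in outcome referent.
to_first_client = [(first - trained).days
                   for trained, first, _ in sites.values()]
to_fidelity = [(fid - trained).days
               for trained, _, fid in sites.values()]

print(median(to_first_client))  # 28
print(median(to_fidelity))      # 161
```

The gap between the two medians illustrates why the choice of outcome referent must be reported: "speed" measured to first client served and "speed" measured to fidelity can differ by months at the same sites.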

Does Speed Matter and If So, What is Its Impact?

Finally, this proposed research agenda includes understanding the outcomes and impacts of faster versus slower research translation. Can a faster pace of implementation produce sustainable change? Can accelerated implementation yield stakeholder satisfaction and quality outcomes? Or might we learn, akin to Goldilocks, that there is “too fast,” “too slow,” and “just right”? More nuanced questions and data may reveal the optimal pace of implementation for different innovations, with different stakeholder groups, at different implementation phases, and under different resource contexts. Answering these and other questions will advance the growing line of research on precision implementation and inform decisions of when, how, and under what circumstances faster implementation is warranted.

Designs for Testing Speed

Understanding the speed of research translation, its determinants, and its outcomes requires distinct measurement and analytical tools (Guastaferro & Collins, 2019). Data must be gathered over time and analyzed for trends and non-linear patterns. Such data may be captured retrospectively or concurrently from ongoing studies. As in measuring sustainability, capturing speed requires establishing optimal observation periods (Proctor et al., 2015), defined checkpoints, and short- and long-term outcomes that indicate whether speed increases or decreases. These measurements should be relevant to the level of implementation being evaluated for speed. Adoption speed may be most efficiently studied within ongoing implementation projects via natural experiments (Glasgow et al., 2012; Petticrew et al., 2005). Practice-based research can yield “tacit knowledge” or “colloquial evidence”: pragmatic information based on direct experience and action in practice (Kothari et al., 2012; Sharma et al., 2015). Researchers can readily leverage observational studies of implementation processes within their natural context, capturing time-related data along with features of context. Hybrid effectiveness-implementation designs (Curran et al., 2012) are intended to speed knowledge development about interventions and their implementation, incorporating questions about implementation into studies of effectiveness (i.e., Type 1) or simultaneously testing the impact of one or more implementation strategies alongside the effectiveness of one or more interventions (i.e., Type 2). Other methods for accelerating research translation include rapid ethnographic assessment (Hamilton & Finley, 2019; Reisinger, 2019; Sangaramoorthy & Kroeger, 2020), rapid qualitative syntheses (Brown-Johnson et al., 2020), and rapid cycle research (National Cancer Institute, 2019; Sangaramoorthy & Kroeger, 2020).
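Because some sites may not reach an outcome within the observation period, time-to-event methods that accommodate right-censoring are a natural analytic fit for speed data. The following minimal Kaplan-Meier sketch (our own illustration over invented data; applied work would use a full survival-analysis package) estimates the probability that a site has not yet achieved fidelity by each event time:

```python
def kaplan_meier(times, reached):
    """Kaplan-Meier estimate of the probability that a site has NOT yet
    reached the implementation outcome by each event time.

    times: observation time per site (e.g., months since training).
    reached: True if the outcome occurred at that time, False if the
             site was censored (observation ended without the outcome).
    """
    # Sort by time; at tied times, process events before censorings.
    data = sorted(zip(times, reached), key=lambda x: (x[0], not x[1]))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, event in data:
        if event:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, round(surv, 3)))
        at_risk -= 1
    return curve

# Months to fidelity for six hypothetical sites; two still under
# observation (censored) at months 6 and 12.
times = [4, 6, 6, 9, 12, 12]
reached = [True, True, False, True, True, False]
print(kaplan_meier(times, reached))
# [(4, 0.833), (6, 0.667), (9, 0.444), (12, 0.222)]
```

Treating not-yet-implemented sites as censored rather than excluding them avoids the bias of summarizing speed only among sites that happened to finish quickly.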

Conclusion

This paper has weighed the arguments for and against accelerating the translation of evidence into practice and offered a conceptual framework for assessing the speed of translation. Research yields many public health and clinical interventions with solid evidence, low risk, and strong adoption potential that have yet to be implemented at scale. Interventions still on the shelf reflect unrealized value and provide the ultimate argument for speed. “Fast-tracking” such innovations from research evidence to use in practice is critical. Conversely, some interventions currently in use have weak evidence of effectiveness and carry high costs, signaling misuse of resources (e.g., funds, effort). “Flat-lining” such innovations, that is, halting their implementation pending more evidence or de-implementing them altogether, may be necessary. Such interventions provide a cautionary tale and a counterpoint to the argument for speed. A key goal is to distinguish between these typologies more quickly, so that decisions about implementation and de-implementation can be made efficiently. Still other interventions require complex implementation strategies, training capacity, and infrastructure for their delivery. Here, efficiencies can be found to improve the pace of achieving these pre-conditions and, in turn, population health benefit. We propose a framework for assessing the speed of implementation along with an agenda for research on speed. The FAST framework identifies multiple determinants of the speed of translation, thereby informing a research agenda for the study of speed in implementation science. Executing this agenda requires careful conceptualization and measurement of speed indicators, intervals, and correlates, as well as rigorous testing of how long various implementation strategies take. Researchers have advocated for a reduction in the persistent 17-year evidence-to-practice gap (Lenfant, 2003).
While the field of implementation science has grown in capacity and in the quality and quantity of its work, explicit attention to speed has scarcely moved from a conceptual goal to a target for improvement. We have lacked careful, explicit study of the determinants of implementation speed, as well as an agenda for building a more robust knowledge base that reduces the time from discovery to widespread use. In a global pandemic, research cannot ignore the need for rapid response. Systems must move nimbly within dynamic contexts and the changing course of disease, and research must provide guidance on this process. This article is a first step toward that important endeavor. Accelerating speed requires careful analysis and rigorous research. Need implementation take 15 or 17 years? Can and should we move faster? How can research translation and implementation of evidence-based interventions be accelerated? It is time to accelerate research on speed.