Literature DB >> 36217172

Applying implementation frameworks to the clinical trial context.

Kristian D Stensland1,2, Anne E Sales3,4,5,6, Laura J Damschroder4, Ted A Skolarus7,4.   

Abstract

BACKGROUND: Clinical trials advance science, benefit society, and provide optimal care to individuals with some conditions, such as cancer. However, clinical trials often fail to reach their endpoints, and low participant enrollment remains a critical problem with trial conduct. In these ways, clinical trials can be considered beneficial evidence-based practices suffering from poor implementation. Prior approaches to improving trials have had difficulties with reproducibility and limited impact, perhaps due to the lack of an underlying trial improvement framework. For these reasons, we propose adapting implementation science frameworks to the clinical trial context to improve the implementation of clinical trials.
MAIN TEXT: We adapted an outcomes framework (Proctor's Implementation Outcomes Framework) and a determinants framework (the Consolidated Framework for Implementation Research) to the trial context. We linked these frameworks to ERIC-based improvement strategies and present an inferential process model for identifying and selecting trial improvement strategies based on the Implementation Research Logic Model. We describe example applications of the framework components to the trial context and present a worked example of our model applied to a trial with poor enrollment. We then consider the implications of this approach on improving existing trials, the design of future trials, and assessing trial improvement interventions. Additionally, we consider the use of implementation science in the clinical trial context, and how clinical trials can be "test cases" for implementation research.
CONCLUSIONS: Clinical trials can be considered beneficial evidence-based interventions suffering from poor implementation. Adapting implementation science approaches to the clinical trial context can provide frameworks for contextual assessment, outcome measurement, targeted interventions, and a shared vocabulary for clinical trial improvement. Additionally, exploring implementation frameworks in the trial context can advance the science of implementation through both "test cases" and providing fertile ground for implementation intervention design and testing.
© 2022. The Author(s).

Entities:  

Keywords:  Clinical trials; Consolidated framework for implementation research; Determinants; Implementation mapping; Implementation outcomes; Implementation research logic model

Year:  2022        PMID: 36217172      PMCID: PMC9552519          DOI: 10.1186/s43058-022-00355-6

Source DB:  PubMed          Journal:  Implement Sci Commun        ISSN: 2662-2211


Clinical trials advance science and benefit society and individuals, but suffer from poor implementation. Existing methods to improve clinical trial conduct have had limited effectiveness, perhaps due to the lack of a unifying framework. We address this knowledge gap by adapting implementation science frameworks to the clinical trial context. We provide a worked example of our proposed model, applied to a trial with poor enrollment, to demonstrate the utility of this targeted approach.

Introduction

Clinical trials are critical components of research and healthcare infrastructure, with hundreds of thousands of participants enrolled and billions of dollars invested annually [1]. In addition to advancing science, trials can ensure adequate and even improved care for patients through a “protocol effect,” building infrastructure and disseminating knowledge about standard of care treatments [2]. In fact, for some conditions, such as cancer, many consider enrollment in a clinical trial to be the best possible management [3]. Despite these advantages and investments, clinical trials frequently fail to reach their primary endpoints, commonly miss enrollment goals, and often take longer than anticipated to enroll and complete [4-7]. In these ways, clinical trials can be considered complex evidence-based interventions that offer significant benefits to both individuals and society yet suffer from poor implementation [8]. The clinical trials system could benefit from implementation science approaches to address this evidence-to-practice gap. While there have been prior attempts to improve clinical trials, the resulting interventions have generally not been reproduced, have not led to sustainable improvement, and have rarely been grounded in theory, limiting their generalizability [9]. In other words, clinical trials have suffered from poor implementation and limited improvement efforts, much like other complex evidence-based interventions. By considering clinical trials as complex interventions with poor implementation, the existing knowledge base for assessing and addressing poor implementation of other complex interventions (e.g., smoking cessation, cancer screening) can be applied to the clinical trial context [10, 11]. Building on existing implementation work, rather than establishing entirely de novo techniques for clinical trial implementation, can facilitate the application of evidence-based strategies and frameworks to the trial context.
Adapting implementation frameworks to the clinical trial context can also advance science via new application of a shared vocabulary and improvement models to generate new knowledge. While some frameworks have been applied to aspects of clinical trials, a global consideration of trials as complex interventions through an implementation science lens could significantly advance both the science and practice of clinical trials and the field of implementation science [12, 13]. For these reasons, we applied implementation science frameworks to the clinical trial context as a worked example of the potential opportunity to advance the practice and science of clinical trials and implementation. Specifically, we adapted Proctor’s implementation outcomes framework (IOF) to develop corresponding clinical trial implementation outcomes informing external validity (e.g., acceptability) in addition to internal validity (e.g., reproducibility). Next, we used the Consolidated Framework for Implementation Research (CFIR) to define context and determinants specific to clinical trial implementation. We then mapped contextual determinants to possible Expert Recommendations for Implementing Change (ERIC) strategies to guide implementation interventions. Finally, we used the implementation research logic model (IRLM) as a rigorous tool to facilitate specification, reproducibility, and testable causal mechanisms of the interventions on implementation and clinical trial outcomes [14-17]. Through this worked example applying implementation science frameworks and approaches to the clinical trial context, we hope to bolster a foundation and build capacity for rigorous, evidence-based clinical trial improvement.

Considering clinical trial implementation outcomes

In the context of clinical trials, “outcomes” generally refer to the primary or secondary outcomes of the trial itself, such as overall survival for a cancer treatment trial. To avoid confusion within this paper, we will refer to these as “endpoints” rather than outcomes. While consideration of trials normally focuses on reaching these endpoints, trials must meet other preconditions to achieve this objective. For example, a clinical trial must enroll and retain enough participants to answer the trial’s question. However, these preconditions, and the best ways to meet them, remain poorly defined. Similar to a clinical setting, where client outcomes (e.g., satisfaction, symptomatology) and service outcomes (e.g., effectiveness, safety) are preceded by implementation outcomes (e.g., acceptability, adoption), we suggest clinical trial endpoints rest on certain necessary preconditions well suited as implementation outcomes [15]. As illustrated in Fig. 1, clinical trial endpoints correspond to client outcomes in Proctor’s implementation outcomes framework (IOF): ultimate client-side success, i.e., reaching clinical trial endpoints and improving patient satisfaction, function, and/or symptoms, depends on attaining intermediate service outcomes, which are in turn preconditioned on implementation outcomes.
Fig. 1

Implementation, service, and client outcomes adapted to the clinical trial context

Proctor et al. described eight implementation outcomes in the IOF: acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration, and sustainability [15]. Each of these implementation outcomes aligns with important considerations for clinical trial design and implementation. As shown in Table 1, our worked example proposes measures of each of these outcomes in the trial context. Indeed, some measures are currently considered in trial design and analysis, such as feasibility and fidelity, though these terms are not always used. For example, fidelity to a trial’s intervention is sometimes referred to as “contamination” or “crossover.” Reframing these existing concepts within the implementation science context has the potential to realign the direction of improvement efforts towards existing, evidence-based implementation strategies.
Table 1

Implementation outcomes framework applied to the clinical trial context

Proctor’s implementation outcome — Example in the clinical trial context

Acceptability:
- Perceived existence of equipoise between intervention arms
- Anticipated or possible benefit to a participant over existing options
- Acceptable anticipated side effect profile
- Reasonable participant logistics (e.g., number of clinic visits, distance traveled to the trial site)
- Reasonable additional clinical burden (e.g., minimal additional biopsies or other invasive procedures)
- Additional direct time and financial cost to participants is acceptable to participants

Adoption:
- Proportion of providers offering clinical trials to patients

Appropriateness:
- Question is amenable to a clinical trial
- Trial design is appropriate for the trial question

Feasibility:
- Possible to meet enrollment goals
- Timeline for enrollment and completion is reasonable
- Anticipated effect size is reasonable

Fidelity:
- Amount of intervention group crossover
- Adherence to trial protocol including follow-up

Implementation cost:
- Cost of trial administration
- Cost of trial intervention vs. standard of care (during trial)
- Cost of additional trial staff required
- Cost of additional study components (surveys, labs, scans, biopsies)

Penetration:
- Proportion of eligible patients being offered trial
- Proportion of eligible patients offered trial who enroll in the trial
- Proportion of patients in the global population represented by trial eligibility criteria

Sustainability:
- Maintenance of accrual rates after trial opens
- Sustained physician interest (i.e., physicians continue offering trial to patients throughout the trial period)
- Sustained participant interest throughout the trial period
- Continued provision of standard of care after the trial concludes
Additionally, this may shift focus from retrospective analyses of completed clinical trials to prospective considerations and testable interventions during trial design and implementation, encouraging more efficient clinical trial design. The outcomes serve as both a measure of trial implementation success and as a checklist to encourage consideration of multifaceted factors in trial design and trial site recruitment phases. Shifting assessment of trial success from historical endpoints (i.e., waiting for interim enrollment and endpoint analysis) to consideration of implementation outcomes at the time of trial design and in the early stages of trial implementation could move important assessments up front, saving time and resources for participants, trialists, and sponsors. In other words, implementation outcomes in the trial context may not just measure the implementation success of trials, their use per se may also improve trial success. Finally, clinical trial implementation outcomes could also be used as endpoints in trial improvement studies. For example, a trial of an intervention to improve clinical trial enrollment could benefit from defining the primary endpoints as adoption (the number of providers offering at least one patient enrollment on a clinical trial) and penetration (the proportion of eligible patients offered enrollment on a clinical trial). This would also encourage assessing the factors leading to successful implementation, such as acceptability to clinician trialists. By consistently defining and applying these measures, the body of evidence for trial improvement interventions could be more easily generalized, compared, and consolidated into recommendations for optimal, evidence-based trial conduct.
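The adoption and penetration measures described above are straightforward to operationalize from routine screening data. The following sketch is purely illustrative (the record fields and log structure are our assumptions, not part of the published framework), showing how the two proportions might be computed from a hypothetical per-patient screening log:

```python
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    """One row of a hypothetical trial screening log (illustrative fields)."""
    provider: str    # treating provider who saw the patient
    eligible: bool   # patient met trial eligibility criteria
    offered: bool    # patient was offered enrollment
    enrolled: bool   # patient enrolled on the trial

def adoption(log: list) -> float:
    """Proportion of providers who offered the trial to at least one patient."""
    all_providers = {r.provider for r in log}
    offering = {r.provider for r in log if r.offered}
    return len(offering) / len(all_providers)

def penetration(log: list) -> float:
    """Proportion of eligible patients who were offered the trial."""
    eligible = [r for r in log if r.eligible]
    return sum(r.offered for r in eligible) / len(eligible)

# Hypothetical log: two providers, one of whom never offers the trial
log = [
    ScreeningRecord("dr_a", eligible=True,  offered=True,  enrolled=True),
    ScreeningRecord("dr_a", eligible=True,  offered=True,  enrolled=False),
    ScreeningRecord("dr_a", eligible=False, offered=False, enrolled=False),
    ScreeningRecord("dr_b", eligible=True,  offered=False, enrolled=False),
]
print(adoption(log))     # 0.5: 1 of 2 providers offered the trial
print(penetration(log))  # ~0.67: 2 of 3 eligible patients were offered
```

Defining these endpoints computationally, up front, is one way such measures could be audited prospectively during trial conduct rather than reconstructed retrospectively.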

Clinical trials as test cases of implementation outcomes

In addition to trials benefiting from the use of implementation outcomes, the trial context can contribute to implementation science as a helpful setting for exploring nuanced outcomes and the relationships between them. Indeed, multi-center trials offer the advantage of tracking what happens across different settings: every center implements a trial somewhat differently, and trial implementation is rarely identical across centers, raising implementation and causality questions. Next, we provide more concrete examples of implementation outcomes and concepts in the trial context to help clarify these otherwise abstract concepts. For example, there can be considerable overlap between the implementation outcomes of appropriateness (perceived fit of an innovation) and acceptability (perceived palatability of an innovation), raising the question of why distinguishing between them is important [15]. For a clinical trial, the distinction between these concepts is both clear and highly impactful. A trial is appropriate if a given clinical trial design is the correct way to answer a question. For example, testing an intervention in schools by randomizing individual students to two interventions would not be appropriate; a better design would be a cluster randomized trial. In contrast, the acceptability of a trial reflects the palatability of the selected interventions to potential participants or providers. A trial may have low acceptability because one of the interventions is highly toxic, or because many required return visits make the trial logistically challenging for participants. A trial may also have low acceptability to providers due to the perceived superiority of one intervention (i.e., a lack of perceived equipoise in the trial).
This may explain the difficulty of enrolling participants with cancer in a radiation therapy versus surgery trial (an appropriate design), as surgeons may not be willing to randomize patients to non-operative care and radiation oncologists may be unwilling to randomize patients to surgery (unacceptable to providers) [18]. Similarly, though a trial of an antibiotic versus placebo for a blood infection may be an appropriate design to demonstrate the effectiveness of the antibiotic, it would be considered highly unethical and thus not an acceptable design. Non-trial settings may also have poorly characterized relationships between implementation outcomes (e.g., between acceptability and appropriateness) [15]. In this regard, the trial context comprises a contained setting facilitating the exploration of explicit implementation outcome trade-offs. In non-trial interventions, a trade-off may only be about implementation cost, i.e., if more resources are available, problems with feasibility or penetration may be easily addressed. However, other trial implementation outcome trade-offs are more complex. To make a trial more feasible, trial eligibility criteria could be expanded, but this may require larger sample sizes to meet the efficacy endpoint of the trial. As a result, the trial may take longer to enroll, and sustainability may suffer as providers lose interest in the trial, resulting in waning adoption and penetration. Alternatively, decreasing the number of follow-up visits in a trial may enhance acceptability to participants and decrease implementation cost, but the trial results may be less useful or reliable, reflecting lower appropriateness of the trial. While these trade-offs likely exist in other implementation settings, they are often not as visible or immediate as in clinical trials.
Studying these relationships and tradeoffs between outcomes within the clinical trials setting could allow for more rapid study and development of frameworks that could then be highlighted and addressed in the implementation of other evidence-based practices and the field more broadly.
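The feasibility trade-off sketched above, where broader eligibility criteria dilute the anticipated effect and inflate the required sample, can be made concrete with a standard two-arm sample size calculation. This sketch uses the usual normal-approximation formula for comparing two means; the specific effect sizes are hypothetical illustrations, not values from the paper:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta: float, sigma: float = 1.0,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per arm for a two-sample comparison of means,
    using the standard normal-approximation formula:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta) ** 2
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2
    return ceil(n)

# Narrow eligibility: homogeneous population, larger expected effect
print(n_per_arm(delta=0.5))  # 63 per arm

# Broadened eligibility: more heterogeneity, diluted expected effect
print(n_per_arm(delta=0.3))  # 175 per arm
```

Nearly tripling the required accrual for a modest dilution of the expected effect illustrates why expanding eligibility to improve feasibility can lengthen enrollment timelines and strain sustainability.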

Applying a determinants framework to the trial context

Optimizing trial success through these implementation outcomes requires the identification of implementation determinants. This can aid in identifying barriers and facilitators to trial success and lead to selecting trial improvement interventions in a rigorous, theory-based way to enable testable causal hypotheses advancing implementation science. The Consolidated Framework for Implementation Research (CFIR) is a robust, frequently used determinants framework [16]. Its 37 constructs across 5 domains represent key components of clinical trials as complex interventions likely influencing implementation success. Our suggested adaptation of each construct is shown in Table 2. The overarching CFIR domains containing these constructs reflect multiple levels of trial implementation and connect to the adapted Proctor outcomes described above. We propose considering these domains as follows.
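The determinant-to-strategy selection step described above can be thought of as a lookup from assessed CFIR barriers to candidate ERIC strategies. The mapping below is an illustrative sketch for the trial context, not a validated crosswalk: the ERIC strategy names are real, but the pairing of barriers to strategies is our hypothetical example, and a real application would draw on the CFIR-ERIC matching tool and local stakeholder input.

```python
# Hypothetical mapping from assessed CFIR barrier constructs to
# candidate ERIC implementation strategies (illustrative only).
CANDIDATE_STRATEGIES = {
    "relative priority": ["Alter incentive/allowance structures",
                          "Conduct local consensus discussions"],
    "goals and feedback": ["Audit and provide feedback"],
    "knowledge and beliefs": ["Conduct educational meetings",
                              "Develop educational materials"],
    "engaging": ["Identify and prepare champions"],
}

def select_strategies(assessed_barriers):
    """Pool candidate strategies for the barriers found in a site assessment."""
    selected = []
    for barrier in assessed_barriers:
        selected.extend(CANDIDATE_STRATEGIES.get(barrier, []))
    return selected

# A site assessment tying poor enrollment to two inner-setting barriers
print(select_strategies(["relative priority", "goals and feedback"]))
```

In practice, the selected strategies would then be specified and linked to implementation and trial outcomes through the implementation research logic model, yielding testable causal hypotheses.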
Table 2

Adaptation of the CFIR domains and constructs to the clinical trial context

Construct — CFIR construct description — Example of adaptation to the clinical trial context

I. Intervention characteristics

A. Intervention source
CFIR description: The group that developed and/or visibly sponsored use of the innovation is reputable, credible, and/or trustable.
Trial context:
- Perception of providers about whether the trial protocol, or experimental intervention, was developed at the provider’s institution
- Perception of patients about whether patients were involved in the design of a trial

B. Evidence strength and quality
CFIR description: Stakeholders’ perceptions of the quality and validity of evidence supporting the belief that the intervention will have desired outcomes.
Trial context:
- Providers’ or patients’ perception of the quality and validity of existing evidence for the experimental intervention. This may include data from pilot trials or trials of earlier phases. This contributes to the perceived equipoise of a trial
- Providers’ or patients’ perception of the quality and validity of existing evidence for the comparison arm, i.e., “standard of care”

C. Relative advantage
CFIR description: Stakeholders’ perception of the advantage of implementing the intervention versus an alternative solution.
Trial context: Providers’ or patients’ perception of the experimental intervention versus the comparison (e.g., standard of care) arm or alternative intervention. This contributes to the perceived equipoise of a trial

D. Adaptability
CFIR description: The degree to which an intervention can be adapted, tailored, refined, or reinvented to meet local needs.
Trial context: The degree to which a trial protocol can be adapted to local needs or fit into local practice

E. Trialability
CFIR description: The ability to test the intervention on a small scale in the organization, and to be able to reverse course (undo implementation) if warranted.
Trial context:
- The ability to switch to standard of care or other treatments if desired
- The ability to pilot on a smaller scale before launching a larger trial

F. Complexity
CFIR description: Perceived difficulty of implementation, reflected by duration, scope, radicalness, disruptiveness, centrality, intricacy, and number of steps required to implement.
Trial context: Perceived logistical complexity (number of return visits, labs), invasiveness (extra procedures including biopsies), and duration of follow-up required for trial participation

G. Design quality and packaging
CFIR description: Perceived excellence in how the intervention is bundled, presented, and assembled.
Trial context: Perceived excellence in the trial materials and advertising

H. Cost
CFIR description: Costs of the intervention and costs associated with implementing the intervention including investment, supply, and opportunity costs.
Trial context:
- Cost of trial to patients, institution, and/or insurers
- Impact of trial participation on provider reimbursement

II. Outer setting

A. Patient needs and resources
CFIR description: The extent to which patient needs, as well as barriers and facilitators to meet those needs, are accurately known and prioritized by the organization.
Trial context:
- The extent to which a trial reflects local needs, for example, regional disease prevalence
- The extent to which desires of patients/trial participants (e.g., measuring endpoints, desired toxicity cut points, ability to participate) are known and used in trial design

B. Cosmopolitanism
CFIR description: The degree to which an organization is networked with other external organizations.
Trial context: The degree to which the trial site organization is networked with other external organizations, such as cancer cooperative groups (e.g., SWOG) or other research networks

C. Peer pressure
CFIR description: Mimetic or competitive pressure to implement an intervention, typically because most or other key peer or competing organizations have already implemented or are in a bid for a competitive edge.
Trial context: Competitive pressure to run and enroll participants in trials, particularly to compete with other investigators or institutions for NCI designations and institutional rankings

D. External policy and incentives
CFIR description: A broad construct that includes external strategies to spread interventions, including policy and regulations (governmental or other central entity), external mandates, recommendations and guidelines, pay-for-performance, collaboratives, and public or benchmark reporting.
Trial context:
- Government and insurance company reimbursement for clinical trials
- Financial incentives from cooperative and government grants for trial accrual

III. Inner setting

A. Structural characteristics
CFIR description: The social architecture, age, maturity, and size of an organization.
Trial context:
- The history of clinical trials at an institution, including past performance and influence on other centers
- The social architecture, age, maturity, and size of an organization

B. Networks and communications
CFIR description: The nature and quality of webs of social networks and the nature and quality of formal and informal communications within an organization.
Trial context:
- The nature and quality of social networks and communications within an organization, such as frequency of division and department meetings, tumor boards, and other communication networks
- The physical proximity of clinical and clinical research spaces (i.e., opportunities for informal conversation about trials)

C. Culture
CFIR description: Norms, values, and basic assumptions of a given organization.
Trial context: Culture of research and trials within an institution and department

D. Implementation climate
CFIR description: The absorptive capacity for change, shared receptivity of involved individuals to an intervention, and the extent to which use of that intervention will be rewarded, supported, and expected within their organization.
Trial context:
- Institutional and departmental support/incentives for trial participation, and available resources for trial implementation
- Institutional and departmental recognition for trial participation

D1. Tension for change
CFIR description: The degree to which stakeholders perceive the current situation as intolerable or needing change.
Trial context: The degree to which providers perceive there to be a need for a trial (e.g., poor health outcomes for a condition)

D2. Compatibility
CFIR description: The degree of tangible fit between meaning and values attached to the intervention by involved individuals, how those align with individuals’ own norms, values, and perceived risks and needs, and how the intervention fits with existing workflows and systems.
Trial context:
- The degree of fit between a trial and the provider’s norms, values, and perceived risks and needs (i.e., the provider’s practice)
- The degree of fit between a trial and the participant’s norms, values, and perceived risks and needs

D3. Relative priority
CFIR description: Individuals’ shared perception of the importance of the implementation within the organization.
Trial context:
- Providers’ shared perception of the importance of the trial within the organization
- Providers’ shared perception of the importance of conducting any clinical trial

D4. Organizational incentives and rewards
CFIR description: Extrinsic incentives such as goal-sharing awards, performance reviews, promotions, and raises in salary, and less tangible incentives such as increased stature or respect.
Trial context: Provider incentives such as career advancement, salary support, and increased reputation for trial participation

D5. Goals and feedback
CFIR description: The degree to which goals are clearly communicated, acted upon, and fed back to staff, and alignment of that feedback with goals.
Trial context: The degree to which trial goals (e.g., enrollment goals) are clearly communicated, acted upon, and fed back to staff and aligned with organizational goals

D6. Learning climate
CFIR description: A climate in which: a) leaders express their own fallibility and need for team members’ assistance and input; b) team members feel that they are essential, valued, and knowledgeable partners in the change process; c) individuals feel psychologically safe to try new methods; and d) there is sufficient time and space for reflective thinking and evaluation.
Trial context: A climate in which: a) trial leaders (e.g., trialists and institutional leaders) express their own fallibility and need for team members’ assistance and input; b) trial team members feel that they are essential, valued, and knowledgeable partners in the change process; c) individual trialists and providers feel psychologically safe to try new methods; and d) there is sufficient time and space for reflective thinking and evaluation

E. Readiness for implementation
CFIR description: Tangible and immediate indicators of organizational commitment to its decision to implement an intervention.
Trial context: Tangible indicators of organizational commitment to the trial

E1. Leadership engagement
CFIR description: Commitment, involvement, and accountability of leaders and managers with the implementation.
Trial context: Commitment, involvement, and accountability of institutional leaders with the trial

E2. Available resources
CFIR description: The level of resources dedicated for implementation and on-going operations, including money, training, education, physical space, and time.
Trial context: The level of available resources for trials, including staffing (e.g., research nurse, administrative staff), logistics (e.g., IRB and protocol support), time, and contributed resources (e.g., provision of space)

E3. Access to knowledge and information
CFIR description: Ease of access to digestible information and knowledge about the intervention and how to incorporate it into work tasks.
Trial context:
- Availability of clinical trial training including design, proposals, statistics, logistics, and trial-specific training
- Availability of trial information, such as which trials are open, eligibility criteria, and protocols

IV. Characteristics of individuals

A. Knowledge and beliefs about the intervention
CFIR description: Individuals’ attitudes toward and value placed on the intervention as well as familiarity with facts, truths, and principles related to the intervention.
Trial context: Providers’ and patients’ attitudes toward and value placed on clinical trials and research, as well as familiarity with the science of clinical trials, considerations of equipoise, and trial ethics

B. Self-efficacy
CFIR description: Individual belief in their own capabilities to execute courses of action to achieve implementation goals.
Trial context: Belief of potential participants and involved clinicians (recruiters) that the trial will complete and/or have an impact

C. Individual stage of change
CFIR description: Characterization of the phase an individual is in, as he or she progresses toward skilled, enthusiastic, and sustained use of the intervention.
Trial context: Individual trialist or provider willingness to fulfill their trial role with skill and enthusiasm

D. Individual identification with organization
CFIR description: A broad construct related to how individuals perceive the organization, and their relationship and degree of commitment with that organization.
Trial context: Strength of commitment of the investigator and trial to the sponsoring institution

E. Other personal attributes
CFIR description: A broad construct to include other personal traits such as tolerance of ambiguity, intellectual ability, motivation, values, competence, capacity, and learning style.
Trial context: Characteristics of those involved with implementing the trial (i.e., research team, providers), including risk tolerance, acceptance of being the subject of an experiment, personal attitudes towards science, trust in authority, etc.

V. Process

A. Planning
CFIR description: The degree to which a scheme or method of behavior and tasks for implementing an intervention are developed in advance, and the quality of those schemes or methods.
Trial context:
- The degree to which trial recruitment plans are developed in advance, including feasibility planning
- The degree to which deviations to trial protocols (e.g., changes in standard of care) are anticipated and adaptations developed in advance

B. Engaging
CFIR description: Attracting and involving appropriate individuals in the implementation and use of the intervention through a combined strategy of social marketing, education, role modeling, training, and other similar activities.
Trial context: Attracting and involving appropriate providers and patients in trials through a combined strategy of social marketing, education, role modeling, training, and other similar activities

B1. Opinion leaders
CFIR description: Individuals in an organization who have formal or informal influence on the attitudes and beliefs of their colleagues with respect to implementing the intervention.
Trial context: Key leaders in the cancer center, trials support unit, and within a department/division

B2. Formally appointed internal implementation leaders
CFIR description: Individuals from within the organization who have been formally appointed with responsibility for implementing an intervention as coordinator, project manager, team leader, or other similar roles.
Trial context: Individuals from within the organization who have been formally appointed with responsibility for leading conduct of a trial, including the trial principal investigator (PI), trial site PI, trial coordinator, or trials team coordinator

B3. Champions
CFIR description: “Individuals who dedicate themselves to supporting, marketing, and ‘driving through’ an [implementation]”, overcoming indifference or resistance that the intervention may provoke in an organization.
Trial context: Individuals who dedicate themselves to supporting, marketing, and “driving through” a trial, including advertising a trial to providers and patients, advocating for trials at meetings such as tumor boards, and pushing trial materials through review boards

B4. External change agents
CFIR description: Individuals who are affiliated with an outside entity who formally influence or facilitate intervention decisions in a desirable direction.
Trial context: Other stakeholders who guide trial conduct, such as trial cooperative group members, trial PIs from other institutions for multisite trials, and patient advocacy group members

C. Executing
CFIR description: Carrying out or accomplishing the implementation according to plan.
Trial context: Carrying out or accomplishing a trial according to plan

D. Reflecting and evaluating
CFIR description: Quantitative and qualitative feedback about the progress and quality of implementation accompanied with regular personal and team debriefing about progress and experience.
Trial context: Extent to which enrollment audits, feedback/reports, trial status reports, and data safety monitoring board feedback are evaluated and reflected on by trial leaders and teams

Intervention characteristics

The “intervention characteristics” domain applies to both the interventions tested in the trial and the development of the trial protocol itself. The tested intervention, such as an experimental drug, has accompanying characteristics such as the evidence strength for the drug. For example, a drug with proven efficacy in the metastatic cancer setting may be more acceptable as the intervention in a trial in the locally advanced cancer setting. Additionally, the trial itself is an intervention with its own characteristics potentially affecting implementation success. These factors include the selection of a comparison arm, the quality of trial materials and advertising, and how adaptable a trial protocol is for each trial site.

Outer setting

The outer setting exerts considerable pressure on trial-side stakeholders. These factors include relationships with other institutions, clinical trial networks (e.g., the cancer clinical trial cooperative group SWOG), and industry groups such as pharmaceutical companies. These determinants apply both on an institutional basis (e.g., institutional incentives for trial enrollment) and to individual providers (e.g., pressure to compete with peers and advance careers through international reputation).

Inner setting

In addition to relationships between institutions, characteristics within institutions may factor heavily in the successful implementation of trials. This domain is of particular importance because support for trials can vary widely between institutions. It may also be easier to adapt aspects such as organizational incentives or available resources at the local level to improve the success of trials. The inner setting can be conceptualized as applying to the institution itself (e.g., an academic medical center) or, for larger institutions, to a department within the system (e.g., the department of urology).

Characteristics of individuals

The characteristics of trialists, their teams, and individual providers may influence trial success. These include personal characteristics (e.g., beliefs about specific interventions and enthusiasm for clinical trials) and relational aspects (e.g., identification with the organization or individual sponsoring a given trial). The characteristics of individual potential trial participants are also highly important. Most important may be potential participants’ beliefs about the intervention, particularly the perceived likelihood of benefit, along with their views on the merits of clinical trials and their familiarity with research. Other personal attributes, such as the value placed on science, trust in the medical field and institutions, and cultural influences, likely have a large impact on the likelihood of an individual enrolling in a trial.

Process

The process domain may be most important to improving the success of clinical trials, as it incorporates mechanisms for improvement and consolidates important factors from other domains. Constructs from this domain are more likely to be adaptable and may be modified through tailored trial improvement strategies.

Implementation strategies to foster trial implementation and success

Once barriers to trial success are identified, trial improvement strategies should be purposefully selected to optimize effectiveness. The Expert Recommendations for Implementing Change (ERIC) compilation presents 73 implementation strategies that can be adapted to the clinical trial context [17]. These strategies have been linked to specific CFIR determinants, permitting the identification of potential high-yield implementation strategies for a given context [19]. Linking these strategies to determinants, and creating actionable interventions based on them, has the potential to target improvement interventions to each trial’s context, rather than relying on generic strategies that may not address the true root causes of trial problems.

Linking determinants, outcomes, and strategies in a process model for trials

While these frameworks have implied connections, explicitly linking them together can organize the frameworks, identify targeted mechanisms, and suggest solutions in pursuit of trial improvement. For this purpose, the final step of our worked example applies the implementation research logic model (IRLM) as a process model [14]. A process model can “provide practical guidance in the planning and execution of implementation endeavors and/or implementation strategies to facilitate implementation,” in this case supporting and framing trial improvement efforts [20]. For our application of the IRLM, we link trial outcomes with our adapted Proctor’s outcomes and CFIR determinants and identify possible implementation strategies to address these from the ERIC compilation [15-17]. Because we evaluate trial-side outcomes (analogous to clinical/patient outcomes in the original IRLM) such as poor enrollment first, we have arranged our IRLM to begin with the trial outcome, followed by the cause of this outcome (mechanism), the implementation outcome, the CFIR construct, and then a potential implementation strategy to address these barriers (Fig. 2).
Fig. 2

Adapted implementation research logic model (IRLM) applied to the clinical trial-side outcome of poor enrollment. *Note: arrows indicate inferential flow, not causal representations

Our proposed model serves both to explain the connections between the frameworks as they apply to issues with trials and to make the causal mechanisms explicit, leading to questions that can be answered in specific targeted studies. In many ways, trial coordinating centers and site teams are constantly “solving problems” like poor enrollment and adapting to improve trial implementation. Our approach enables this existing behavior to be reframed and documented more intentionally, tracking what worked to solve these problems (i.e., implementation strategies) and how the protocol was adapted to improve implementation outcomes (i.e., adaptation), promoting generalizability and broadening clinical trial practice. To demonstrate how this model may be applied, we consider worked examples of trial assessment and improvement.
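The inferential ordering just described (trial outcome first, then mechanism, implementation outcome, CFIR construct, and candidate strategy) can be sketched as a simple data structure. This is a hypothetical illustration only: the class and field names are invented, and the values mirror the poor-enrollment example in Fig. 2.

```python
# Hypothetical sketch: one row of the adapted IRLM as a data structure,
# ordered by inferential flow (not causation). All names are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class IRLMRow:
    trial_outcome: str           # observed trial-side problem
    mechanism: str               # inferred cause of that outcome
    implementation_outcome: str  # adapted Proctor outcome implicated
    cfir_construct: str          # determinant identified on assessment
    eric_strategy: str           # candidate improvement strategy

row = IRLMRow(
    trial_outcome="poor enrollment",
    mechanism="providers unaware of open trials and eligible patients",
    implementation_outcome="low adoption",
    cfir_construct="Process: reflecting and evaluating",
    eric_strategy="audit and provide feedback",
)
# As in Fig. 2, arrows indicate inferential flow, not causal representations.
print(" -> ".join((row.trial_outcome, row.mechanism,
                   row.implementation_outcome, row.cfir_construct,
                   row.eric_strategy)))
```

Making each link an explicit field is what turns an intuition about a struggling trial into testable questions, since each field can be assessed and each link evaluated in a targeted study.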

Sample use cases

Designing a trial for successful implementation

First, the implementation of a trial should be considered while the trial is designed and the protocol is written. In a hypothetical trial of a new cancer drug, for example, the acceptability to providers and potential participants of both the new drug and the comparison (including the relative advantage of the new drug in terms of expected efficacy and toxicity) could be formally assessed through surveys, interviews, or focus groups. This could help predict adoption of the trial by providers and penetration to patients, and may also directly increase participation by improving providers’ and patients’ perception of involvement in the design of the trial (i.e., the CFIR’s intervention source construct). The acceptability of logistical components could also be considered. Limiting the number of visits (e.g., for lab draws or additional imaging) may improve recruitment (i.e., penetration) and retention (i.e., fidelity), but how this affects the ability to assess endpoints and estimate the efficacy of interventions (i.e., appropriateness) must be considered. While some of these issues may already be indirectly addressed in trial design, explicitly considering these concepts allows them to be measured and evaluated so that ideal tradeoffs for different contexts can be developed.

Struggling enrollment

Next, we consider a prostate cancer trial suffering from low enrollment. This low enrollment could stem from multiple root causes (Fig. 2). For our hypothetical trial, say we query our trial records and find that only 2 of 10 oncologists are enrolling patients onto trials (i.e., low adoption by providers). We conduct a survey or interviews and find that providers are largely unaware of the existing trials at our institution and of how many of their patients are eligible for them, indicating an issue of reflecting and evaluating. These providers may also be sensitive to comparisons with their peers within the institution or at other sites. A reasonable ERIC strategy in this case may be to audit and provide feedback, perhaps by sending monthly emails to oncologists detailing how many participants they enrolled in trials alongside their peers’ performance. We could then evaluate how many providers are offering the trials (adoption) and the proportion of eligible patients enrolling (penetration) after the rollout of audit and feedback. In this example, we identified and targeted the root cause of poor enrollment. If we had identified other issues, we would likely have selected other strategies. An important first step, for example, would be to assess whether our trial enrollment goal is feasible. If there are few cases of prostate cancer in the area (i.e., a CFIR outer setting barrier reflecting low feasibility), we could consider broadening eligibility criteria (the ERIC strategy of promote adaptability) or adding more trial sites (change service sites).
Alternatively, if provider adoption is high but penetration is low due to difficulties identifying eligible patients (CFIR Process: Executing), developing an electronic medical record patient screening system (ERIC: facilitate relay of clinical data to providers and remind clinicians) or hiring staff to help with trial pre-screening (ERIC: create new clinical teams) may be better-suited interventions to improve enrollment. The deliberate directionality of assessing root causes first is key to successfully designing and targeting interventions. If we started by designing an intervention without assessing determinants, for example one targeted at increasing acceptability to patients (e.g., patient-designed information brochures), we would not be directly addressing the root cause of low provider adoption. As a result, we may not optimize trial enrollment and may not see a maximal return on investment.
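The two outcomes distinguished in this example can be computed directly from routine trial records. A minimal sketch follows; the provider counts mirror the worked example above, while the patient counts and function names are invented for illustration.

```python
# Hypothetical sketch: computing two adapted implementation outcomes
# (adoption and penetration) from simple trial records.

def adoption(enrolling_providers: int, total_providers: int) -> float:
    """Proportion of providers enrolling patients onto the trial."""
    return enrolling_providers / total_providers

def penetration(enrolled_patients: int, eligible_patients: int) -> float:
    """Proportion of eligible patients who actually enroll."""
    return enrolled_patients / eligible_patients

# 2 of 10 oncologists enrolling -> low adoption, the root cause targeted above.
print(f"adoption:    {adoption(2, 10):.0%}")      # prints "adoption:    20%"
print(f"penetration: {penetration(6, 120):.0%}")  # prints "penetration: 5%"
```

Separating the two measures matters for strategy selection: low adoption with adequate penetration among adopters points toward provider-facing strategies such as audit and feedback, whereas high adoption with low penetration points toward patient-identification strategies such as EMR screening.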

Implications for trial improvement research

This potential mistargeting may also explain why some prior trial improvement interventions have seemed ineffective: they may not be asking the right questions or solving the right problems. Mistargeting an improvement intervention for a single trial may waste resources, but when developing generalizable interventions for trial improvement, mistargeting may bias estimates of improvement efficacy toward the null, inappropriately suggesting interventions are ineffective when in fact they are simply not addressing the right problems. For example, consider a randomized study-within-a-trial (SWAT) in which trial sites are randomized to receive supplemental research staff or usual research staff, aiming to increase trial enrollment [21]. Such a study may show no beneficial effect of hiring additional study staff, suggesting that hiring staff is not effective at improving enrollment. However, this may be because some of the trial sites had already reached full penetration, i.e., they already enrolled a high proportion of eligible patients. If the study had instead included only trial sites with low adoption, especially where this was due to few available resources, more trial staff may have been highly beneficial. Because these contextual elements are not assessed, however, the transferability of such findings is limited. By specifying the characteristics of trial sites and “diagnosing” determinants of trial success, we can design and evaluate trial improvement interventions for various contexts to maximize value. In addition to informing quality improvement and prospective trial improvement studies, our worked example and proposed model can also serve as a roadmap for data-driven health services research relevant to trials. For example, understanding the interplay between cancer incidence and trial availability can inform projections of trial feasibility through the outer setting for prostate cancer trials.
This approach could both inform the feasibility of opening a trial at a given site and highlight areas that may be scientifically underserved (i.e., with a high disease burden but few trials) where trials may thrive [22].

Practical application of the frameworks

Applying these frameworks to diagnose trial problems and design improvements will likely require a multi-pronged approach. Context-specific determinants of trial success could be assessed through a mix of quantitative and qualitative assessment of trial site characteristics, staffing, regional and national policy, and investigator and patient characteristics, tailored to specific contexts. For example, the acceptability or relative priority of a trial intervention may be best explored through interviews with patients and providers, whereas trial adoption and penetration would be better assessed using trial management software and facility medical records. Many components of the process domain would likely require interviews with trialists and trial staff and direct observation of the trial setting, including the planning and enrollment phases. Some assessments of trial determinants may be limited by the generally poor granularity of historical trial data, rapid turnover of trial staff, and the high time and resource cost of developing and administering interviews and surveys to trial staff and patients. Developing methods to apply these frameworks to trials efficiently and expeditiously could allow for streamlined assessment and rapid-cycle improvement of trial conduct.

Value to implementation science

Considering clinical trials in the context of implementation science could both improve trials and advance the science of implementation. In addition to providing concrete examples of abstract framework constructs as noted above, the clinical trial context has components and characteristics serving as a real-world laboratory for implementation research. First, clinical trials have multiple modifiable levels suitable for implementation interventions that may be altered more easily than in other contexts. Clinical trials already require involvement and review at micro (patient-provider trial review), meso (e.g., institutional review board), and macro (e.g., national trial cooperative group) levels, providing opportunities for changes and comparisons among and between these levels. For example, implementation scientists studying adaptation could compare different sites implementing the same clinical trial [23]. Modifications could be made and evaluated at multiple levels with relative ease to compare strategies targeted at different levels, for example altering macro factors (e.g., specifying implementation factors in a trial protocol) versus micro-level factors (e.g., comparing two methods of identifying potential trial participants). Additionally, clinical trials can support efficient implementation research through existing processes, the quantity of trials, and the speed of outcome generation. Since the same trial protocol (evidence-based intervention) is being implemented at each site, departures from protocols (fidelity) are already documented for data safety monitoring boards, and a key endpoint (enrollment) is already recorded; testing implementation in trials may therefore be more efficient than in other contexts. Trial protocols already incorporate differences from trial to trial, expert trial staff are employed at many trial sites, and modifications to trial processes are expected, making targeted and measured variation in trial implementation a logical next step.
Further, iterative design for implementation is made easier by the sheer number of trials. Thousands of trials open annually, with over 4700 trials registered on ClinicalTrials.gov opening after January 1, 2022, in oncology alone. Strategies developed as one trial launches could be incorporated into upcoming trials, with an array of characteristics and settings to choose from, resulting in rich inference for implementation research. The speed of some trial endpoints and outcomes could also permit rapid inference and iteration for implementation research. For example, trial enrollment is a continuously recorded leading indicator of trial success. Outcomes like adoption and penetration could be measured continuously while trial implementation studies are conducted, also allowing for advanced implementation trial designs with potentially enhanced inference and efficiency (e.g., crossover designs, sequential multiple assignment randomized trials (SMART)) [24]. In all, the clinical trial context could likely benefit from implementation science approaches, but it also has great potential as an efficient laboratory for implementation research. These refined approaches and frameworks could then be transferred to other evidence-based practice settings.
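As a minimal sketch of enrollment serving as a continuously monitored leading indicator, observed accrual can be compared against the rate the protocol requires. All numbers and the 0.8 flag threshold here are illustrative assumptions, not values from the article.

```python
# Hypothetical sketch: cumulative enrollment as a leading indicator of trial
# success, comparing the observed accrual rate to the rate needed to reach
# the protocol's target on schedule. Numbers and threshold are illustrative.

def accrual_ratio(enrolled: int, months_open: float,
                  target_n: int, planned_months: float) -> float:
    """Observed enrollment rate divided by the rate required to finish on time."""
    return (enrolled / months_open) / (target_n / planned_months)

# A trial planning 200 participants over 24 months, with 30 enrolled at month 6:
ratio = accrual_ratio(enrolled=30, months_open=6, target_n=200, planned_months=24)
print(f"accrual ratio: {ratio:.2f}")  # enrolling at about 60% of the needed pace
if ratio < 0.8:  # illustrative trigger for a determinants assessment
    print("flag: diagnose determinants (e.g., adoption vs. penetration)")
```

Because this indicator is available continuously rather than only at trial close, it could trigger a determinants assessment and strategy selection early enough for adaptive designs like those mentioned above.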

Conclusion

Clinical trials are complex interventions with evidence-based benefits but frequently suffer from poor implementation. Adapting implementation science frameworks to the clinical trial context can foster a shared vocabulary, improving the design, implementation, testing, science, and practice of clinical trials. A consolidated, systematic, logical approach to clinical trial improvement appears warranted to address return-on-investment concerns for the clinical trials enterprise and to deliver on the promises of advancing science, improving patient care, and fostering public health.
