Lisa C Welch, Andrada Tomoaia-Cotisel, Farzad Noubary, Hong Chang, Peter Mendel, Anshu Parajulee, Marguerite Fenwood-Hughes, Jason M Etchegaray, Nabeel Qureshi, Redonna Chandler, Harry P Selker.
Abstract
INTRODUCTION: The Clinical and Translational Science Awards (CTSA) Consortium, about 60 National Institutes of Health (NIH)-supported CTSA hubs at academic health care institutions nationwide, is charged with improving the clinical and translational research enterprise. Together with the NIH National Center for Advancing Translational Sciences (NCATS), the Consortium implemented Common Metrics and a shared performance improvement framework.
Keywords: CTSA; Common Metrics; performance improvement; clinical and translational science; evaluation
Year: 2020 PMID: 33948248 PMCID: PMC8057371 DOI: 10.1017/cts.2020.517
Source DB: PubMed Journal: J Clin Transl Sci ISSN: 2059-8661
Implementation of Common Metrics and performance improvement activities: definition and point assignments
| Cluster and activities | Points possible |
|---|---|
| Creating the metric | |
| • Collected data | 1.0 |
| • Computed metric result according to operational guideline (self-report) | 1.0 |
| Understanding current performance | |
| • Forecasted future results or compared result to any other data | 1.0 |
| • Specified underlying reasons involving hub leadership/staff/faculty | 0.5 |
| • Specified underlying reasons involving any group outside hub leadership/staff/faculty | 0.5 |
| Developing a performance improvement plan | |
| • Involved hub leadership/staff/faculty when developing improvement plan | 0.5 |
| • Involved any group outside hub leadership/staff/faculty when developing improvement plan | 0.5 |
| • Specified actions for achieving desired outcome | 1.0 |
| • Prioritized actions | 0.5 |
| • When prioritizing actions, considered potential effectiveness of actions or feasibility | 0.5 |
| Implementing the performance improvement plan | |
| • Reached out to specific individuals or institutional partners for help in carrying out improvement plan | 1.0 |
| • Began to implement improvement plan | 1.0 |
| Documenting metric result and plan fully | |
| • Documented five elements in the Common Metric-specific Scorecard: metric result; underlying reasons; potential partners; potential actions; planned actions | 1.0 |
Activities did not have to be conducted sequentially.
Each distinct activity was assigned 1.0 point. For pairs of related activities (e.g., involving different types of stakeholders when specifying underlying reasons), each part of the pair received 0.5 points, for a combined 1.0.
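As a worked illustration of this point scheme, the sketch below (Python; the activity labels and data structure are invented for illustration and do not come from the article) sums the assigned points to the 10 points possible per metric and the 30 points possible across the three Common Metrics.

```python
# Illustrative only: activity keys are hypothetical labels for the rows of
# the table above; point values come from the "Points possible" column.
ACTIVITY_POINTS = {
    "collected_data": 1.0,
    "computed_metric_per_guideline": 1.0,
    "forecasted_or_compared": 1.0,
    "reasons_with_hub_members": 0.5,
    "reasons_with_outside_group": 0.5,
    "plan_with_hub_members": 0.5,
    "plan_with_outside_group": 0.5,
    "specified_actions": 1.0,
    "prioritized_actions": 0.5,
    "considered_effectiveness_or_feasibility": 0.5,
    "reached_out_to_partners": 1.0,
    "began_implementation": 1.0,
    "documented_five_elements": 1.0,
}  # sums to 10.0 points per metric

def metric_score(completed_activities):
    """Points earned for one Common Metric (0-10 possible)."""
    return sum(ACTIVITY_POINTS[a] for a in completed_activities)

def overall_score(activities_by_metric):
    """Points summed across the three Common Metrics (0-30 possible)."""
    return sum(metric_score(done) for done in activities_by_metric.values())

# A hub that completed every activity for all three metrics scores 30.0:
all_activities = set(ACTIVITY_POINTS)
print(overall_score({"careers": all_activities,
                     "irb": all_activities,
                     "pilots": all_activities}))
```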
Fig. 1. Completion of Common Metrics and performance improvement activities per hub: three metrics combined (0–30 points possible).
Completion of Common Metrics and performance improvement activities (N = 59 hubs*)
| Activities | Possible (overall) | Actual (overall), mean (SD) | Possible (per metric) | Careers, mean (SD) | IRB, mean (SD) | Pilots, mean (SD) | P-value |
|---|---|---|---|---|---|---|---|
| All activities | 30 | 23.7 (6.6) | 10 | 8.09 (2.6) | 7.4 (2.9) | 8.1 (2.5) | 0.44 |
| **Clusters of activities**† | | | | | | | |
| Creating metric result | 6 | 5.9 (0.3) | 2 | 2.0 (0.0) | 1.9 (0.3) | 1.9 (0.1) | 0.15 |
| Understanding current performance | 6 | 5.5 (0.8) | 2 | 1.8 (0.4) | 1.8 (0.4) | 1.8 (0.4) | 0.96 |
| Developing improvement plan | 9 | 6.4 (3.1) | 3 | 2.3 (1.2) | 1.9 (1.4) | 2.3 (1.2) | |
| Implementing improvement plan | 6 | 4.1 (2.1) | 2 | 1.4 (0.9) | 1.2 (0.9) | 1.4 (0.8) | 0.17 |
| Documenting metric result and plan fully | 3 | 1.8 (1.2) | 1 | 0.6 (0.5) | 0.5 (0.5) | 0.6 (0.5) | 0.21 |
SD = Standard Deviation.
*One hub did not respond.
†Composition of clusters: (1) creating metric result entails data collection and computing metric according to operational guideline; (2) understanding metric result entails forecasting future performance or comparing results to any other data, and specifying underlying reasons with stakeholders; (3) developing improvement plan entails involving stakeholders, specifying actions, and prioritizing actions based on effectiveness or feasibility; (4) implementing the improvement plan entails reaching out to partners for help and starting implementation activities; (5) documenting includes entering metric result, describing underlying reasons, identifying partners, potential actions, and planned actions.
Results of testing for effects of hub characteristics on completion of performance improvement activities (N = 59 hubs)
Cell values are the estimated change in hub score under univariable and multivariable models.

| Characteristic | Univariable: overall sum (0–30) | Univariable: Careers (0–10) | Univariable: IRB (0–10) | Univariable: Pilots (0–10) | Multivariable: overall sum (0–30) | Multivariable: Careers (0–10) | Multivariable: IRB (0–10) | Multivariable: Pilots (0–10) |
|---|---|---|---|---|---|---|---|---|
| Model N | | | | | 55 | 55 | 55 | 55 |
| Model adjusted R² | | | | | 0.17 | 0.16 | 0.20 | 0.21 |
| **Size**† | | | | | | | | |
| <$4.56 million (Ref) | – | – | – | – | – | – | – | – |
| $4.56–8.04 million | 2.88 | 0.38 | 0.96 | 1.54* | – | – | – | 1.27* |
| ≥$8.05 million | 1.64 | 0.72 | −0.20 | 1.12 | – | – | – | 1.42* |
| **Initial funding cohort (tertiles)** | | | | | | | | |
| 2010–2015 | 0.69 | −0.14 | 0.63 | 0.20 | 0.89 | −0.37 | 0.29 | 0.95 |
| 2008–2009 | 4.75** | 1.41* | 1.78* | 1.56** | 6.07*** | 1.61** | 1.90** | 2.05*** |
| 2007 or earlier (Ref) | – | – | – | – | – | – | – | – |
| Maturity of performance management system | −0.31 | −0.15 | 0.03 | −0.19 | – | – | – | – |
| Extent of automated data collection | −2.43 | 0.02 | −2.76*** | 0.31 | – | – | −2.16* | 1.73* |
| Extent of data stored in centralized database | −1.57* | −0.52 | −0.58 | −0.47 | – | −0.47 | – | −0.63* |
| **Attendance**‡ | | | | | | | | |
| Training (7 sessions) | 1.21 | 0.22 | 0.35 | 0.64** | 1.05 | – | – | 0.66** |
| Coaching (6 sessions) | 2.25** | 0.43 | 1.10** | 0.72* | 2.00 | – | 1.16** | – |
| **Coaching metric** | | | | | | | | |
| Careers (Ref) | – | – | – | – | – | – | – | – |
| IRB | −1.69 | −1.89** | 1.55 | −1.35 | – | −1.87** | 0.77 | – |
| Pilots | −2.46 | −1.26 | −0.29 | −0.91 | – | −0.72 | −0.77 | – |
| **Primary coach** | | | | | | | | |
| Coach A (Ref) | – | – | – | – | – | – | – | – |
| Coach B | −0.49 | 0.04 | −0.23 | −0.30 | – | – | – | – |
Ref = reference group; dashes indicate the reference group or a characteristic not included in that model; CMI = Common Metrics Implementation.
*p ≤ 0.10; **p ≤ 0.05; ***p ≤ 0.01.
One hub did not respond.
†CTSA size is defined as total funding from U, T, K, and/or R grants for fiscal year 2015–2016.
‡Attendance at a training or coaching session is defined as at least one person from the hub attending. Implementation Groups 1 and 2 were offered 7 coaching sessions; Implementation Group 3 was offered 6 coaching sessions.
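The article does not publish analysis code; as a minimal sketch of the kind of model these tables report (assuming ordinary least squares, consistent with the reported adjusted R² values, and using invented file and column names), each hub characteristic can be tested alone and then jointly:

```python
# Hypothetical sketch of the univariable/multivariable analysis above.
# "hub_scores.csv", the column names, and the OLS specification are
# assumptions for illustration, not the authors' actual code.
import pandas as pd
import statsmodels.formula.api as smf

hubs = pd.read_csv("hub_scores.csv")  # one row per hub (hypothetical file)

# Univariable model: one hub characteristic at a time.
uni = smf.ols("overall_score ~ C(funding_cohort)", data=hubs).fit()
print(uni.params, uni.pvalues)

# Multivariable model: retained characteristics entered jointly.
multi = smf.ols(
    "overall_score ~ C(funding_cohort) + training_sessions + coaching_sessions",
    data=hubs,
).fit()
print(multi.rsquared_adj)  # compare with the "Model adjusted R2" row above
```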
Challenges to hub progress, with illustrative quotation*
| **Hub size and resources** |
|---|
| *Lack of institutional investment*† |
| So a lot of the metrics, one would certainly hope could be facilitated by informatics systems, and our university, for example, has not invested in a citation index software, that would help a lot as we are trying to find investigator publications… Our…homegrown system works really well for the IRB, but any time anything needs to be added they have to contract with informatics people…, [who] are a scarce resource. So that’s a challenge. |
| *Interrupted funding* |
| …[G]iven our no-cost extension status, …we do not know yet if we are going to…turn the curve because we are not awarding, for example, …any more pilot awards…or K awards right now. |
| *Lack of adequate staffing and expertise*† |
| Well, I can tell you the problem: we only pay a fraction of [his] time for evaluation because he does other functions for us, and our staff person who works with him does not have the capability to do this herself independently. …Nobody really thought about what impact it was going to have on the time allocation for the leadership that was responsible for evaluation… |
| **Alignment with needs of Common Metrics Implementation** |
| *Lack of a data system, or an existing system not aligned with the Common Metrics definitions, created more effort for effective tracking*† |
| …our information systems were not automatically and easily aligned to collect information in the form that the initial set of metrics request demanded, and so we discovered…that there were various kinds of gaps and holes in the way various things are tracked. |
| *Lack of alignment with institutional priorities*† |
| We have tried to make sure that the deans and other leaders know about the Common Metrics. I don’t know that those three Common Metrics have been exactly their highest priority. They look at it and they are happy with it. [But] it’s not like they have said, “Oh yeah, we want to adopt that Common Metric for our university over time.” But it’s early in the process and they may. |
| **Hub authority** |
| *Lack of line authority over key drivers* |
| One issue with the CTSAs, particularly in a decentralized organization like ours, is we’re responsible for outcomes but do not have authority over them. It is an exercise I am trying to lead from the middle. |
| **Hub engagement** |
| *Annual reporting cycle induced bursts of effort* |
| I think a limitation has been this idea that you can report [the metrics] once a year, which is good to report to NCATS, but it is not good as a management tool… |
| *Interrupted funding* |
| Given our no-cost extension status, we realized that we would not be able to implement all action plans that we proposed or we had outlined…. |
| *Reduced motivation due to lack of alignment with existing processes or unclear definitions* |
| …[W]hen I ask anybody on my staff to do something, I want to make sure it’s not busy work and I want to make sure it’s something that we’re using. … And so when we did a change of operations to basically…[compute the metric] the other way [for the Common Metrics], … the report at the end wasn’t useful to us…. |
| **Stakeholder engagement** |
| *Lack of a direct line of consistent communication with other units* |
| Unlike some institutions, we do not manage the IRB, and we don’t manage contracting, so we are always the liaison working with those entities, to try and improve their performance. |
| *Securing initial buy-in or sustained cooperation from key stakeholders* |
| Well, I think we have the same problems as everybody else. You give somebody a $50,000 pilot grant, and then they forget to cite you on papers. We preach, we give seminars, we hand out mouse pads and mugs and do all kinds of things, and put it in our emails. But people still forget… So it is a constant struggle… |
*Unless stated otherwise, themes manifest in more than one way; a quotation represents one manifestation.
Participant is affiliated with a medical center that functions as a CTSA without current CTSA funding.
†Indicates that the challenge, under reverse conditions, becomes a facilitator.
Facilitators for hub progress, with illustrative quotation*
| **Hub size and resources** |
|---|
| *Availability of institutional resources* |
| … we use some IT [and other] resources that are institutionally supported to actually draw metrics for the Common Metrics. Because it’s so highly integrated… we don’t necessarily separate out which effort is completely supported by NIH… [versus] contributions to that task from non-NIH dollars. |
| *Adequate evaluation and other specific expertise* |
| We’re fortunate in having a very experienced evaluator, and that’s really made the difference. If we didn’t have anyone who was so skilled in the metrics and assessment, some of these would have been more challenging. |
| *Leveraging extended teams* |
| Of all the possible factors that I could think of that might dictate whether or not we successfully implement the Common Metrics and whether it is beneficial to us, the structure of the team that was allocated to do the work has the greatest single effect. …I am a department of one, so I need help doing evaluation activities. So, we have evaluation liaisons in every program. We also have a huge number of people on the Common Metrics team, …and…a parallel group of advisers, people who were interested in the Common Metrics. |
| *Effective core team* |
| And it did help to have one person willing to become the expert at the organization. Like, there isn’t much she doesn’t know about [the Common Metrics] at this point. So you have to have a go-to person who is immersed in it and can really get it done. |
| We have a pretty close-knit leadership team and our evaluator meets with us weekly. So I think there’s the ability to address any of that quickly… That’s a facilitator that we’re working on this together collaboratively. |
| **Alignment with needs of Common Metrics Implementation** |
| *Alignment of Common Metrics with, and ability to use, existing data collection tools* |
| I can tell you that the IRB turnaround time was already being collected by both the IRBs. The pilot program, that was part of our ongoing evaluation to begin with, as was the KL2… |
| *Alignment with institutional priorities* |
| The institution is very interested in this. So, I think that this is something the institution is highly invested in doing well on. |
| **Hub authority** |
| *Occupying institutional and integrated leadership roles* |
| I think reporting to the Provost helps, too…Some of these data systems are not medical school-specific, so that helps getting access to big picture systems. |
| **Hub engagement: Principal Investigator (PI)** |
| *Providing strategic guidance* |
| [The PI] doesn’t do the day-to-day numbers, but he does the critical thinking of “how could we improve this number?” or “what could we do differently?” |
| *Serving as a champion* |
| I would say our PI, I think he has the role of champion on our Common Metrics team and he has definitely…been that. So he welcomes…those process improvement conversations and having a sort of data-driven context that we can use to make sure we’re doing our work as best we can. |
| *Facilitating stakeholder engagement* |
| Our PI worked with a lot of the stakeholders to reengage them and to emphasize that this was going to be a process that we would have to comply with and that while it required more work up front, it was not only beneficial to the CTSA but it was going to be beneficial to them to have access to the data and the analyses in the long run. |
| *Providing hands-on oversight during start-up* |
| [The PI] was pretty directly involved with our Director of Evaluation to make sure that things were rolling out according to plan. I would say, compared to a lot of our sort of day-to-day initiatives and day-to-day work, he was more hands-on with the metrics than he is with some of the other things. |
| **Stakeholder engagement** |
| *Personal relationships and cooperative spirit* |
| [W]hen there would be meetings and conversations about getting data, and what mechanisms were in place, some of it was based on personal relationships that then needed to be shifted a little bit, with change in personnel. |
| *Integration of Common Metrics with institutional priorities* |
| This has been embraced…as a barometer at the institution. …So, for us to have to…look at publication data or Pilot Award data, whatever we’re instrumenting for the Common Metrics for the CTSA, we basically just extend across the institution. |
| *CTSA location and hub size can strengthen relationships* |
*Unless stated otherwise, themes manifest in more than one way; a quotation represents one manifestation.
Indicates that the facilitator, under reverse conditions, becomes a challenge.
Results of testing for effects of hub engagement on completion of performance improvement activities (N = 30 hubs)
| Engagement category | N | Overall sum (0–30), mean (SE) | Careers (0–10), mean (SE) | IRB (0–10), mean (SE) | Pilots (0–10), mean (SE) |
|---|---|---|---|---|---|
| All active engagement: all participants report active engagement | 10 | 22.8 (2.28) | 8.1 (0.88) | 6.3 (0.93) | 8.3 (0.91)* |
| Mix: each participant reports both active engagement and a compliance approach | 4 | 22.8 (3.60) | 8.5 (1.40) | 6.0 (1.47) | 8.2 (1.44) |
| Mix: leader reports active engagement; implementer reports compliance approach | 12 | 23.1 (2.08) | 7.5 (0.81) | 8.0 (0.85) | 7.7 (0.83) |
| All compliance-based engagement: all participants report compliance approach (Ref) | 4 | 17.0 (3.60) | 6.4 (1.40) | 5.3 (1.47) | 5.4 (1.44) |
Ref = reference group; SE = standard error.
*p ≤ 0.10.
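For completeness, per-category means and standard errors like those in the table above could be tabulated as in the sketch below (the input file and column names are hypothetical). Note that the table’s SEs appear model-based, since equal-size groups share the same SE, so a regression with treatment coding against the reference group, in the style of the earlier sketch, would match more closely than these simple descriptive values.

```python
# Hypothetical sketch: descriptive statistics per engagement category.
# File and column names are invented; this computes simple per-group SEs,
# which may differ from the model-based SEs reported in the table.
import pandas as pd

hubs = pd.read_csv("hub_engagement.csv")  # one row per hub (hypothetical)
summary = hubs.groupby("engagement_category")["overall_score"].agg(
    N="size", mean="mean", se="sem"
)
print(summary.round(2))
```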