
The Learning Health System Competency Appraisal Inventory (LHS-CAI): A novel tool for assessing LHS-focused education needs.

Alexandra J Greenberg-Worisek1, Nathan D Shippee2, Cory Schaffhausen3, Kelli Johnson2, Nilay D Shah4,5, Mark Linzer6, Timothy Beebe2, Felicity Enders7.   

Abstract

There is increasing interest in health care organizations functioning as learning health systems (LHSs) to improve the quality and efficiency of health care delivery while generating new knowledge. Individuals must be trained in associated concepts and competencies and subsequently positioned (or embedded) within the delivery system for maximum effect as they perform their scholarship. Potential researchers within LHSs come from many different training backgrounds; therefore, each LHS scholar requires a goal-directed plan tailored to his or her needs. There are few tools available to guide development, training, or evaluation of individuals interested in becoming leaders of research in LHSs. In this paper, we present a newly developed tool for guiding the training of such researchers, the Learning Health Systems Competency Appraisal Inventory (LHS-CAI). The LHS-CAI is modeled after the Clinical Research Appraisal Index (CRAI) used within Clinical and Translational Science Award sites across the United States. The LHS-CAI is a tool for trainees at all levels to use with their mentors in an interactive manner. The tool can then identify areas in which more training is needed and at what level to ensure success as a researcher within LHSs. We further modified the CRAI format to better leverage the LHS-CAI as a key part of an LHS scholar's individual development plan. To implement the LHS-CAI, we have identified key points within the Minnesota Learning Health System Mentored Career Development Program (MN-LHS) at which assessment of expertise for each competency would be useful to LHS scholars, mentors, and program leaders. Scholars in this program come from various clinical and academic backgrounds but are all targeting their career trajectories toward leading embedded LHS research. They will reevaluate their expertise upon completion of the program, with comparison to baseline serving as a key program evaluation tool. 
The LHS-CAI is currently being implemented with the first cohort of scholars in the MN-LHS program.
© 2020 The Authors. Learning Health Systems published by Wiley Periodicals, Inc. on behalf of the University of Michigan.


Keywords:  competency‐based learning; learning health systems; research education

Year:  2020        PMID: 33889729      PMCID: PMC8051341          DOI: 10.1002/lrh2.10218

Source DB:  PubMed          Journal:  Learn Health Syst        ISSN: 2379-6146


INTRODUCTION

In the 18 years since the National Academy of Medicine's report, Crossing the Quality Chasm, there has been increasing emphasis on improving care delivery by ensuring that health care delivery research findings are quickly and safely implemented into clinical practice. Specifically, the report puts forth the learning health system (LHS) as a framework or tool to achieve this goal. LHSs are defined as health care provider, payer, or policy systems “in which science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by‐product of the delivery experience.” This process entails integrating internal data and external evidence to build knowledge, applying that knowledge to improve practice, and using the data thereby generated to continue the cycle. In contrast to static or haphazardly evolving organizations, LHSs intentionally generate, curate, adopt, disseminate, and manage evidence to improve care and outcomes. Thus, they require enhanced data infrastructure, culture change emphasizing learning, and support for innovation, including empowering staff to make improvements and hiring teams with integral skills (eg, information technology and analytics). Understanding how to operationalize LHS research for use by administrators and clinicians is increasingly important as systems move toward individualized care and a greater focus on patient‐centered outcomes. Yet, while training programs exist for comparative effectiveness and health services research, patient‐centered outcomes, and clinical and translational science, they do not necessarily equip researchers with the practical skills needed to embed their work within the complexities of LHSs (Figure 1).
Instead, those training to become leaders in LHS research need tools to conduct rapidly iterating, stakeholder‐informed research, to quickly implement and disseminate findings, and to study the individual and systems impact of interventions in real time. Such activities must encompass not only broadly generalizable knowledge but also research specific to individual departments, organizations, and health care systems. Moreover, the managed and evolving aspects of active organizations such as LHSs themselves add further considerations for researchers within them, requiring attention to multiple systems elements and processes. Accomplishing engagement, design, implementation, and evaluation as a researcher while accounting for contextual elements of the LHS requires training in systems and complexity sciences, mixed methods research, design and implementation science, information systems and analytics, and content expertise on a range of topics including financing, health care organizations, and quality.
FIGURE 1

Detailed crosswalk between Agency for Healthcare Research and Quality Learning Health System (AHRQ LHS) Competencies and Patient‐Centered Outcomes Research Institute (PCORI) Methodology Standards

Training researchers who are capable of embedding in LHSs and conducting real‐time implementation of systems improvement requires creating both didactic curricula and experiential learning opportunities that meet a unique set of learning objectives and domains. Given the shared focus on LHSs between the Agency for Healthcare Research and Quality (AHRQ) and the Patient‐Centered Outcomes Research Institute (PCORI), learning activities should marry the LHS core competencies promoted by AHRQ with the methodology standards promoted by PCORI (Figure 1). While other programs offering training in the LHS space address both sets of competencies and standards in various ways and to varying levels, most do not unite them within the LHS paradigm. To address the need for robustly trained LHS researchers, AHRQ and PCORI developed the AHRQ‐PCORI Institutional Mentored Career Development Program (K12), for which the first awards were granted in 2018. As part of this K12 network, the University of Minnesota, Mayo Clinic, and Hennepin County Medical Center (Hennepin Healthcare) have created a multi‐institutional training program, the Minnesota Learning Health System Mentored Career Development Program (MN‐LHS), with competitively awarded funding from AHRQ and PCORI. This program includes several components to ensure that scholars are trained to be effective LHS researchers immediately upon completion of the MN‐LHS program, leveraging didactic coursework, “Design Shop” interactive learning sessions with local experts, retreats, and embedded research experiences. Because each scholar enters the program with a different skill set, the program is tailored to “meet the scholar where they are” and help them determine specific areas for growth.
It is therefore key to use iterative, standardized feedback and self‐reflection in establishing each scholar's unique Individual Development Plan (IDP). The IDP is already a standard tool in National Institutes of Health‐funded PhD programs and in many postdoctoral programs. Specifically, the Curriculum and Methods Director and Deputy Director meet initially and quarterly with each scholar, using the IDP and other program materials to guide a review of progress and challenges in research, education/training, mentoring, and career development toward the AHRQ LHS competencies from the scholar's baseline clinical or academic expertise. Additionally, from a programmatic stance, there is a need for a systematic means of following individual progress toward improvement and mastery of these competencies. In this report, we present a new tool that is central to the onboarding and ongoing mentoring of our scholars, the Minnesota Learning Health Systems Competency Appraisal Inventory (MN‐LHS CAI).

DEVELOPING A TOOL FOR COMPETENCY‐BASED TRAINING IN LEARNING HEALTH SYSTEMS

Due to the variability of scholar paths entering the training program, IDPs for each of our scholars are very different (Table 1). An additional layer of complexity lies in the fact that LHS researchers require a unique combination of skills (those embodied in AHRQ LHS competencies and PCORI methods standards) across which scholars' prior knowledge and activities vary widely. While this heterogeneity of experience and skill is welcomed, there is a need to systematically examine present and ideal states of mastery across competencies.
TABLE 1

Examples of different potential learning health systems program trainees

Type of researcher: An active clinician or clinician researcher engaging in local quality improvement and implementation activities; for example, a junior MD faculty member who might otherwise apply for an early‐career mentored K award, or an established clinician involved in research who might otherwise apply for a mid‐career award.

Looking for: (1) a strengthening of conceptual knowledge and technical skills that could be brought to bear for rigorous, embedded evaluation; and (2) an expanded network of academic collaborators.

IDP would emphasize: (1) intensive coursework in specific research methods, implementation science, and systems science; and (2) a research project involving implementation and rigorous evaluation of a complex intervention in the scholar's own clinical division.

Type of researcher: A formally trained scientist (PhD) who wants to work in health care in an embedded manner; for example, a newly graduated health services researcher with training in advanced modeling, large data management, or other portable methodological skills, whose knowledge base may depend heavily on methods and policy.

Looking for: Skills to conduct research on health care delivery in an embedded, applied manner, or to better position themselves for positions in embedded/applied settings.

IDP would emphasize: (1) less coursework on specific methods, with more coursework and mentoring on systems science, improvement and implementation science, stakeholder engagement, leadership, and research management; and (2) an embedded research project created in close partnership with clinical practitioners and patients, connecting the scholar's existing methods skills with an applied problem.

Type of researcher: A mid‐career scientist (PhD) desiring greater focus on examining clinical care in an embedded manner; for example, a psychometrician or survey researcher whose work has focused on patient‐reported outcomes, or a decision science researcher who has used modeling to help inform clinical guidelines.

Looking for: Knowledge and skills to support a mid‐career pivot to embedded research due to personal interest, changes in the field, or new collaborations and projects.

IDP would emphasize: (1) moderate coursework and mentoring around systems science and some methodological approaches, with a heavy focus on stakeholder engagement and improvement and implementation science; and (2) an embedded research project built to form the structure around a new collaboration with health system‐embedded clinician‐researchers.

Abbreviation: IDP, Individual Development Plan.

To create such a standardized process, we looked toward the Clinical Research Appraisal Inventory (CRAI).
The CRAI was developed for Clinical and Translational Science Awardees by Mullikin and colleagues, who faced a similar problem. The original CRAI contained 92 items and was built upon two theories often used in the development of such tools: the Self‐Efficacy Theory and the Social Cognitive Career Theory. Briefly, the Self‐Efficacy Theory states that students' achievements, motivation, and goals are dictated by their beliefs in their own ability to control their educational experiences and learning environment and to master content. The Social Cognitive Career Theory describes career development through three main areas: how trainees develop interest in an academic area or career path, how choices are made in light of those interests, and how success is achieved as trainees progress toward those academic and career goals. Although the original CRAI was widely implemented and disseminated, it was reported as cumbersome for researchers‐in‐training to complete repeatedly throughout their programs; to that end, Robinson and colleagues conducted exploratory and confirmatory factor analyses to create a shorter, 12‐item CRAI, which has since been implemented in place of the original at many CTSAs. A shortened version will be considered here once we have sufficient data to assess the instrument's full content and psychometrics. Given the success and broad use of the CRAI, as evidenced by endorsement from the National Center for Advancing Translational Sciences at the National Institutes of Health, our team used the CRAI framework as a starting point for the development of the Learning Health Systems Competency Appraisal Inventory (LHS‐CAI). Three primary changes were made to the CRAI to adapt it for LHSs. First, we changed the CRAI competencies to the AHRQ LHS competencies.
Second, we modified the response scale from the CRAI's original scale (0 = no confidence, 10 = mastery) to one designed to better differentiate current status from scholar goals (0 = no awareness of the topic, 3 = aware of the topic but no confidence in performing it, 10 = total confidence in performing it). Finally, while the CRAI asks scholars only to report baseline confidence at the start of their program, we also ask scholars, at program entry, to indicate the confidence they aim to reach by program completion, as scholars need mastery of specific competencies tailored to their desired outcomes. When a scholar's baseline LHS‐CAI is reviewed, a color‐coded “fingerprint” is created to help guide review of the scholar's responses. This fingerprint specifically highlights (a) areas of existing strength, (b) areas in which the scholar wants only awareness or low competency, (c) areas in which the scholar wants mastery (8‐10), and (d) areas of discrepancy, with a low baseline but desired mastery. For competencies of this fourth type, the scholar uses the LHS‐CAI to map their IDP coursework to the competencies.
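The four fingerprint categories described above can be sketched as a simple classification over each competency's baseline and desired ratings. This is a minimal illustration only: the 0 to 10 scale anchors come from the text (3 = awareness, 8 to 10 = mastery), but the function names, category labels, exact cutoffs, and competency names are assumptions, not the program's actual implementation.

```python
# Illustrative sketch of the LHS-CAI "fingerprint" classification.
# Scale anchors from the text: 0 = no awareness, 3 = aware but no
# confidence, 10 = total confidence; 8-10 is treated as mastery.
# Category labels and cutoffs below are assumptions for illustration.

MASTERY = 8    # self-ratings of 8-10 are treated as mastery
AWARENESS = 3  # the program's minimum target for every competency

def classify(baseline: int, desired: int) -> str:
    """Assign one fingerprint category to a single competency."""
    if baseline >= MASTERY:
        return "existing strength"                           # category (a)
    if desired >= MASTERY:
        if baseline <= AWARENESS:
            return "discrepancy: low baseline, desired mastery"  # (d)
        return "mastery desired"                             # category (c)
    if desired <= AWARENESS:
        return "awareness only"                              # category (b)
    return "moderate growth"

# Hypothetical competency names with (baseline, desired) ratings.
ratings = {
    "Systems science": (2, 9),
    "Stakeholder engagement": (8, 9),
    "Informatics": (1, 3),
}
fingerprint = {name: classify(b, d) for name, (b, d) in ratings.items()}
```

Under this sketch, a competency rated (2, 9) would be flagged as the fourth, discrepancy type, the kind the scholar would then map to IDP coursework.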

EARLY PILOTING AND IMPLEMENTATION

Given the dynamic nature of the program and these competencies, we are developing a process in which the LHS‐CAI tool is implemented at key milestones during the MN‐LHS training program. First, baseline expertise is assessed at the time of program matriculation and will be assessed again upon finishing the program (“re‐evaluated baseline expertise”). Reevaluation of perceived baseline knowledge is important for LHS research trainees in formal programs, as many arrive with significant training that impacts the development of their IDP, and upon finishing the program, they may realize that they knew more or less at baseline than they had initially thought. The desired expertise is determined at baseline and will be subsequently reevaluated during each year of the 2‐ to 3‐year program as the scholar's needs change. Note that “desired expertise” includes expertise for both career and research goals; therefore, a competency may be desired that is not needed for the specific research aims of the scholar's project during the program. In addition to using the rating scale to self‐report baseline and desired expertise, scholars are asked to fill out a third column detailing the didactic and experiential training components that will help them reach their desired competency. To assist scholars with this, courses aligned with the program are listed in an additional table showing the competencies each course addresses (available from the authors upon request). When a scholar needs to take a specific course, they can cross‐reference this course table to see which competencies are covered and map those back to their LHS‐CAI. The program‐required coursework is intended to ensure that all LHS research trainees achieve a 3 or higher for all competencies; that is, the baseline minimum is awareness of every competency, with each scholar additionally achieving expertise in select topics aligned with their research and career goals.
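The cross-referencing step above amounts to a two-way lookup between courses and competencies. A minimal sketch follows; the course titles, competency labels, and function names are invented for illustration and do not reflect the program's actual course table:

```python
# Hypothetical crosswalk from program courses to the competencies they
# address, mirroring the course table described in the text. All course
# and competency names here are invented for illustration.

COURSE_COMPETENCIES = {
    "Implementation Science I": {"implementation science", "improvement science"},
    "Health Systems Analytics": {"informatics", "data analytics"},
    "Stakeholder-Engaged Research": {"stakeholder engagement"},
}

def competencies_for(courses):
    """Union of competencies covered by a scholar's planned courses."""
    covered = set()
    for course in courses:
        covered |= COURSE_COMPETENCIES.get(course, set())
    return covered

def courses_covering(competency):
    """Courses a scholar could cross-reference for a target competency."""
    return sorted(course for course, comps in COURSE_COMPETENCIES.items()
                  if competency in comps)
```

In practice a scholar would work in the second direction, starting from a competency flagged on their LHS-CAI and looking up which courses address it.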
In our related experience using the CRAI in CTSAs, scholars frequently misjudge their competence upon entering the program; after training, their assessment of where they were at baseline typically drops precipitously. Gathering both the true baseline assessment and the reevaluation of that baseline allows us both to better assess baseline competency and to observe the value of training in self‐evaluation. When the LHS‐CAI is completed electronically, responses of 8 or higher trigger the scholar to provide specific examples of their achievement in that area as a means of documenting mastery of that competency (ie, a component of an implemented project that utilized knowledge gained in that area). On the paper version of the tool, a skip pattern indicates that scholars must provide such examples. Upon completion of the MN‐LHS program, self‐rated scores of 8 or higher are expected across scholars' desired or prioritized expertise areas. To assess prioritized competencies at program completion, we plan to document research outputs such as manuscripts, completed coursework, or other writing samples for those priority areas. The LHS‐CAI is currently being piloted with our first cohort of MN‐LHS scholars, who began their program in January 2019.
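The electronic form's branching rule, in which a self-rating of 8 or higher prompts the scholar for a documented example, can be sketched as simple validation logic. The field names, error messages, and function names below are illustrative assumptions, not the instrument's actual implementation:

```python
# Sketch of the electronic LHS-CAI's branching rule: self-ratings of 8
# or higher must be accompanied by a documented example of mastery.
# Function names and error messages are assumptions for illustration.

MASTERY_TRIGGER = 8

def needs_example(self_rating: int) -> bool:
    """True when a response must include a documented example."""
    return self_rating >= MASTERY_TRIGGER

def validate_response(rating: int, example: str = "") -> list:
    """Return validation errors for one competency response."""
    errors = []
    if not 0 <= rating <= 10:
        errors.append("rating must be on the 0-10 scale")
    if needs_example(rating) and not example.strip():
        errors.append("ratings of 8+ require a documented example")
    return errors
```

On paper, the same rule is expressed as the skip pattern the text describes; electronically, it becomes a required-field check.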

FUTURE DIRECTIONS

In addition to scholars completing the LHS‐CAI at the program milestones detailed above, mentors will also complete the LHS‐CAI at the beginning, mid‐point, and completion of the program to provide their perspectives on each scholar's baseline and gained expertise. We will sequentially evaluate the LHS‐CAI as follows: (a) comparing scholars' scores at baseline, mid‐point, and completion for improvement, thus determining the areas in which we are most and least effective in instilling LHS competency; (b) determining the accuracy of perceived baseline knowledge with a follow‐up assessment (“look back”); (c) preliminary factor analysis of scholar scores for internal consistency; and (d) development, fielding, and analysis of a short‐form version of the LHS‐CAI. Our hope is that this process will result in a more efficient, generalizable tool that other programs and systems can use to train LHS research leaders. Once the tool has been used with two cohorts of scholars, we plan to perform factor analysis to assess the internal consistency of the items.

OTHER DISCLOSURES

None.

ETHICAL APPROVAL

None required.

CONFLICT OF INTEREST

All authors assert that they have no conflicts of interest.