| Literature DB >> 29930966 |
Michael Stoto, Gareth Parry, Lucy Savitz.
Abstract
The last in a series of four papers on how learning health systems can use routinely collected electronic health data (EHD) to advance knowledge and support continuous learning, this review describes how delivery system science provides a systematic means to answer questions that arise in translating complex interventions to other practice settings. When the focus is on translation and spread of innovations, the questions differ from those in evaluative research. Causal inference is not the main issue; rather, one must ask: How and why does the intervention work? What works for whom and in what contexts? How can a model be amended to work in new settings? In these settings, organizational factors and design, infrastructure, policies, and payment mechanisms all influence an intervention's success, so a theory-driven formative evaluation approach is needed, one that considers the full path of the intervention, from activities that engage participants and change how they act through to the expected changes in clinical processes and outcomes. This requires a scientific approach to quality improvement characterized by a basis in theory; iterative testing; clear, measurable process and outcome goals; appropriate analytic methods; and documented results. To better answer the questions that arise in delivery system science, this paper introduces a number of standard qualitative research approaches that can be applied in a learning health system: Pawson and Tilley's "realist evaluation," theory-based evaluation approaches, mixed-methods and case study research approaches, and the "positive deviance" approach.
Entities:
Year: 2017 PMID: 29930966 PMCID: PMC5994957 DOI: 10.5334/egems.253
Source DB: PubMed Journal: EGEMS (Wash DC) ISSN: 2327-9214
Figure 1: The Model for Improvement
Source: Langley and colleagues.[7]
Figure 2: Why New Improvement Ideas Fail So Often
Source: Parry and colleagues.[23]
Summary of Evaluation Aims and Approaches by Improvement Phase
| PHASE | INNOVATION | TESTING | SCALE-UP AND SPREAD |
|---|---|---|---|
| Aim | Generate or discover a new model of care with evidence of improvement in a small number of settings | Engage organizations and enable them to test whether a model works or can be amended to work in their context | Engage organizations to adopt models with a high degree of belief in applicability and impact in a broad range of contexts |
| Evaluation aims | From a small group of organizations with limited context, to: (1) describe a new content theory; (2) estimate the improvement achieved from applying the theory; (3) update the degree of belief that the content theory will apply in contexts similar to where it was developed | From an initial content theory, with a moderate degree of belief, to: (1) describe an amended content theory; (2) estimate the improvement achieved from applying the amended content theory in specific contexts; (3) update the degree of belief that the amended content theory will apply in specific contexts; (4) describe an amended theory for engaging organizations in specific contexts to test and amend the new content theory; (5) estimate the likely application of testing and amendment of the content theory in the future | From an initial content theory, with a high degree of belief that it will apply in specific contexts, to: (1) describe any amendments identified in the spread phase; (2) describe an amended theory for engaging organizations in specific contexts to test and amend the new theory; (3) estimate the likely application of testing and amendment of the content theory in the future |
| Evaluation approaches | Quantitative measurement system to estimate the impact of variations in the development of the content theory; clarification of the content theory through qualitative interviews with model developers and those who have tested the model, to draw out the underlying concepts, describe them, and indicate how they affect the results obtained | Quantitative measurement system to estimate the impact of amendments to execution and content theories; longitudinal quantitative data analysis, including control chart and interrupted time series methods, to estimate the improvement associated with amended content and execution theories; randomized cluster and stepped-wedge trials; recommendations for how to amend content and execution theories, using qualitative methods to identify how teams did or did not learn and apply their learning in their local context; regular, rapid-cycle feedback of the findings to the leads of the testing phase | Measurement system to estimate the impact of amendments to the execution theory; randomized cluster and stepped-wedge designs; longitudinal quantitative data analysis, including control chart and interrupted time series methods, to estimate the improvement associated with an amended execution theory; recommendations for how to amend the execution theory and point to issues with the content theory, using qualitative methods to identify how teams did or did not learn and apply their learning in their local context; regular, rapid-cycle feedback of the findings to the leads of the scale-up and spread phase |
Source: Parry and colleagues.[23]
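The table above lists Shewhart control charts among the longitudinal quantitative methods for estimating improvement over time. As an illustration only (this sketch is not from the paper, and the function name `xmr_limits` and the example rates are hypothetical), the following shows how control limits for an individuals (XmR) chart are typically computed from a baseline series, using the standard 2.66 moving-range constant:

```python
def xmr_limits(values):
    """Return (center line, lower limit, upper limit) for an
    individuals (XmR) control chart.

    Limits are mean +/- 2.66 * average moving range, the standard
    XmR constant for moving ranges of size 2.
    """
    if len(values) < 2:
        raise ValueError("need at least two observations")
    mean = sum(values) / len(values)
    # Moving ranges: absolute differences between consecutive points
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical baseline: monthly infection rates per 1,000 patient-days
rates = [4.2, 3.9, 4.5, 4.1, 3.8, 4.4, 4.0]
cl, lcl, ucl = xmr_limits(rates)
# Points outside the limits suggest special-cause variation,
# e.g. a change associated with an amended content theory
special_cause = [x for x in rates if x < lcl or x > ucl]
```

In a formative evaluation, limits would be set from pre-intervention data and post-intervention points examined for special-cause signals, rather than relying on a single before/after comparison.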
Processes for Ensuring Rigor in Case Study and Qualitative Data Collection and Analysis
Source: Gilson and colleagues.[34]
Figure 3a: Generative Causation Approach to Realist Evaluation
Source: Pawson.[41]
Figure 3b: Realist Synthesis Approach
Source: Pawson.[41]