Margareth Crisóstomo Portela, Peter J Pronovost, Thomas Woodcock, Pam Carter, Mary Dixon-Woods.
Abstract
Improvement (defined broadly as purposive efforts to secure positive change) has become an increasingly important activity and field of inquiry within healthcare. This article offers an overview of possible methods for the study of improvement interventions. The choice of available designs is wide, but debates continue about how far improvement efforts can be simultaneously practical (aimed at producing change) and scientific (aimed at producing new knowledge), and whether the distinction between the practical and the scientific is a real and useful one. Quality improvement projects tend to be applied and, in some senses, self-evaluating. They are not necessarily directed at generating new knowledge, but reports of such projects, if well conducted and cautious in their inferences, may be of considerable value. They can be distinguished heuristically from research studies, which are motivated by and set out explicitly to test a hypothesis, or otherwise generate new knowledge, and from formal evaluations of improvement projects. We discuss variants of trial designs, quasi-experimental designs, systematic reviews, programme evaluations, process evaluations, qualitative studies, and economic evaluations. We note that designs that are better suited to the evaluation of clearly defined and static interventions may be adopted without giving sufficient attention to the challenges associated with the dynamic nature of improvement interventions and their interactions with contextual factors. Reconciling pragmatism and research rigour is highly desirable in the study of improvement. Trade-offs need to be made wisely, taking into account the objectives involved and inferences to be made. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Keywords: Evaluation methodology; Health services research; Quality improvement methodologies; Social sciences; Statistical process control
Year: 2015 PMID: 25810415 PMCID: PMC4413733 DOI: 10.1136/bmjqs-2014-003620
Source DB: PubMed Journal: BMJ Qual Saf ISSN: 2044-5415 Impact factor: 7.035
Principles, strengths, weaknesses and opportunities for study designs for improvement interventions
| Class of studies | Principles | Strengths, weaknesses and opportunities for methodological improvement | Example |
|---|---|---|---|
| Quality improvement projects | Project is set up primarily as an improvement effort, to learn what works in a local context. It is typically motivated by a well-defined problem and oriented towards a focused aim. PDSA cycles are often applied, allowing incremental, cyclically implemented changes to be tested and monitored through statistical process control | Quality improvement projects should incorporate a theoretical base and qualitative methods more systematically, to allow the mechanisms of change involved to be predicted and explained; more scientific rigour is needed in the application and reporting of PDSA cycles and other methods and techniques | An improvement initiative based on social marketing interventions, developed to increase access to a psychological therapy service (especially from areas of high deprivation), involved weekly collection of geo-coded referral data and small-scale tests of change |
| Effectiveness studies: RCTs | Differences in outcomes from delivering two or more interventions to similar groups of people or other entities are attributable to differences between the interventions; control of confounding factors is an explicit aim. RCTs may be especially suitable whenever interventions are being considered for widespread use based on their face validity and early or preliminary evidence | Improvements in the design, conduct, and reporting of RCTs are necessary to limit the high risk of bias currently observed. Awareness of the value of robust design, the need to avoid preconceived judgements about the intervention, and investment in research methods training should be pursued | A study aimed to determine the causal effects of an intervention previously shown effective in pre/post studies in reducing central line-associated bloodstream infections in intensive care units |
| Effectiveness studies: quasi-experimental designs | The intervention is implemented and followed up over time, ideally with a control. Compared with an RCT, the investigator keeps more control over the intervention but has less control over confounding factors | Whether or not they have controls, quasi-experimental studies are more powerful if they involve multiple measurements before and after the intervention is applied | A before-after study with concurrent controls sought to evaluate an intervention to reduce inpatient length of stay and considered the effect of the reduction on patient safety |
| Effectiveness studies: observational (longitudinal) studies | The implementation of the intervention is observed over time | Can be useful when other studies are not possible. They must be longitudinal and, ideally, prospective; the absence of an explicit control in the study design may be compensated for by statistical techniques | A study aimed to examine the sustainability of an in-hospital quality improvement intervention in AMI, including the identification of predictors of physician adherence to AMI-recommended medications |
| Systematic reviews | Combining findings/samples from RCTs and quasi-experimental studies of the effectiveness of an intervention allows for more robust and generalisable results on QII effectiveness | Systematic reviews of QII effectiveness have grown in number; they need more critical appraisal of the included studies, more meta-analyses, and better handling of complex interventions in diverse contexts | A systematic review with meta-analysis aimed to assess the effects of QIIs on the management of diabetes |
| Process evaluations | Understanding what an intervention is in practice is important, especially when the aim is to attribute effects to it | Process evaluations should be embedded in effectiveness studies to capture failures in QII implementation and to better understand how QII components act; they also need to be more oriented towards validating theory-informed strategies | A process evaluation of a cluster randomised controlled trial aimed to examine which components of two hand hygiene improvement strategies were associated with increased hand hygiene compliance among nurses |
| Qualitative studies | It is not enough to know that an expected change happened or did not; it is important to understand why and how | Qualitative studies should be included in quality improvement projects and in quantitative evaluative studies of QIIs for better understanding of outcomes and explanation of the mechanisms of change involved | A study that developed an ex post theory of the Michigan Intensive Care Unit project to explain how it achieved its effects |
| Economic evaluations | It is important to know not only that an intervention is effective but also that the investment required is justifiable | Studies dedicated to economic evaluations of healthcare QIIs are still lacking in the literature, and there is recognition that there should be more of them in the field | A cost-effectiveness analysis of a multifaceted intervention to improve the quality of care of children in district hospitals in Kenya |
AMI, acute myocardial infarction; PDSA, Plan-Do-Study-Act; QII, quality improvement intervention; RCTs, randomised controlled trials.
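The table's first row notes that PDSA-cycle changes are typically monitored through statistical process control. As a minimal sketch of what that monitoring involves, the following computes the centre line and control limits for an individuals (XmR) chart; the choice of the XmR chart and the sample referral counts are assumptions for illustration, not taken from the article.

```python
# Minimal sketch of statistical process control for a QI measure.
# The XmR (individuals and moving range) chart and the data below are
# hypothetical illustrations; the article does not specify a chart type.

def xmr_limits(values):
    """Centre line and control limits for an individuals (XmR) chart.

    Limits are mean +/- 2.66 * average moving range, using the standard
    XmR constant for moving ranges of size 2.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return mean, mean - 2.66 * avg_mr, mean + 2.66 * avg_mr

# Hypothetical weekly referral counts collected during small tests of change
weekly_referrals = [12, 15, 11, 14, 13, 16, 12, 15]
centre, lcl, ucl = xmr_limits(weekly_referrals)

# Points outside the limits would signal special-cause variation
out_of_control = [x for x in weekly_referrals if not (lcl <= x <= ucl)]
```

Points falling outside the limits (special-cause variation) are the signal that a tested change may have shifted the process, rather than ordinary week-to-week noise.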
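The systematic reviews row calls for more meta-analyses of QII effectiveness studies. A common building block is inverse-variance fixed-effect pooling of study-level effect estimates; the sketch below illustrates that technique with hypothetical log odds ratios and standard errors (the function name and data are not from the article).

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect pooling of study effect estimates
    (e.g. log odds ratios), a standard meta-analysis building block.
    Each study is weighted by the reciprocal of its variance."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios and standard errors from three trials
log_ors = [-0.40, -0.25, -0.55]
ses = [0.20, 0.15, 0.25]
pooled, se = fixed_effect_pool(log_ors, ses)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
```

A random-effects model would additionally estimate between-study heterogeneity, which the article's call for handling "complex interventions in diverse contexts" suggests is often the more appropriate choice.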