Rob Anderson, Rebecca Hardwick.
Abstract
For public programmes to be successfully and sustainably adopted, policy-makers, service managers and practitioners want them to be affordable and cost-effective, as well as effective. While the realist evaluation question is often summarised as 'what works, for whom, under what circumstances', we believe the approach can be just as salient to questions about resource use, costs and cost-effectiveness - the traditional domain of economic evaluation methods. This paper first describes the key similarities and differences between economic evaluation and realist evaluation. It summarises what health economists see as the challenges of evaluating complex interventions, and their suggested solutions. We then use examples of programme theory from a recent realist review of shared care for chronic conditions to illustrate two ways in which realist evaluations might better capture the resource requirements and resource consequences of programmes, and thereby produce explanations of how these are linked to outcomes (i.e. explanations of cost-effectiveness).
Keywords: Realist evaluation; complex interventions; cost-effectiveness; economic evaluation; resource use; shared care
Year: 2016 PMID: 27478402 PMCID: PMC4948109 DOI: 10.1177/1356389016652742
Source DB: PubMed Journal: Evaluation (Lond) ISSN: 1356-3890
Economic and realist evaluation compared.
| | Economic evaluation | Realist evaluation |
|---|---|---|
| Theoretical basis | Ostensibly, welfare economics | Realism^a^ |
| Conception of causality | Not generally known or discussed | Generative notion of causation |
| Research aim | To produce estimates both of costs and effectiveness in a specific context (e.g. alongside a specific effectiveness study) or a particular decision-making jurisdiction (model-based economic evaluation) | To develop and refine programme theories (i.e. potential explanations) about how and why interventions work (i.e. produce beneficial outcomes), including how and why they work differently in different contexts |
| Policy aim | To inform specific decisions amongst a defined number of alternatives | To inform decision-makers about the way that the intervention produces its effects, and what modifies or influences that effectiveness |
| Type of data | Quantitative (especially: resource use, unit costs, and final outcomes (e.g. health)). | Quantitative and qualitative depending on the specific knowledge gaps. Often quantitative to establish the outcome pattern, and qualitative to determine how and why this pattern occurs. |
| Preferred study design? | Either: | Pluralist – no strongly preferred method or design (research question dependent) |
| Generalisability/transferability | Emphasis on transferability of results (e.g. similar cost-effectiveness) contingent upon key features of context (country, and patient group) | Generalisation is through progressively applying the programme theory to other contexts. Realist approaches recognise that the explanations developed from a realist evaluation are always open to further development and refinement. |
Note: ^a^ Originally, in Chapter 3 of Pawson and Tilley (1997), the basis of realist evaluation was described as scientific realism. In the Preface to Pawson (2013) the author explains how his terminology has evolved to be 'some type of realism', while others may see realist evaluation as more closely aligned to, for example, critical realism.
Challenges in the economic evaluation of complex health interventions.
Key to sources: ^a^ Godber et al. (1997); ^b^ Coast et al. (2000); ^c^ Payne et al. (2013); ^d^ Weatherly et al. (2009); ^e^ Shiell et al. (2008).
Figure 1. Provisional 'causal map' of shared care components, mechanisms and outcomes.
Figure 2. Usual care and 'shifted' care: hypothetical clinical contact time by disease severity.
Figure 3. 'Tailored care': clinical contact time by disease severity.
Figure 4. 'Substituted care': clinical contact time by disease severity.
Figure 5. Cost savings due to a better managed condition.