
What we know about designing an effective improvement intervention (but too often fail to put into practice).

Martin Marshall1, Debra de Silva2, Lesley Cruickshank3, Jenny Shand2, Li Wei4, James Anderson2.   

Keywords:  Healthcare quality improvement; Implementation science; Nursing homes; Quality improvement methodologies

Year:  2016        PMID: 27986901      PMCID: PMC5502247          DOI: 10.1136/bmjqs-2016-006143

Source DB:  PubMed          Journal:  BMJ Qual Saf        ISSN: 2044-5415            Impact factor:   7.035



Intervening to change health system performance for the better

It is temptingly easy to treat improvement interventions as if they are drugs—technical, stable and uninfluenced by the environment in which they work. Doing so makes life so much easier for everyone. It allows improvement practitioners to plan their work with a high degree of certainty, funders to be confident that they know what they are buying and evaluators to focus on what really matters—whether or not ‘it’ works. But of course most people know that life is not as simple as that. Experienced improvers have long recognised that interventions—the specific tools and activities introduced into a healthcare system with the aim of changing its performance for the better1—flex and morph. Clever improvers watch and describe how this happens. Even more clever improvers plan and actively manage the process in a way that optimises the impact of the improvement initiative. The challenge is that while most improvers (the authors included) appreciate the importance of carefully designing an improvement intervention, they (we) rarely do so in a sufficiently clever way. In this article, we describe our attempts as an experienced team of practitioners, improvers, commissioners and evaluators to design an effective intervention to improve the safety of people living in care homes in England. We highlight how the design of the intervention, as described in the original grant proposal, changed significantly throughout the initiative. We outline how the changes that were made resulted in a more effective intervention but how our failure to design a better intervention from the start reduced the overall impact of the project. Drawing on the rapidly expanding literature in the field and our own experience, we reflect on what we would do differently if we could have our time again.

A practical case study—an initiative to improve the safety of people living in care homes

A growing number of vulnerable older people are living in care homes and are at increased risk of preventable harm. We carried out a safety improvement programme with a linked participatory multimethod evaluation2 in care homes in the south east of England. Ninety homes were recruited in four separate cohorts over a 2-year period. Our aim was to reduce the prevalence of three of the most common safety events in the sector—falls, pressure ulcers and urinary tract infections—and thereby to reduce unnecessary attendances at emergency departments and admissions to hospital. In the original proposal submitted to the funding body, we described a multifaceted intervention comprising three main elements:

1. The measurement and benchmarking of (i) the prevalence of the target safety incidents using a nationally designed tool called the NHS Safety Thermometer3 and (ii) rates of emergency department attendances and hospital admissions using routinely collected data.

2. Training in quality improvement methods, provided initially by a team of NHS improvement advisors and then, using a ‘train the trainer’ model, by practitioners working with or in the care homes.

3. The use of a specially adapted version of the Manchester Patient Safety Framework4 (Marshall M, de Silva D, Cruickshank L, et al. Understanding the safety culture of care homes; insights from the adaptation of a health service safety culture assessment tool for use in the care home sector. Submitted to BMJ Qual Saf, August 2016), a formative assessment tool which provides insights into safety culture for frontline teams.

The intervention was underpinned by a strong emphasis on support and shared learning, using communities of practice and online resources facilitated by the improvement team.
The programme theory hypothesised that the three main elements of the intervention (benchmarking, learning improvement skills and cultural awareness) would reduce the prevalence of safety events, that this would lead to a reduction in emergency department attendances and hospital admissions, and that both outcomes would reduce system costs as well as improving the quality of care for residents. The intervention was co-designed by improvement researchers in the evaluation team, the improvement team in the local government body responsible for commissioning care home services and a senior manager of one of the participating care homes. The design was influenced by a combination of theory, published empirical evidence and the personal knowledge and experience of the commissioners and care home manager. We built in a 6-month preparatory period at the start of the programme, prior to implementing the intervention with the first cohort of care homes. This period was used to recruit staff, establish the project infrastructure and build relationships between the care homes and the improvement and evaluation teams. Only when the programme formally started did we begin to expose some of the deficiencies in the planned intervention. Table 1 describes the different components of the intervention, whether each was part of the original plan or introduced at a later stage, and, based on our participatory evaluation, how each was implemented and the extent to which it was used.
Table 1

The original intervention and how it evolved

Intervention component | Original/added later | Ways in which the component was implemented | Extent to which the component was used
NHS Safety Thermometer (NHS-designed and -owned online tool for collecting process and outcomes data) | Original | Implemented with first cohort and offered to all of second cohort, then replaced by Safety Cross and Monthly Mapping tools (see below) | 66% of first cohort homes tried the Safety Thermometer. About one-third input data
Active involvement of staff, residents and relatives in sharing data and co-creating improvement solutions | Original | Staff initially slow to share data but became enthusiastic as project progressed. Residents and relatives hardly actively involved at all, but project details and data displayed on public notice boards in most homes | Fewer than 10% of first cohort homes shared Safety Thermometer data. 80% of homes used the Safety Cross and displayed this for staff, residents and families to see. 60% displayed graphs from the Monthly Mapping tool
Training for care home staff in improvement methodologies | Original | Quality improvement training was provided initially by the NHS staff, then adapted and provided by the improvement team | All homes took part in training. In first cohort, this was chiefly home managers, but in subsequent cohorts some senior carers also attended
Participants able to deliver the training to peers (train-the-trainer) | Original | Formal train-the-trainer model was not implemented, though local advocates (‘champions’) were encouraged to roll out learning to others | Champions were found to work well to spread learning informally
Intervention toolkit containing a compendium of evidence-based interventions for each of the domains of the Safety Thermometer | Original | Toolkit with worksheets and information sheets developed | All homes received a hard copy and an online version. Unclear how much they were used by first cohort; then dropped as Safety Thermometer replaced by Safety Cross
Safety culture assessed using the MaPSaF tool at three time points (before, during and after PROSPER), using the tool to understand and address barriers to change | Original | MaPSaF revised and tested in different ways with various cohorts | Use not prioritised by the improvement team or by the homes. Small number of homes actively used it. Progressively more significant changes made to the tool for each cohort to make it more relevant
Communities of practice | Original | Three community of practice events held throughout project | Between a half and two-thirds of homes attended the events
Improvement tools and case studies uploaded to resource tool for peer learning | Original | Knowledge hub set up and documents uploaded periodically, mainly copies of things sent by email | 10% of homes signed up and none of them posted information
Ongoing support from improvement team including meetings, visits and telephone conferences | Original | Facilitators visited homes with varied frequency. During the intensive phase, some homes were visited monthly and others every 3–4 months. Group telephone conferences were not used | Some homes received regular support and others did not. Some homes reported that they had no contact with their allocated improvement adviser for 6 months
‘Safety Cross’ for displaying information about monthly incidents, replacing Safety Thermometer (see above) | Addition | Used from cohort two homes onwards, then also rolled out to cohort one | About 80% of homes reported using it
‘Monthly Mapping tool’ using graphs with monthly data to track changes over time and compare averages | Addition | All homes were invited to provide data about the monthly incidence of harms. From cohort three onwards, homes were given access to an online tool | About 60% of homes provided some data. One-quarter used the tool regularly without prompting
Provision of resources such as information posters, certificates of training, mirrors to view pressure ulcers and other tangible resources | Addition | Resources developed ad hoc | Homes offered tools during community of practice events and visits. Variable uptake depending on focus. Resources appeared to be highly appreciated
Provision of additional training beyond improvement methods courses, such as training in infection control and pressure ulcers | Addition | Twenty-six training sessions run | About 50% of homes participated
Coordination with partner organisations in the NHS | Addition | Varied by geographical area | Varied by geographical area
Monthly newsletter | Addition | Sent monthly to participating homes | 60% of home managers reported reading it

Green=implemented as planned; Amber=partly implemented as planned; Red=not implemented as planned.

MaPSaF, Manchester Patient Safety Framework.

The evaluation found that four of the nine original components of the intervention were not implemented as planned and two were only partially implemented as planned. Only three of the nine were implemented in line with the original proposal. Five of the six new intervention components, designed and implemented while the initiative was taking place, were fully implemented. Qualitative evaluative data, collected using interviews, surveys and observations, demonstrated changes in the attitudes of frontline staff to safety and changes in their working practices. However, quantitative data suggested only small and variable changes of questionable statistical significance in the prevalence of safety incidents, and no impact on the background rising rates of emergency department attendances and hospital admissions.

Success or failure?

Perhaps we should not be too hard on ourselves. On the surface at least, our intervention was more sophisticated than that seen in most improvement projects.5 The multifaceted intervention had complementary measurement, educational and culture-change elements and was co-designed by a wide group of stakeholders, including a practitioner and experienced improvement science academics. We based the design on a reasonable programme theory and an explicit logic model. We recognised the need to adapt off-the-shelf tools to the local context and to build in a preparatory period prior to formally evaluating the intervention. And we purposefully chose a participatory and formative evaluation model to support a feedback cycle as the initiative progressed. As a project team, we thought that we had designed the original intervention thoughtfully and carefully, but the findings of our evaluation suggested that we could have done a lot better. Reflecting towards the end of the programme, we considered a number of possible explanations: that we did not put enough time and effort into designing the intervention; that we designed a sound intervention which was not implemented sufficiently well, or was implemented without an adequate understanding of the context; and that our expectation that an intervention at such an early stage of development would have a significant impact was naïve. We then revisited the literature to examine these hypotheses.

What the literature suggests we should have done

There is no shortage of increasingly sophisticated theory, empirical evidence and learned commentary that could have guided our design decisions. Much of the thinking about interventions is relatively new; a state-of-the-art review of improvement published in the Lancet more than 15 years ago made no specific reference to the ways in which interventions morph when applied in practice.6 In contrast, more recent international guidance on designing, conducting and writing up improvement projects highlights the importance of describing how improvement interventions change.7 In brief, a number of themes relating to the design of effective interventions are emerging in the literature. First, the importance of using theory (‘a chain of reasoning’) to optimise the design and effectiveness of interventions is highlighted.8 A commonsense rather than an overly academic approach to theory is advocated as a way of reducing the risk of ‘magical thinking’, which encourages improvers to use interventions that look superficially attractive but whose mechanisms of action are unclear.8 9 Alongside the use of theory, there is a growing interest in the application of ‘design thinking’, both as a strategy for ensuring that the problem has been clearly identified and as a way of addressing complex problems in rapidly changing environments.10 Second, the literature describes the importance of having an explicit method, such as the Institute for Healthcare Improvement's Model for Improvement using Plan-Do-Study-Act cycles, and of understanding how to use such methods to their full potential.11 Third, there is a growing emphasis on the extent to which improvement interventions are social as well as technical in nature, and on how their effectiveness is a consequence of a complex interaction between people, organisational structures and processes.12 13 Fourth, the literature describes how what people do (intervention), how they do it (implementation) and the wider environment (context) are interdependent, and some suggest that the traditional differentiation between this classic triad is no longer helpful.14 Fifth, there is a growing consensus that improvement efforts are being evaluated too early in their development and, as a consequence, are being judged unfairly as ineffective.15 16 Instead, there are calls for interventions to be categorised according to the ‘degree of belief’ that they will work,16 a belief that becomes stronger as a project progresses. Interventions in the early ‘innovation’ phase should be evaluated using different methods from those in the later ‘testing’ or ‘spread’ phases. They may also have a different intent; for example, changes in behaviour may be counted as ‘success’ before measurable changes in outcome are achieved. Sixth, drawing on the expanding field of knowledge mobilisation,17 18 experts are calling for a more active process of co-design of improvement initiatives involving service users, practitioners and improvers, as well as academics, with all of these stakeholders contributing to participatory models of evaluation.19

What we would do differently

Having reviewed the literature, we came to the conclusion that each of the post hoc hypotheses was a reasonable explanation for results that are not uncommon in the field of improvement, but that were nevertheless disappointing. In future, we will put more effort into designing the intervention from the very start. We will think through the design issues in sufficient detail not only to persuade the funder of the project but also to persuade ourselves that it will work in practice. We will describe a programme theory in greater detail, based on a better understanding of the contextual factors which could impact on the feasibility and effectiveness of the initiative, and we will use design thinking to rigorously frame the problem from the start. We will work through in more detail, and more systematically, how current thinking about intervention design applies to our project. We will build in a similar or even longer preparatory period and will use that period to test and refine the intervention. We will not rely on a single senior care home manager to provide a practitioner view for the original proposal; instead, we will seek a wide range of views from frontline staff and from care home residents in an inclusive and iterative way. We will not assume that the intervention can be implemented as described in the proposal, and we will be more sensitive to the resource constraints under which the improvement team and the care homes are operating. If we do all of this, the outcome will almost certainly be better.

Final reflections

Improvement initiatives are sometimes planned on the hard high ground, but they are put into effect in the swampy lowlands.20 As we are more than aware, frontline practice is messy. And as we have described in this paper, it is never possible to do things perfectly, and good improvers are always learning. But as the improvement movement matures, we are getting to the stage where we could and should be doing better. Improvement needs to be seen as a professional rather than an amateur sport. The importance of understanding that improvement interventions are not like drugs or medical devices, and that flexibility needs to be built into their design and delivery, is incontestable. But it is no longer acceptable to use the need for flexibility as an excuse for a lack of thought and planning. As improvement becomes more rigorous, perhaps improvement practitioners will be able to plan their work with a higher degree of certainty, funders will be more confident that they know what they are buying and evaluators will be able to focus on whether and how ‘it’ works.
References

1.  The science of improvement.

Authors:  Donald M Berwick
Journal:  JAMA       Date:  2008-03-12       Impact factor: 56.272

2.  Assessing organisational culture for quality and safety improvement: a national survey of tools and tool use.

Authors:  R Mannion; F H Konteh; H T O Davies
Journal:  Qual Saf Health Care       Date:  2009-04

3.  Bridging the ivory towers and the swampy lowlands; increasing the impact of health services research on quality improvement.

Authors:  Martin N Marshall
Journal:  Int J Qual Health Care       Date:  2013-10-17       Impact factor: 2.038

4.  Recommendations for evaluation of health care improvement initiatives. (Review)

Authors:  Gareth J Parry; Andrew Carson-Stevens; Donna F Luff; Marianne E McPherson; Donald A Goldmann
Journal:  Acad Pediatr       Date:  2013 Nov-Dec       Impact factor: 3.107

5.  Explaining Michigan: developing an ex post theory of a quality improvement program.

Authors:  Mary Dixon-Woods; Charles L Bosk; Emma Louise Aveling; Christine A Goeschel; Peter J Pronovost
Journal:  Milbank Q       Date:  2011-06       Impact factor: 4.911

6.  Theorising interventions as events in systems. (Review)

Authors:  Penelope Hawe; Alan Shiell; Therese Riley
Journal:  Am J Community Psychol       Date:  2009-06

7.  Patient safety culture in primary care: developing a theoretical framework for practical use.

Authors:  Susan Kirk; Dianne Parker; Tanya Claridge; Aneez Esmail; Martin Marshall
Journal:  Qual Saf Health Care       Date:  2007-08

8.  Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide.

Authors:  Tammy C Hoffmann; Paul P Glasziou; Isabelle Boutron; Ruairidh Milne; Rafael Perera; David Moher; Douglas G Altman; Virginia Barbour; Helen Macdonald; Marie Johnston; Sarah E Lamb; Mary Dixon-Woods; Peter McCulloch; Jeremy C Wyatt; An-Wen Chan; Susan Michie
Journal:  BMJ       Date:  2014-03-07

9.  Ten challenges in improving quality in healthcare: lessons from the Health Foundation's programme evaluations and relevant literature.

Authors:  Mary Dixon-Woods; Sarah McNicol; Graham Martin
Journal:  BMJ Qual Saf       Date:  2012-04-28       Impact factor: 7.035

10.  SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process.

Authors:  Greg Ogrinc; Louise Davies; Daisy Goodman; Paul Batalden; Frank Davidoff; David Stevens
Journal:  BMJ Qual Saf       Date:  2015-09-14       Impact factor: 7.035
