Improving emergency and admission care in low-resource, high mortality hospital settings-not as easy as A, B and C.

Mike English

Abstract

Keywords:  Implementation; complexity; quality of care; study design

Year:  2022        PMID: 34673985      PMCID: PMC9189608          DOI: 10.1093/heapol/czab128

Source DB:  PubMed          Journal:  Health Policy Plan        ISSN: 0268-1080            Impact factor:   3.547


Strategies to provide emergency care training and to promote use of clinical guidelines targeting common severe illness in newborns and children are available, but there remain few studies testing the consequences of introducing them through multifaceted interventions. Unlike studies that compare two quite specific treatments (e.g. drug A vs drug B), interventions that target improved delivery of emergency care and multiple guidelines require many changes in the organization and behaviour of multiple health workers (e.g. so that a team follows the key A, B and C steps of an emergency protocol and then continues to offer correct care over hours or days).

Some approaches to evaluating such multifaceted interventions try to specify the complicated but logical sequence of changes or steps in care that must be achieved so that patient outcomes, including mortality, improve. These approaches may also recognize the complicated set of contextual factors that may influence implementation itself and patient outcomes. In principle, good study designs and analytic models might then account for all such influential factors to provide good evidence of effect for decision makers. There are concerns, however, that even very carefully designed studies that treat health care delivery as a complicated problem, especially those focused on organizations as the unit of study, may not in the end provide the evidence needed. Instead, we need study designs and research platforms that deal with complex systems.

The number and quality of clinical trials examining specific alternative disease treatments (A vs B) in low-resource settings (LRSs) have increased because of dedicated funding streams. Rigorous evaluations of innovations or improvements in service delivery, such as that now reported by Hategeka et al., are far fewer (Hategeka et al.).
In this example the intervention, an ETAT+ package (Irimu et al.), aimed to promote use of multiple recommended practices to improve the quality of hospital care in Rwanda. These included changing the way individual health workers and teams organize their practice (e.g. care guided by the A, B and Cs for triage and resuscitation) and correctly identifying and treating children and newborns with specific conditions (e.g. diagnosis and management of severe dehydration or sepsis) (Hategeka et al.).

In many ways improving service delivery seems self-evidently a good thing. Who, anywhere in the world, would not wish their sick child or newborn to have access to a hospital where health workers have the systems and skills to handle serious illness effectively? So, what should we make of a controlled interrupted time-series study of a multifaceted intervention to achieve this in Rwanda that had no statistically significant impact on all-cause neonatal and paediatric hospital mortality, although a possible impact on the case fatality of targeted neonatal conditions (Hategeka et al.)? For some, the answer will be simple: strictly, this study provides insufficient evidence that the intervention works well enough, so why use it? For others, providing evidence on what improves hospital care and outcomes is more complicated. For yet others, the answer is, well, complex.

Making things complicated are the number and range of issues that make designing, conducting and interpreting multifaceted interventions targeting clinical care in hospitals problematic. I reflect briefly on just three: how interventions work, achieving control in comparative studies, and choosing the right measures of success.

First, we ought to be able to articulate how our intervention should achieve our desired outcomes. Logic models, theories of change and directed acyclic graphs may help and may be supported by other approaches for understanding implementation (Michie et al.; Powell et al.; De Silva et al.).
This conceptual thinking should identify the key intermediary changes in clinical practice needed to produce mortality effects, if that is their goal. Ideally, prospectively developed models then drive specific data collection on important intermediary effects. For example, in studies testing interventions to improve triage, emergency and immediate admission care, impacts on mortality seem predicated on improving the speed and accuracy with which sick children are seen, assessed and managed. So, do such intermediary outcomes change? Here, we need to distinguish evaluating intermediary effects from the fidelity of intervention delivery (Moore et al.). The latter is primarily concerned with whether, or how well, what is planned is done. Failure to implement may explain why both key intermediary effects and final outcomes are not achieved. Conversely, if implementation goes as planned but key intermediary effects are not seen, how do we interpret any change in our main outcome measure? Examining intermediary effects early may even call into question the value of proceeding to a multi-year mortality endpoint, and so limit research waste.

Unfortunately, an overarching challenge with service delivery interventions, and models of them, is that they can become hugely complicated. A recent typology included over 70 distinct implementation components that might be deployed (Powell et al.). In one Kenyan hospital improvement project over 20 were deployed simultaneously, and evaluating their successful delivery and multiple intermediary effects can become an overwhelming task (English et al.).

Such complicated interventions also pose challenges for 'controlled' studies. Hategeka et al.'s study design is said to control for 'differences in characteristics between intervention and control hospitals that remained constant or changed over time' (Hategeka et al.). When hospitals are the units of intervention, the numbers involved in trials are typically few.
Conversely, the number of factors that may act as confounders or introduce bias is extremely large, and many may be important but unmeasurable (e.g. clinical leadership) (English et al.). Is it then reasonable to expect that two groups of hospitals are 'balanced' with respect to a huge array of largely unknown but influential factors? Furthermore, such factors may vary across place and over time; for example, an influential clinical leader is transferred from one hospital to another. How is such a change controlled for, other than by assuming that changes 'balance out' over time between intervention and control groups? Is this likely when relatively small numbers of organizations are studied? If we cannot verify assumptions of balance, how safe are we in ever making assumptions of internal validity (English et al.)?

Impact on mortality to justify intervention in high-mortality settings is often what we desperately want and what funders demand. It typically means improving many intermediary aspects of quality of care, as discussed above. However, is mortality a good aggregate measure of quality of care? Many would say no, because mortality is highly dependent on case severity and case mix, which vary across place and time (Lilford and Pronovost, 2010). For example, we have seen three-fold variation in mortality across hospitals in Kenya (Irimu et al.). Adjusting for these factors requires detailed individual patient data, which are rarely available. Perhaps more critically, mortality is strongly influenced by the cumulative quality and safety of care (or its absence) prior to and over days or even weeks of admission. So how reasonable is it to expect that improving important but time-bound aspects of care will impact mortality when whole systems are weak?
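The single-series logic that such 'controlled' interrupted time-series designs extend is a segmented regression: a baseline trend plus a level change and a slope change at the point of intervention. The sketch below is a minimal illustration of that model only; all numbers are simulated and none are taken from Hategeka et al.'s study.

```python
# Minimal sketch of the segmented-regression model behind an interrupted
# time-series (ITS) analysis. All data are simulated for illustration;
# nothing here comes from the Rwanda study itself.
import numpy as np

rng = np.random.default_rng(42)
months = np.arange(48.0)                  # 4 years of monthly observations
post = (months >= 24).astype(float)       # intervention introduced at month 24
time_since = post * (months - 24)         # months elapsed since intervention

# Simulated mortality rate (%): baseline decline, then a level and slope change.
true_level_change, true_slope_change = -0.5, -0.03
mortality = (8.0 - 0.02 * months
             + true_level_change * post
             + true_slope_change * time_since
             + rng.normal(0.0, 0.2, months.size))

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones_like(months), months, post, time_since])
beta, *_ = np.linalg.lstsq(X, mortality, rcond=None)
print(f"estimated level change: {beta[2]:.2f}, slope change: {beta[3]:.3f}")
```

A 'controlled' variant adds a comparison series and difference terms for each coefficient; the commentary's concern is that, with only a handful of hospitals, nothing guarantees the many unmeasured factors lying behind these coefficients are balanced between arms.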
Some trials of service delivery interventions have demonstrated effects on hospital mortality in LRSs, but these employed randomization at the individual level and well-resourced, sustained and trial-supported change efforts (Biai et al.; WHO Immediate KMC Study Group, 2021). When interpreting these, the contribution of an often supernumerary 'trial team' is frequently ignored as an input.

So, if we cannot demonstrate improved mortality, are interventions that (only) improve care processes useful? Very small improvements in hospital mortality, ones that are almost impossible to 'prove' are associated with the intervention, may still be highly cost-effective (Barasa et al.). So should interventions that do not impact mortality, but that improve adoption of multiple evidence-based therapies or management steps, be taken to scale?

We outline above how the challenges of statistically testing A vs B interventions, while characterizing and accounting for every complicated aspect of context and intervention delivery, may be insuperable. Some now regard pursuit of this elaborate but still reductive approach as missing the point, because in both high- and lower-resource settings we are dealing with Complex (Adaptive) Systems. These defy explanation based on linear cause-and-effect models, and it is beyond the scope of this article to explain the ideas in full (but for an introduction see Greenhalgh and Papoutsi, 2018). Their importance lies in what they mean for evaluators of service delivery interventions. Such evaluations are very rarely simple A vs B comparisons. Nor are they even tests of A + B + C intervention packages complicated by factors X, Y and Z (and more) representing measurable variations in context or fidelity. Instead, to develop the field we need strategies for evaluating service delivery interventions that pay attention to complexity, and clinical researchers need to partner with, learn from and employ methods often developed by social scientists, economists, engineers and others.
Also critical is ensuring that those with knowledge of the context and systems are central to evaluation, as 'insider knowledge' is key to building understanding. At the same time, we should examine outcomes valued by policy makers, practitioners and communities, who ultimately have the task of sustaining implementation (Gilson et al.). Embedded, multidisciplinary research linked to collaborative learning platforms with good historical data on context and key outcomes may be especially helpful in this regard (English et al.).
References (10 of 16 shown):

1.  Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away.

Authors:  Richard Lilford; Peter Pronovost
Journal:  BMJ       Date:  2010-04-20

2.  Impact of a multifaceted intervention to improve emergency care on newborn and child health outcomes in Rwanda.

Authors:  Celestin Hategeka; Larry D Lynd; Cynthia Kenyon; Lisine Tuyisenge; Michael R Law
Journal:  Health Policy Plan       Date:  2022-01-13       Impact factor: 3.344

3.  Reduced in-hospital mortality after improved management of children under 5 years admitted to hospital with malaria: randomised trial.

Authors:  Sidu Biai; Amabelia Rodrigues; Melba Gomes; Isabela Ribeiro; Morten Sodemann; Fernanda Alves; Peter Aaby
Journal:  BMJ       Date:  2007-10-22

4.  Studying complexity in health services research: desperately seeking an overdue paradigm shift.

Authors:  Trisha Greenhalgh; Chrysanthi Papoutsi
Journal:  BMC Med       Date:  2018-06-20       Impact factor: 8.775

5.  Neonatal mortality in Kenyan hospitals: a multisite, retrospective, cohort study.

Authors:  Grace Irimu; Jalemba Aluvaala; Lucas Malla; Sylvia Omoke; Morris Ogero; George Mbevi; Mary Waiyego; Caroline Mwangi; Fred Were; David Gathara; Ambrose Agweyu; Samuel Akech; Mike English
Journal:  BMJ Glob Health       Date:  2021-05

6.  Immediate "Kangaroo Mother Care" and Survival of Infants with Low Birth Weight.

Authors:  Sugandha Arya; Helga Naburi; Kondwani Kawaza; Sam Newton; Chineme H Anyabolu; Nils Bergman; Suman P N Rao; Pratima Mittal; Evelyne Assenga; Luis Gadama; Roderick Larsen-Reindorf; Oluwafemi Kuti; Agnes Linnér; Sachiyo Yoshida; Nidhi Chopra; Matilda Ngarina; Ausbert T Msusa; Adwoa Boakye-Yiadom; Bankole P Kuti; Barak Morgan; Nicole Minckas; Jyotsna Suri; Robert Moshiro; Vincent Samuel; Naana Wireko-Brobby; Siren Rettedal; Harsh V Jaiswal; M Jeeva Sankar; Isaac Nyanor; Hiresh Tiwary; Pratima Anand; Alexander A Manu; Kashika Nagpal; Daniel Ansong; Isha Saini; Kailash C Aggarwal; Nitya Wadhwa; Rajiv Bahl; Bjorn Westrup; Ebunoluwa A Adejuyigbe; Gyikua Plange-Rhule; Queen Dube; Harish Chellani; Augustine Massawe
Journal:  N Engl J Med       Date:  2021-05-27       Impact factor: 91.245

7.  Strengthening evaluation and implementation by specifying components of behaviour change interventions: a study protocol.

Authors:  Susan Michie; Charles Abraham; Martin P Eccles; Jill J Francis; Wendy Hardeman; Marie Johnston
Journal:  Implement Sci       Date:  2011-02-07       Impact factor: 7.327

8.  What do we think we are doing? How might a clinical information network be promoting implementation of recommended paediatric care practices in Kenyan hospitals?

Authors:  Mike English; Philip Ayieko; Rachel Nyamai; Fred Were; David Githanga; Grace Irimu
Journal:  Health Res Policy Syst       Date:  2017-02-02

9.  Collective sensemaking for action: researchers and decision makers working collaboratively to strengthen health systems.

Authors:  Lucy Gilson; Edwine Barasa; Leanne Brady; Nancy Kagwanja; Nonhlanhla Nxumalo; Jacinta Nzinga; Sassy Molyneux; Benjamin Tsofa
Journal:  BMJ       Date:  2021-02-15

10.  Theory of Change: a theory-driven approach to enhance the Medical Research Council's framework for complex interventions.

Authors:  Mary J De Silva; Erica Breuer; Lucy Lee; Laura Asher; Neerja Chowdhary; Crick Lund; Vikram Patel
Journal:  Trials       Date:  2014-07-05       Impact factor: 2.279

