
From research to practice: results of 7300 mortality retrospective case record reviews in four acute hospitals in the North-East of England.

Anthony Paul Roberts1, Gerry Morrow2, Michael Walkley3, Linda Flavell2, Terry Phillips2, Eliot Sykes4, Graeme Kirkpatrick5, Diane Monkhouse3, David Laws6, Christopher Gray5.   

Abstract

INTRODUCTION: Monitoring hospital mortality using retrospective case record review (RCRR) is being adopted throughout the National Health Service (NHS) in England with publication of estimates of avoidable mortality beginning in 2017. We describe our experience of reviewing the care records of inpatients who died following admission to hospital in four acute hospital NHS Foundation Trusts in the North-East of England.
METHODS: RCRR of 7370 patients who died between January 2012 and December 2015. Cases were reviewed by consultant reviewers with support from other disciplines and graded in terms of quality of care and preventability of deaths. Results were compared with the estimates published in the Preventable Incidents, Survival and Mortality (PRISM) studies, which established the original method.
RESULTS: 34 patients (0.5%, 95% CI 0.3% to 0.6%) were judged to have a greater than 50% probability of death being preventable. 1680 patients (22.3%, 95% CI 21.4% to 23.3%) were judged to have room for improvement in clinical and/or organisational aspects of care, or less than satisfactory care.
CONCLUSIONS: Reviews using clinicians within trusts produce lower estimates of preventable deaths than published results using external clinicians. More research is needed to understand the reasons for this, but as the requirement for NHS Trusts to publish estimates of preventable mortality is based on reviews by consultants working for those trusts, lower estimates of preventable mortality can be expected. Room for improvement in the quality of care is more common than preventability of death, so mortality reviews contribute to improvement activity even though the outcome of care cannot be changed. RCRR conducted internally is a feasible mechanism for delivering quantitative analysis and in the future can provide qualitative insights relating to in-hospital deaths.

Keywords:  morbidity and mortality rounds; patient safety; performance measures; quality improvement; quality measurement

Year:  2017        PMID: 29450286      PMCID: PMC5699137          DOI: 10.1136/bmjoq-2017-000123

Source DB:  PubMed          Journal:  BMJ Open Qual        ISSN: 2399-6641


Background

In July 2013, Professor Sir Bruce Keogh published his overview of the reviews he led into 14 hospital trusts in England that had a persistently high hospital standardised mortality ratio or summary hospital-level mortality indicator for 2 years.1 Controversy has continued about the use of such mortality ratios2–9 and Keogh’s first recommendation included commissioning ‘a study into the relationship between “excess mortality rates” and actual “avoidable deaths”. It will involve conducting retrospective case note reviews (RCRR) on a substantial random sample of in-hospital deaths from trusts with lower than expected, as expected and higher than expected mortality rates’. This study, called PRISM 2,10 is an extension of the earlier PRISM 1 study,11 which was based on earlier work using RCRR.12–14 It sought to assess the rate of preventable deaths using a retrospective case note review method. The PRISM studies were intended to inform the development of a new hospital mortality indicator for the National Health Service (NHS) called ‘5a Deaths attributable to problems in healthcare’.15 However, estimating deaths due to medical error (or problems in care, in the language of the PRISM studies) is controversial16 and the NHS has procured a competing method, the Structured Judgement Review (SJR), delivered by a consortium led by the Royal College of Physicians to deliver training for RCRRs.17 The NHS has recently published draft guidance requiring all NHS Trusts in England to publish estimates of avoidable mortality rates based on PRISM, SJR or other evidence-based methods.18

The National Confidential Enquiry into Patient Outcome and Death (NCEPOD)19 has a long-established case note review process for reviewing mortality. To date, 36 of these reports have been published, including assessments of the perioperative care of surgical patients20 using a recognised quality of care grading scale.
Other prominent bodies, including the Royal College of Surgeons of England,21 have also assessed and reported on the care of surgical patients. NHS Foundation Trusts, providing acute hospital care, in the North-East (NE) of England have established their own hospital mortality review programmes employing an adaptation of PRISM methodology. Four of these trusts combined data from the first 7370 reviews conducted by their central review teams. The central clinical mortality review process complements existing mortality and morbidity meetings and national statistical measures of hospital mortality. Trusts aimed to learn from mortality review in a more systematic way than had hitherto been possible from specialty-based mortality and morbidity meeting-based approaches. The four providers of acute hospital care are City Hospitals Sunderland NHS Foundation Trust, County Durham and Darlington NHS Foundation Trust, Northumbria Healthcare NHS Foundation Trust and South Tees Hospitals NHS Foundation Trust.

Methods

Identification of cases for review

The four trusts used different methods to identify cases, depending on the functionality of their electronic or paper-based clinical records. Records were selected for review in order to maximise the opportunity to learn from problems in care. Deaths were identified: where a complaint or incident was recorded in incident reporting systems; where one of the various hospital standardised mortality indicator measures used by the NHS suggested mortality might be higher than expected; through referral to the mortality review process by clinical teams; and following elective admission. Hospitals supplemented these with unselected deaths identified through convenience sampling, based on the availability of case notes. A high proportion of deaths were reviewed. Mortality reviews were instigated in these trusts as a practical approach to clinical quality improvement and assurance, rather than as a formal research study to ascertain the rate of potentially avoidable mortality. For the purposes of this study, we removed duplicate patients and any patients with a discharge code indicating that the death had not occurred in hospital. A key objective was to test the feasibility of introducing a centralised method for reviewing deaths. Many believed that reviewing a high proportion of hospital deaths would be impractical, given that approximately half of all deaths in the NE of England occur in hospital.

Case note review methods

Basic data from the hospital record, including patient demographics, date of admission, date of death and method of admission (elective or unplanned), were obtained from the hospitals’ Patient Administration Systems. The case notes were reviewed against a structured questionnaire with data being entered into spreadsheets or databases at each site. The questionnaires used were local adaptations of PRISM 1. Questions were not identical in all trusts and there was some evolution over time in each, but they were broadly similar and concerned: cause of death, prehospital care, initial hospital clerking, first review and/or consultant review, management of deterioration, grading of preventability and quality of care, and a clinical résumé of the case. Reviews were conducted largely by consultants, in some cases supported by nursing colleagues. Each hospital organised its review process differently, but all were aware of the work of the other hospitals, partly through a Regional Mortality Group, which had been established to share learning in this area. There was a small amount of peer review work, with teams reviewing cases in each other’s hospitals as a way of increasing the consistency of reviews. When a case was controversial or interpretation problematic, this was discussed within the team present. Grading of care used the 6-point preventability Likert scale used by PRISM and the 5-point NCEPOD grading of quality of care. The choice of open and closed questions facilitated ease of data handling and provided a detailed understanding of the patient’s journey. Data from the four databases were amalgamated into a single database for reporting.

Ethical approval

Mortality reviews were conducted in the four trusts as part of normal clinical governance and quality assurance processes. All trusts sought to fulfil their Duty of Candour22 obligations. The reviews were intended as audits aimed at improving care processes for patients and were not originally conceived as research nor intended for publication. Hence, it was believed that research ethics approval was neither appropriate nor needed. When we analysed the data together, it became clear that we had collected a larger number of reviews than had previously been published and that our estimates of preventability differed from those of the PRISM studies. We therefore sought to publish our data to inform the current debate about routine publication of mortality review information in the English National Health Service, which is due to begin in September 2017.

Feedback methods

The four trusts used a variety of methods to communicate outputs from the RCRR to specialty teams and to the wider organisation. Where problems were identified in the care of specific patients, feedback to the team responsible for the clinical care was provided by the review team. In most cases, the managing team was then asked to review the case and comment. In some cases, incident reporting was already in place (eg, because of a patient fall, hospital-associated infection or other safety event) or an incident report was instigated. If the problem identified was serious, discussion was escalated to the Medical Director of the trust. Individual trusts produced thematic mortality reports (detailing recurring themes within the patient journeys), which were presented to the trusts’ mortality committees or other parts of the governance structure. Reports were also discussed at trust and directorate governance or education meetings. Although we have not performed a formal thematic analysis, we know that problems in care (or documentation of care) at the end of life, in identifying and responding to acute deterioration, and in communication between multiple clinical teams looking after complex patients were found in all four trusts. Trusts either created ‘mortality leads’ in clinical departments and directorates or used existing governance structures, in an attempt to raise awareness that the trust had established centralised mortality review systems complementing existing specialty-based morbidity and mortality meetings.

Results

Patient characteristics

In total, 7370 mortality review records were analysed; the cumulative figures are provided in table 1. The data are shown in two time periods to show that more than 60% of the reviews were conducted in the latter 2 years and to show that a review rate of more than 50% is achievable once the process is established.
Table 1

Numbers and proportion of deaths reviewed

Years        Measure                 Total
2012–2015    Inpatient mortality     32 441
             Deaths reviewed         7370
             Review rate (%)         23
2014–2015    Inpatient mortality     8266
             Deaths reviewed         4475
             Review rate (%)         54

Quantitative scores

Our analysis focused on two key quantitative scores: the NCEPOD quality of care scale and the Preventability scale. Tables 2 and 3 provide details of the results for each of these scores.

Trust Preventability scales compared with PRISM 2

Figure 1 and table 4 provide a comparison between the NE total and the deaths graded as preventable reported in PRISM 2. There were no statistically significant differences between the four trusts (using the CI method; data not shown).
Figure 1

Comparison of quality and preventability grading between North-East and PRISM 2.

Table 4

Comparison to PRISM 2 preventability scoring

                                                 North-East total   PRISM 2
Percentage of deaths, preventability score >3         0.47%           3.0%
95% CI lower limit                                    0.31%           2.4%
95% CI upper limit                                    0.63%           3.7%

There is a statistically significant difference between the series of deaths in the NE cohort and PRISM 2: a gap of 1.77 percentage points separates the lower 95% CI limit of the PRISM 2 preventability estimate (2.4%) from the upper 95% CI limit of the NE trust group (0.63%).
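The interval estimates above are consistent with a simple normal-approximation (Wald) confidence interval on the proportion of scored reviews. A minimal Python sketch (illustrative only, not the study's analysis code) reproduces the table 4 figures from the table 3 counts (34 of 7194 reviews with a score):

```python
from math import sqrt

def prop_ci(k, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion k/n."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

# NE cohort: 34 deaths graded preventability >3, out of 7194 scored reviews
p, lo, hi = prop_ci(34, 7194)
print(f"{p:.2%} (95% CI {lo:.2%} to {hi:.2%})")  # 0.47% (95% CI 0.31% to 0.63%)

# Gap between the PRISM 2 lower limit (2.4%) and the NE upper limit
print(f"gap = {(0.024 - hi) * 100:.2f} percentage points")  # gap = 1.77 percentage points
```

That the Wald interval recovers the published limits suggests this is essentially the "CI method" the authors mention; a Wilson interval would give slightly different limits at proportions this small.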

Discussion

Main findings

We have provided data for 3 years relating to over 7300 inpatient deaths in four acute trusts in the NE of England. Each trust conducted between 1200 and 2200 reviews over this period. During 2014–2015, this equates to an average of 54% of all in-hospital deaths. Data relevant to these deaths were uploaded to a single database. Internal reviews were conducted at each trust and scoring was determined across two nationally accepted standardised parameters for each death. Using a unified database and standardised data collection enabled a consistent approach to understanding deaths at each trust, to data analysis and to the possibility of shared learning across a region. The overwhelming majority of all deaths reviewed were categorised as having had ‘good’ care and as being ‘definitely not’ preventable. A small number of deaths in this cohort (range 0.1%–0.8% across trusts, average 0.5%) were identified as having ‘room for improvement’ in care and a ‘greater than 50% chance of being preventable’. If we extrapolate this finding, it equates to 1143 in-hospital deaths across NHS England that could potentially be described as ‘avoidable’. We believe that this regional unified approach delivers a pragmatic platform for learning from in-hospital deaths. In addition, as all trusts seek to implement the recommendations of the Care Quality Commission (CQC) report ‘Learning, candour and accountability’,23 our approach allows providers to learn from our experience.

Strengths and limitations of the study

This is the largest published series of mortality reviews. The reviews were conducted for learning purposes rather than as research. At the inception of this process, it was not intended that the learning from these reviews would be used more widely or form part of a quantitative study of mortality, and trusts did not believe the reviews would be used to publish estimated rates of preventable deaths. We maintain that this provides evidence that internal trust reviewers were, if anything, likely to be less tolerant of problems in care at their own trust, as they wished to find opportunities to learn from any problems they could identify through this process. This retrospective study was conducted in one region, the NE of England, which has the highest mortality rate in NHS England.24 The mortality data are restricted to four NHS acute foundation trusts. There was no external validation of the qualitative or quantitative elements of the reviews. These trusts are self-selected, the deaths in this cohort were not random, and to date we have not attempted a thematic analysis of lessons learnt from reviews.

Comparison with existing evidence

Our data show that the reviews conducted by internal reviewers at the four participating trusts produced lower estimates of preventability than previously published data (PRISM 2), with all trusts showing significantly lower preventability than the PRISM 2 estimate.10 The PRISM 2 study produced an estimate of the proportion of reviews graded as having a greater than 50% chance of being preventable (that is, preventability >3) of 3.0%, with a 95% CI of 2.4% to 3.7%. Comparing the NE figures (and their CIs) with this estimate shows that they are significantly lower than the PRISM 2 estimate for preventability. Hence, the reviews carried out in the NE differ significantly from those carried out by the PRISM 2 study. PRISM 2 is used as the benchmark for comparison as it is the largest and most recent study of retrospective mortality review data in England. It is not possible to fully explain this difference, but there are a number of possibilities. First, the quality of care provided in the NE may be better than elsewhere in NHS England. Two of the trusts were rated as ‘requires improvement’ by the CQC, one as ‘good’ and one as ‘outstanding’ during this period; however, the size of the difference in results makes this explanation implausible. Second, the ratings generated by the four trusts are based on a single review, as opposed to the ‘multiple review’ approach undertaken in the PRISM studies. Third, the NE reviews are non-random and are instead orientated by the availability of notes and the identification of cases where problems in care may be present, for example, through incident reporting. We note that as the samples are large (over 50% for most trusts), it is unlikely that this potential sampling bias is large enough to account for the resultant difference in grading. Fourth, internal reviewers may be more ‘lenient’ than external reviewers and produce systematically different results from those produced by external reviewers (as was the case for the PRISM studies).
This may be a matter of ‘calibration’: PRISM reported inter-rater reliability as only moderate (κ 0.45), similar to that reported in other studies, and NE reviewers may have been less likely than PRISM reviewers to judge a death to be preventable ‘more likely than not’. Results in table 3 show there is some room for reviewers to judge a death to be preventable without crossing this threshold (Preventability scores 2–6=5.8%, 95% CI 0.0% to 13.7%). The scale requires reviewers to select a category just above or below the 50% threshold, but with only modest inter-rater reliability, where a reviewer places a death on this spectrum may be subjective. It is possible that, as reviewers are examining deaths within their own institutions, they are subject to unconscious biases, accepting as normal care processes that would not be so regarded by external reviewers.
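For context on what a kappa of 0.45 means, a short illustrative sketch (with made-up counts, not data from this study or from PRISM) computes Cohen's kappa for two reviewers making the binary ‘more likely than not preventable’ judgement:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square contingency table:
    table[i][j] = cases rated category i by reviewer A and j by reviewer B."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
    p_exp = sum(                                             # agreement expected by chance
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 2x2 table: rows = reviewer A (not preventable / preventable),
# columns = reviewer B. Raw agreement of 90% still yields only moderate kappa.
print(round(cohens_kappa([[85, 5], [5, 5]]), 2))  # 0.44
```

Because preventable deaths are rare, even high raw agreement can correspond to only moderate chance-corrected agreement, which is consistent with the calibration concern raised above.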
Table 3

Number and proportion of reviews by Preventability scale

Preventability score                                            North-East totals   Percentage of total   PRISM 2 percentage†
1—Definitely not preventable                                          6776                91.9                  90.6
2—Slight evidence for preventability                                   290                 3.9                   3.6
3—Possibly preventable, less than 50/50                                 94                 1.3                   2.8
4—Probably preventable, greater than 50/50                              23                 0.3                   1.9
5—Strong evidence for preventability                                     9                 0.1                   1.0
6—Definitely preventable                                                 2                 0.0                   0.0
Unanswered/unable to grade                                             176                 2.4                   0.0
Total 4–6: greater than 50% chance of death being preventable           34                 0.5*                  3.0
Total 2–6: some evidence for preventability                            418                 5.8*                  9.4

*Percentage calculated from total patients with a score.

†Given in Table 1 of Hogan et al10
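The footnoted denominator convention can be checked directly: the starred percentages use the 7194 reviews that received a score (7370 total minus 176 ungraded). A quick Python sketch, using the counts from table 3:

```python
# Graded reviews by preventability score, from table 3
counts = {1: 6776, 2: 290, 3: 94, 4: 23, 5: 9, 6: 2}
ungraded = 176

scored = sum(counts.values())       # reviews with a preventability score
assert scored + ungraded == 7370    # all reviews in the series

pct = lambda k: round(100 * k / scored, 1)  # starred percentages use the scored denominator
print(pct(counts[4] + counts[5] + counts[6]))        # scores 4-6 -> 0.5
print(pct(sum(counts[s] for s in range(2, 7))))      # scores 2-6 -> 5.8
```

The ‘Unanswered/unable to grade’ row (2.4%) is the one entry calculated against the full 7370, which is why the percentage column does not sum to exactly 100 on either denominator alone.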

Further work is required to examine which of these possible causes (or a combination of these, or some other unconsidered factor) is in fact the source of the order-of-magnitude difference in results between our internal reviewers and PRISM. Perhaps embedding prospective structured mortality reviews into the clinical governance process and generating continuous targeted reminders to departments to be vigilant in areas of care identified as potentially weak had an impact on preventability. Trust processes are iterative and encourage continuous internal review, reflection and recommendations in departments. In contrast, PRISM could only provide feedback to trusts after completion of their study, and while researchers are equally keen to learn, this difference may be affecting results. Further research into the amount of learning taking place and whether avoidable mortality falls over time will be needed to support or reject this suggestion. When NHS providers publish estimates of avoidable mortality from December 2017 (for deaths from April 2017), as was announced by the Secretary of State on 13 December 2016,25 our data suggest the results will be markedly different from PRISM. Exploration of the published data should include a detailed thematic analysis of future prospective mortality reviews. Widening the focus to include multidisciplinary involvement, including prehospital input from primary care and ambulance services, should also inform improved learning after a death. Overall, we found routine, hospital-based mortality review to be feasible and useful in that it identified quality-of-care issues, but likely to report lower levels of preventability than published research.
Table 2

Number and proportion of reviews by quality of care scale

NCEPOD score                                                          North-East totals   Percentage of total
1—Good practice                                                             5522                74.9
2—Room for improvement in clinical care                                      659                 8.9
3—Room for improvement in organisational care                                790                10.7
4—Room for improvement in clinical and organisational care                   192                 2.6
5—Less than satisfactory                                                      39                 0.5
Unanswered/unable to grade                                                   168                 2.3
Total 2–5: room for improvement in clinical, organisational
or both aspects of care, or less than satisfactory care                     1680                22.3*

*Percentage calculated from total patients with a score.

NCEPOD, National Confidential Enquiry into Patient Outcome and Death.

References (showing 10 of 15)

1.  The hospital standardized mortality ratio fallacy: a narrative review.

Authors:  Yvette R B M van Gestel; Valery E P P Lemmens; Hester F Lingsma; Ignace H J T de Hingh; Harm J T Rutten; Jan Willem W Coebergh
Journal:  Med Care       Date:  2012-08

2.  Assessing the quality of hospitals.

Authors:  Nick Black
Journal:  BMJ       Date:  2010-04-20

3.  The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II.

Authors:  L L Leape; T A Brennan; N Laird; A G Lawthers; A R Localio; B A Barnes; L Hebert; J P Newhouse; P C Weiler; H Hiatt
Journal:  N Engl J Med       Date:  1991-02-07

4.  Statistics behind the headlines. Have there been 13,000 needless deaths at 14 NHS trusts?

Authors:  David Spiegelhalter
Journal:  BMJ       Date:  2013-08-07

5.  Are you 45% more likely to die in a UK hospital rather than a US hospital?

Authors:  David Spiegelhalter
Journal:  BMJ       Date:  2013-09-24

6.  Using hospital standardised mortality ratios to assess quality of care--proceed with extreme caution.

Authors:  Ian A Scott; Caroline A Brand; Grant E Phelps; Anna L Barker; Peter A Cameron
Journal:  Med J Aust       Date:  2011-06-20

7.  Estimating deaths due to medical error: the ongoing controversy and why it matters.

Authors:  Kaveh G Shojania; Mary Dixon-Woods
Journal:  BMJ Qual Saf       Date:  2016-10-12

8.  Incidence and types of adverse events and negligent care in Utah and Colorado.

Authors:  E J Thomas; D M Studdert; H R Burstin; E J Orav; T Zeena; E J Williams; K M Howard; P C Weiler; T A Brennan
Journal:  Med Care       Date:  2000-03

9.  Adverse events and potentially preventable deaths in Dutch hospitals: results of a retrospective patient record review study.

Authors:  M Zegers; M C de Bruijne; C Wagner; L H F Hoonhout; R Waaijman; M Smits; F A G Hout; L Zwaan; I Christiaans-Dingelhoff; D R M Timmermans; P P Groenewegen; G van der Wal
Journal:  Qual Saf Health Care       Date:  2009-08

10.  Avoidability of hospital deaths and association with hospital-wide mortality ratios: retrospective case record review and regression analysis.

Authors:  Helen Hogan; Rebecca Zipfel; Jenny Neuburger; Andrew Hutchings; Ara Darzi; Nick Black
Journal:  BMJ       Date:  2015-07-14
