
The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care?

B Jarman1, D Pieter, A A van der Veen, R B Kool, P Aylin, A Bottle, G P Westert, S Jones.   

Abstract

AIM OF THE STUDY: To use the hospital standardised mortality ratio (HSMR) as a tool for Dutch hospitals to analyse their death rates by comparing their risk-adjusted mortality with the national average.
METHOD: The method uses routine administrative databases that are available nationally in The Netherlands: the National Medical Registration dataset for the years 2005-2007. Diagnostic groups that led to 80% of hospital deaths were included in the analysis. The method adjusts for a number of case-mix factors per diagnostic group, determined through a logistic regression modelling process.
RESULTS: In The Netherlands, the case-mix factors are primary diagnosis, age, sex, urgency of admission, length of stay, comorbidity (Charlson Index), social deprivation, source of referral and month of admission. The Dutch HSMR model performs well at predicting a patient's risk of death, as measured by a c statistic of the receiver operating characteristic curve of 0.91. The HSMR of the Dutch hospital with the highest value in 2005-2007 is 2.3 times the HSMR of the hospital with the lowest value.
DISCUSSION: Overall hospital HSMRs and mortality at individual diagnostic group level can be monitored using statistical process control charts to give an early warning of possible problems with quality of care. The use of routine data in a standardised and robust model can be of value as a starting point for improvement of Dutch hospital outcomes. HSMRs have been calculated for several other countries.


Year:  2010        PMID: 20172876      PMCID: PMC2921266          DOI: 10.1136/qshc.2009.032953

Source DB:  PubMed          Journal:  Qual Saf Health Care        ISSN: 1475-3898


In recent years, there has been increasing interest in monitoring standards of clinical care in many countries. In the UK, the Bristol Royal Infirmary Inquiry into paediatric cardiac surgery deaths, which ran from 1999 to 2001,1 raised national awareness of the subject. In The Netherlands, an analysis of the death rates in cardiac surgery at the Radboud University by the Health Inspectorate led, in 2006, to a temporary six-month closure of the cardiac surgical department. Mortality is a “hard” outcome with special relevance to the patient. Measuring death rates has the advantage that death is a definite, unique event, unlike morbidity, which often represents a spectrum of severity and can be difficult to record accurately. Death rates, when adjusted for the factors that affect them, can act as markers of a hard outcome of healthcare. In England, the hospital standardised mortality ratio (HSMR), an overall measure of in-hospital mortality, has been used since 1999.2 About 67% of English acute hospital trusts now use Real Time Monitoring (RTM)3 for monitoring and analysing HSMRs and their component diagnosis-level SMRs in order to deploy possible patient safety improvements. RTM makes the data available, updated monthly, to hospitals via the internet. HSMRs have also been calculated for the USA, Canada, Sweden, Wales, Australia (New South Wales), France, Japan, Hong Kong and Singapore and could be used to assess mortality, identify areas for possible improvement and monitor performance over time. Until now, Dutch mortality figures, as measures of outcome of hospital care, were based on clinical databases and related to certain patient groups or procedures—for example, intensive care admissions,4 high-risk surgery5 and elderly patients.6 In the Dutch healthcare system, assessment of quality by calculating HSMRs has attracted considerable attention from government, patient organisations and the media.
A study estimated that every year, more than 1700 avoidable deaths occur in Dutch hospitals.7 Following this study, a national patient safety programme was launched in 2007 by the associations of hospitals, medical specialists and nurses, aimed at reducing the number of avoidable deaths. Measuring HSMRs is one of the tools used to monitor the quality of hospitals within this programme. Two research organisations—Prismant and De Praktijk Index—developed, with Jarman and colleagues from Imperial College London and Dr Foster Intelligence, a model to calculate HSMRs using data from the National Medical Registration (LMR) files, which contain all inpatient and day case admissions to hospitals. Over the past few years of calculating HSMRs for Dutch hospitals8 and using them to improve quality of care, several questions have been raised. The Dutch Minister of Health has announced that all Dutch hospitals should publish their HSMR in 2010.9 This article explains the current Dutch model and its statistical performance. We hope it will help hospitals understand the method and be of use for monitoring improvements in the quality of care for their patients.

Methods

The HSMR compares the actual number of hospital deaths with the expected number for those patients whose primary diagnosis falls within the set of diagnostic groups that account for 80% of all deaths in hospital nationally. The national LMR dataset for 2005–2007, made available by Prismant with permission of the Dutch Hospital Association (NVZ) and the Dutch Association of Medical Specialists, was used both to fit the logistic regression models and to calculate the reported HSMRs.

In the LMR dataset, diagnoses are coded using the International Classification of Diseases, Ninth Revision (ICD-9); 17 056 ICD-9 codes in the Dutch hospital data were assigned to the 259 Clinical Classification System (CCS) groups developed by the US Agency for Healthcare Research and Quality.10 After removing vague or undetermined diagnoses, the 50 CCS groups that gave rise to 80% of all deaths in 2005–2007 were determined (table 1); these covered 159 987 deaths. Day cases (which have very few deaths) and inpatient admissions were included in the analysis, and only patients with lengths of stay under one year were used.

Logistic regression models were fitted for each of the 50 CCS groups separately in order to generate an expected risk of death for each patient. The HSMR is derived from the sum of the observed deaths and the sum of the expected risks across the CCS groups.
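The calculation described above can be sketched as follows (a minimal illustration, not the authors' implementation; the per-patient expected risks would come from the per-CCS-group logistic regressions, and the group names below are hypothetical):

```python
def hsmr(observed_by_group, expected_by_group):
    """Hospital standardised mortality ratio.

    observed_by_group: CCS group -> observed deaths at the hospital.
    expected_by_group: CCS group -> sum of the modelled per-patient
        death risks for the hospital's admissions in that group.
    Returns 100 * total observed / total expected, so that 100
    represents the national average by construction.
    """
    total_observed = sum(observed_by_group.values())
    total_expected = sum(expected_by_group.values())
    return 100.0 * total_observed / total_expected

# Hypothetical two-group example
observed = {"pneumonia": 30, "acute myocardial infarction": 25}
expected = {"pneumonia": 28.0, "acute myocardial infarction": 22.0}
print(round(hsmr(observed, expected), 1))  # 110.0
```

An HSMR above 100 means more deaths were observed than the case-mix model predicted; below 100, fewer.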
Table 1

CCS groups included in the model with their c statistics and relevant variables

Group | C statistic | Significant variables (Yes flags; candidate variables: Age, Charlson, Deprivation, LOS, Month, Sex, Source organisation type, CCS subgroup, Urgency, Year)
Septicemia (except in labour) | 0.827 | YesYesYesYesYesYesYesYes
Cancer of oesophagus | 0.840 | YesYesYesYesYesYesYes
Cancer of stomach | 0.811 | YesYesYesYesYesYes
Cancer of colon | 0.857 | YesYesYesYesYesYesYesYes
Cancer of rectum and anus | 0.858 | YesYesYesYesYesYesYes
Cancer of pancreas | 0.776 | YesYesYesYesYesYesYes
Cancer of bronchus, lung | 0.873 | YesYesYesYesYesYesYesYes
Cancer of breast | 0.957 | YesYesYesYes
Cancer of prostate | 0.925 | YesYesYesYesYes
Cancer of bladder | 0.939 | YesYesYesYesYesYesYes
Non-Hodgkin's lymphoma | 0.923 | YesYesYesYesYesYesYesYes
Leukaemias | 0.930 | YesYesYesYesYesYesYesYes
Secondary malignancies | 0.908 | YesYesYesYesYesYesYesYesYes
Neoplasms of unspecified nature or uncertain behaviour | 0.916 | YesYesYesYesYesYesYes
Diabetes mellitus with complications | 0.848 | YesYesYesYesYesYesYes
Fluid and electrolyte disorders | 0.807 | YesYesYesYesYesYesYesYesYes
Deficiency and other anaemia | 0.911 | YesYesYesYesYesYesYesYes
Coma, stupor and brain damage | 0.728 | YesYesYesYesYesYesYes
Heart valve disorders | 0.809 | YesYesYesYesYesYesYes
Acute myocardial infarction | 0.782 | YesYesYesYesYesYesYesYesYes
Coronary atherosclerosis and other heart disease | 0.832 | YesYesYesYesYesYesYesYes
Pulmonary heart disease | 0.798 | YesYesYesYesYesYes
Cardiac dysrhythmias | 0.874 | YesYesYesYesYesYesYesYesYesYes
Cardiac arrest and ventricular fibrillation | 0.809 | YesYesYesYesYesYes
Congestive heart failure, non-hypertensive | 0.677 | YesYesYesYesYesYesYesYesYesYes
Acute cerebrovascular disease | 0.775 | YesYesYesYesYesYesYesYesYesYes
Peripheral and visceral atherosclerosis | 0.906 | YesYesYesYesYesYesYesYes
Aortic, peripheral and visceral artery aneurysms | 0.866 | YesYesYesYesYesYes
Aortic and peripheral arterial embolism or thrombosis | 0.880 | YesYesYesYesYesYesYes
Other circulatory disease | 0.862 | YesYesYesYesYesYesYes
Pneumonia | 0.810 | YesYesYesYesYesYesYesYesYesYes
Chronic obstructive pulmonary disease and bronchiectasis | 0.778 | YesYesYesYesYesYesYesYes
Aspiration pneumonitis, food/vomitus | 0.718 | YesYesYesYesYesYes
Pleurisy, pneumothorax, pulmonary collapse | 0.834 | YesYesYesYesYesYesYes
Other lower respiratory disease | 0.877 | YesYesYesYesYesYesYesYesYes
Intestinal obstruction without hernia | 0.831 | YesYesYesYesYesYesYes
Diverticulosis and diverticulitis | 0.903 | YesYesYesYesYesYesYesYes
Biliary tract disease | 0.920 | YesYesYesYesYesYesYes
Liver disease, alcohol-related | 0.728 | YesYesYesYesYesYes
Other liver diseases | 0.843 | YesYesYesYesYesYesYesYesYes
Gastrointestinal haemorrhage | 0.812 | YesYesYesYesYesYesYesYesYes
Other gastrointestinal disorders | 0.943 | YesYesYesYesYesYesYesYes
Acute and unspecified renal failure | 0.777 | YesYesYesYesYesYesYesYesYes
Chronic renal failure | 0.881 | YesYesYesYesYesYes
Urinary tract infections | 0.880 | YesYesYesYesYesYesYesYesYes
Fracture of neck of femur (hip) | 0.782 | YesYesYesYesYesYes
Intracranial injury | 0.884 | YesYesYesYesYesYesYesYesYesYes
Complication of device, implant or graft | 0.858 | YesYesYesYesYesYesYes
Complications of surgical procedures or medical care | 0.873 | YesYesYesYesYesYesYesYes
Shock | 0.802 | YesYesYesYesYes
Average ROC curve | 0.845
Overall ROC curve | 0.910

All models included an intercept term.

CCS, Clinical Classification System; LOS, length of stay; ROC, receiver operating characteristic.

Hospitals with a case mix very different from the national average (“non-average” hospitals) were identified, and excluded, as follows: (a) calculate, for each of the diagnostic groups making up the HSMR (those leading to 80% of all deaths nationally), the percentage of expected deaths nationally; (b) calculate the same percentages for each hospital; (c) scale the number of expected deaths up or down by a scaling factor (Sf) for each diagnostic group so that the percentage of expected deaths at each hospital matches the national percentage; (d) scale the observed deaths at each hospital by the same scaling factor for each diagnostic group; (e) use the scaled numbers of observed and expected deaths at each hospital to calculate a “scaled HSMR”; (f) calculate the difference, D, between the normal (unscaled) HSMR and the scaled HSMR. For “average” (non-specialist) hospitals, D tends to be less than 7.5.

Patients' age was calculated as the date of admission minus the date of birth. The age groups used were those of the English Hospital Episode Statistics (HES): <1 year=1, 1–4 years=2, 5–9 years=3, 10–14 years=4, 15–19 years=5, 20–24 years=6, 25–29 years=7, 30–34 years=8, 35–39 years=9, 40–44 years=10, 45–49 years=11, 50–54 years=12, 55–59 years=13, 60–64 years=14, 65–69 years=15, 70–74 years=16, 75–79 years=17, 80–84 years=18, 85–89 years=19, 90+ years=20. The number of days of care was coded into length of stay (LOS) categories: 1 day=1; 2–7 days=2; 8–16 days=3; 17–23 days=4; 24–1000 days=5 (but only LOS up to 365 days was used in the data analysis). The age group, sex, urgency, LOS group, CCS diagnosis, month of admission, social deprivation and year categories were determined for each patient. The source of referral for each patient was coded as: 0=own habitat; 1=nursing/elderly home; 2=born in hospital; 21=hospital—academic/top clinical; 22=hospital—general; 23=hospital—specialised; 24=other care organisations; 29=hospital—unknown.
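The age and LOS banding above can be expressed directly in code (a sketch with band boundaries taken from the text; the function names are illustrative):

```python
def age_group(age_years: int) -> int:
    """HES-style age bands: <1 year -> 1, 1-4 years -> 2, then
    five-year bands up to 85-89 -> 19, and 90+ -> 20."""
    if age_years < 1:
        return 1
    if age_years >= 90:
        return 20
    return age_years // 5 + 2  # 1-4 -> 2, 5-9 -> 3, ..., 85-89 -> 19

def los_group(days: int) -> int:
    """Length-of-stay bands; stays over 365 days were excluded
    from the analysis before banding."""
    if days <= 1:
        return 1
    if days <= 7:
        return 2
    if days <= 16:
        return 3
    if days <= 23:
        return 4
    return 5  # 24 days and over

print(age_group(17), los_group(10))  # 5 3
```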

Details of Dutch HSMR calculations

The statistical performance of the model was measured by the c statistic (area under the receiver operating characteristic curve) for each SMR and for the hospital level HSMR.11 The c statistic is the probability of assigning a greater risk of death to a randomly selected patient who died compared with a randomly selected patient who survived. A value of 0.5 suggests that the model is no better than random chance in predicting death. A value of 1 suggests perfect discrimination. In general, values above 0.75 suggest good discrimination.
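The pairwise-comparison definition of the c statistic given above can be computed directly (a naive O(n²) sketch for illustration, not the method used in the study's software):

```python
def c_statistic(predicted_risks, died):
    """Probability that a randomly selected patient who died was
    assigned a higher predicted risk than a randomly selected
    patient who survived (ties count as 0.5); this equals the
    area under the ROC curve."""
    dead = [r for r, y in zip(predicted_risks, died) if y == 1]
    alive = [r for r, y in zip(predicted_risks, died) if y == 0]
    score = 0.0
    for d in dead:
        for a in alive:
            if d > a:
                score += 1.0
            elif d == a:
                score += 0.5
    return score / (len(dead) * len(alive))

# Toy example: the model ranks one of the two deaths below a survivor
risks = [0.9, 0.8, 0.3, 0.2]
outcomes = [1, 0, 1, 0]
print(c_statistic(risks, outcomes))  # 0.75
```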

Results

In the 2005–2007 data, and for the HSMR CCS groups only, there were 2 363 332 admissions and 90 873 deaths (crude death rate 3.85%). The data quality of 15 hospitals did not fulfil the national registration standards in 2007, so we did not include them in a national comparison. Seven of these 15 hospitals failed the standard on two or more criteria. Six of the 15 hospitals had more than 5% vague diagnoses, eight had less than 33% urgent admissions, and ten had a ratio of comorbidity diagnoses to main diagnoses of <0.2. Another six hospitals were excluded because their patient population differed too much from the national average: four of these had fewer than 100 expected deaths in 2007, and the other two were non-average hospitals in terms of their case mix. A funnel plot of the HSMRs of the remaining 65 hospitals is shown in figure 1.
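The registration-standard screen described above amounts to three threshold checks (a sketch; the thresholds come from the text, the function and parameter names are hypothetical):

```python
def meets_registration_standards(vague_diagnosis_pct: float,
                                 urgent_admission_pct: float,
                                 comorbidity_ratio: float) -> bool:
    """True if a hospital's data pass all three data-quality
    criteria described in the text: at most 5% vague diagnoses,
    at least 33% urgent admissions, and a ratio of comorbidity
    diagnoses to main diagnoses of at least 0.2."""
    return (vague_diagnosis_pct <= 5.0
            and urgent_admission_pct >= 33.0
            and comorbidity_ratio >= 0.2)

print(meets_registration_standards(2.0, 45.0, 0.35))  # True
print(meets_registration_standards(7.5, 45.0, 0.35))  # False
```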
Figure 1

Funnel plot showing HSMR variation, 2005–2007 in Dutch hospitals (excluding 24 hospitals) with 95% and 99.8% control limits.

Hospitals in figure 1 that lie within the control limits are said to exhibit common cause variation; those outside the limits exhibit special cause variation, unlikely to be due to natural random variation (in Shewhart's original terminology). Funnel plots provide a simple and easily understandable way to plot institutional comparisons.12 They have been used to plot anonymised mortality rates by surgeon for paediatric cardiac surgery13 and have been promoted as providing a strong visual indication of divergent performance, with the advantage of displaying actual event rates and allowing an informal check of a relationship between outcome and volume of cases.14

Dutch HSMRs differ widely among hospitals. According to this analysis, the chance of death in the hospital with the highest HSMR is 2.3 times the chance of dying in the hospital with the lowest HSMR, after adjusting for available case-mix factors. The c statistic of the Dutch HSMR model is 0.91, similar to the values found for the other countries. Table 1 also shows the c statistics of all CCS groups: they vary from 0.68 to 0.96. Significant factors determining total hospital mortality were: primary diagnosis, age, sex, admission urgency (urgent/not urgent, equivalent to emergency/elective (planned)), LOS, comorbidity (measured by the Charlson Index),15 area-level social deprivation (from the Dutch Central Office of Statistics), month of admission, type of organisation that made the referral and the CCS subgroup. These factors and their coefficients vary among CCS groups. Table 1 gives the significant factors (p<0.05) for every CCS group.
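The 95% and 99.8% control limits in a funnel plot like figure 1 can be approximated by treating each hospital's observed deaths as Poisson around its expected count (a normal-approximation sketch, not necessarily the exact method used to draw figure 1):

```python
import math

def funnel_limits(expected_deaths, z):
    """Approximate control limits around an HSMR of 100, assuming
    observed deaths ~ Poisson(expected_deaths), so the SMR has a
    standard error of roughly 1/sqrt(expected_deaths) on the ratio
    scale. z = 1.96 gives ~95% limits; z = 3.09 gives ~99.8%."""
    half_width = z / math.sqrt(expected_deaths)
    return 100.0 * (1.0 - half_width), 100.0 * (1.0 + half_width)

# A hospital expecting 400 deaths: 95% limits
lo, hi = funnel_limits(400.0, 1.96)
print(round(lo, 1), round(hi, 1))  # 90.2 109.8
```

The limits narrow as the expected count grows, which is why the plot takes its characteristic funnel shape: small hospitals must deviate much further from 100 before their HSMR is flagged.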

Discussion

HSMRs have been calculated for The Netherlands in a manner similar to that used in several other countries. Currently, almost every Dutch hospital has asked for its HSMR without any pressure from the government or Healthcare Inspectorate. In addition, more than 50 hospitals have ordered a “Hospital Mortality Profile” over the last two years—a brief report giving the HSMR of a hospital broken down into its constituent diagnostic group SMRs and by age group, urgency and length of stay. The following applications of HSMRs are used in Dutch hospitals:

1. The Hospital Mortality Profile, to identify high- and low-risk “areas” within the hospital. Such a retrospective profile enables more directed intervention for patient safety.
2. Dr Foster's RTM, a tool used in 16 hospitals for early warning, continuous monitoring and analysis of their mortality by diagnosis and procedure, using the same risk models that underpin the HSMR. Hospitals use this tool to follow their own progress in reducing patient safety risks.
3. HSMRs combined with clinical audits. Some hospitals drill down to the mortality risk of individual admitted patients and so select “unexpected cases”: patients who died in hospital despite a relatively low predicted risk of dying there. These cases are perhaps the most useful for case note review and complication analysis and can aid improvement initiatives.

Our analysis of data completeness found no missing values for date of admission, date of discharge, age, sex, urgency of admission or postal code (used for social deprivation). However, for the recording of secondary diagnoses in particular, we cannot tell whether no comorbidity was present or whether comorbidity was simply not recorded. Miscoding may also affect the HSMR. The LMR data use a limited number of clinical variables, but for the HSMRs examined in this study, the discrimination of the risk prediction model was very good.
A recent UK study concluded that, at least for three common procedures, risk prediction with discrimination comparable to that obtained from clinical databases is possible using routinely collected administrative data.16 Although simplified models of risk prediction might be as effective in predicting outcome as some complex models currently in use,17 18 further improvements to the case-mix model are being evaluated. The number of previous admissions within a given time period, which requires linking admissions of the same patient, could be of potential use. Other features of the healthcare system that could potentially affect hospital mortality ratios include admission thresholds, the proportion of people in the area dying in hospital, discharge policies and underlying disease rates in the catchment population. It is unclear, however, whether and how one should measure and adjust for these factors. A related question is whether length of stay and procedure group are factors that are part of the case mix or that reflect quality: both are related to the patient's illness but also to treatment. Based on experience in other countries, the introduction of HSMRs raises various questions.19–22 Most recently, attention has focused on the so-called “constant risk fallacy”,23 in which some SMRs (for example, those for some Charlson scores) differ from the overall HSMR. One paper suggests at least two mechanisms that might contribute: the first involves differential measurement error, and the second involves inconsistent proxy measures of risk.24 Measurement error, including poor coding, will have an impact on HSMRs, and this is the first thing a hospital should check. The variation in SMRs can be interpreted in two ways, either as bias or as real differences in risk. Either way, further investigation using local data sources and case note reviews, rather than more statistical analysis, is suggested.
Another frequently heard query is whether the methodology should correct for regional variation in health conditions or in the organisation and performance of healthcare facilities adjacent to the hospital. A multiple regression analysis has been developed for the Dutch HSMRs to find the factors that best explain the variation of HSMRs throughout The Netherlands.25 As the dataset is extended, further yearly refinements can be made to the models for the yearly releases of the HSMRs and SMRs.

The HSMR for The Netherlands appears to be a statistically robust model that can be used as an indicator of hospital deaths to help Dutch hospitals improve their quality of care. The statistical model is robust enough to include in one analysis all hospitals, varying in size and function, that have more than about 100 deaths per year, an average case mix and good-quality data. However, random variation and data quality issues need to be considered when interpreting the results. HSMRs can be used to highlight hospitals that have significantly high mortality, which may merit further investigation by the hospitals concerned. Furthermore, the impact of interventions designed to reduce mortality can be tracked using this measure. The Dutch Ministry of Health26 27 has put the HSMR high on its quality agenda and commissioned RIVM (the National Institute for Public Health and the Environment) to use HSMRs as one of the performance indicators in the Dutch Health Care Performance report. In the future, international comparisons might also be possible.
References (19 in total)

1.  Funnel plots for institutional comparison.

Authors:  D Spiegelhalter
Journal:  Qual Saf Health Care       Date:  2002-12

2. (Review) Learning from Bristol: report of the public inquiry into children's heart surgery at Bristol Royal Infirmary 1984-1995.

Authors:  G M Teasdale
Journal:  Br J Neurosurg       Date:  2002-06       Impact factor: 1.596

3.  [Practice makes perfect. The favourable effect of experience on the outcome of care].

Authors:  H Obertop
Journal:  Ned Tijdschr Geneeskd       Date:  2004-07-03

4.  A new method of classifying prognostic comorbidity in longitudinal studies: development and validation.

Authors:  M E Charlson; P Pompei; K L Ales; C R MacKenzie
Journal:  J Chronic Dis       Date:  1987

5.  The risks of risk adjustment.

Authors:  L I Iezzoni
Journal:  JAMA       Date:  1997-11-19       Impact factor: 56.272

6.  The Surgical Risk Scale as an improved tool for risk-adjusted analysis in comparative surgical audit.

Authors:  R Sutton; S Bann; M Brooks; S Sarin
Journal:  Br J Surg       Date:  2002-06       Impact factor: 6.939

7.  Mortality rates after surgery for congenital heart defects in children and surgeons' performance.

Authors:  J Stark; S Gallivan; J Lovegrove; J R Hamilton; J L Monro; J C Pollock; K G Watterson
Journal:  Lancet       Date:  2000-03-18       Impact factor: 79.321

8.  [Determinants of hospital mortality in surgical patients aged 80 years and over].

Authors:  E Beenen; M P Simons; A C Vahl
Journal:  Ned Tijdschr Geneeskd       Date:  2003-09-27

9.  [Intensive care medicine in the Netherlands, 1997-2001. I. Patient population and treatment outcome].

Authors:  E de Jonge; R J Bosman; P H van der Voort; H H Korsten; G J Scheffer; N F de Keizer
Journal:  Ned Tijdschr Geneeskd       Date:  2003-05-24

10.  Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals.

Authors:  Mohammed A Mohammed; Jonathan J Deeks; Alan Girling; Gavin Rudge; Martin Carmalt; Andrew J Stevens; Richard J Lilford
Journal:  BMJ       Date:  2009-03-18
