
Impact of Risk Adjustment Using Clinical vs Administrative Data on Hospital Sepsis Mortality Comparisons.

Chanu Rhee1,2, Zhonghe Li1, Rui Wang1, Yue Song1,3, Sameer S Kadri4, Edward J Septimus1,5, Huai-Chun Chen6, David Fram6, Robert Jin1, Russell Poland1,7, Kenneth Sands1,7, Michael Klompas1,2.   

Abstract

BACKGROUND: A reliable risk-adjusted sepsis outcome measure could complement current national process metrics by identifying outlier hospitals and catalyzing additional improvements in care. However, it is unclear whether integrating clinical data into risk adjustment models identifies similar high- and low-performing hospitals compared with administrative data alone, which are simpler to acquire and analyze.
METHODS: We ranked 200 US hospitals by their Centers for Disease Control and Prevention Adult Sepsis Event (ASE) mortality rates and assessed how rankings changed after applying (1) an administrative risk adjustment model incorporating demographics, comorbidities, and codes for severe illness and (2) an integrated clinical and administrative model replacing severity-of-illness codes with laboratory results, vasopressors, and mechanical ventilation. We assessed agreement between hospitals' risk-adjusted ASE mortality rates when ranked into quartiles using weighted kappa statistics (κ).
RESULTS: The cohort included 4 009 631 hospitalizations, of which 245 808 met ASE criteria. Risk adjustment had a large effect on rankings: 22/50 hospitals (44%) in the worst quartile using crude mortality rates shifted into better quartiles after administrative risk adjustment, and a further 21/50 (42%) hospitals in the worst quartile using administrative risk adjustment shifted to better quartiles after incorporating clinical data. Conversely, 14/50 (28%) hospitals in the best quartile using administrative risk adjustment shifted to worse quartiles with clinical data. Overall agreement between hospital quartile rankings when risk-adjusted using administrative vs clinical data was moderate (κ = 0.55).
CONCLUSIONS: Incorporating clinical data into risk adjustment substantially changes rankings of hospitals' sepsis mortality rates compared with using administrative data alone. Comprehensive risk adjustment using both administrative and clinical data is necessary before comparing hospitals by sepsis mortality rates.
© The Author(s) 2020. Published by Oxford University Press on behalf of Infectious Diseases Society of America.

Keywords:  Adult Sepsis Event; hospital comparisons; outcome measure; risk adjustment; sepsis

Year:  2020        PMID: 32617377      PMCID: PMC7320830          DOI: 10.1093/ofid/ofaa213

Source DB:  PubMed          Journal:  Open Forum Infect Dis        ISSN: 2328-8957            Impact factor:   3.835


Sepsis is a leading cause of death, disability, and cost to the health care system [1, 2]. The high burden of sepsis has spurred national efforts to improve timely treatment and bundle adherence, including the Centers for Medicare & Medicaid Services (CMS) Severe Sepsis/Septic Shock Early Management Bundle (SEP-1) that was implemented in 2015 [3, 4]. CMS is now expanding beyond process metrics and developing a sepsis outcome measure for potential use in future quality and payment programs [5]. A reliable sepsis outcome measure could catalyze additional improvements in sepsis care by identifying best practices, flagging poor performers, and motivating continuous improvement [6].

Prior work has demonstrated substantial variability in diagnosis and coding for sepsis and organ dysfunction among clinicians and hospitals, suggesting that administrative data are not suitable for anchoring comparisons of hospitals’ sepsis outcomes [7, 8]. In 2018, the US Centers for Disease Control and Prevention (CDC) released the “Adult Sepsis Event” (ASE) surveillance definition that uses clinical data routinely available in electronic health record systems (EHRs) to identify sepsis [9]. The ASE is modeled after the International Consensus Sepsis-3 definition but is optimized for automated surveillance across a wide array of hospitals and settings by using streamlined infection and organ dysfunction criteria that can be easily applied using data commonly found in most hospitals’ EHRs [10, 11]. The ASE therefore provides an objective and consistent way to identify sepsis cases across hospitals.

However, robust risk adjustment is still required to make fair comparisons of hospitals’ sepsis mortality rates, as sepsis mortality is influenced by many factors beyond the care rendered, including patients’ demographics, comorbidities, severity of illness, and site of infection [12].
As an example, an elderly patient with end-stage cancer presenting with septic shock from a perforated bowel has a substantially higher risk of mortality than a young woman presenting with hypotension and acute kidney injury from pyelonephritis, regardless of quality of care. We recently developed risk adjustment models for ASE that utilize EHR clinical data and showed that these have better discrimination for mortality and calibration than models based on administrative data alone [13]. It is unclear, however, whether integrating clinical data into risk adjustment models adds substantial marginal value for hospital comparisons over adjustments using administrative data alone, which are simpler to acquire and analyze. If risk adjustment using administrative models identifies similar high- or low-performing hospitals compared with EHR models, then there is little basis to justify the added complexity of integrating clinical data. On the other hand, if integrating clinical data with administrative data generates substantially different rankings, this would suggest that the extra complexity is necessary for credible benchmarking. In this study, we assessed the impact of risk adjustment on relative rankings of hospitals’ ASE mortality rates when applying administrative vs administrative plus clinical risk adjustment models. We further compared risk-adjusted hospital rankings when sepsis was defined using administrative codes vs ASE clinical criteria.

METHODS

Study Design, Data Sources, and Sepsis Case Definition

We conducted a retrospective study of adults aged ≥20 years who were admitted in calendar years 2013 or 2014 to 200 US acute care hospitals drawn from 3 data sets: Cerner HealthFacts, HCA Healthcare, and the Institute of Health Metrics. These data sets were previously used in a national epidemiologic study of sepsis and include a geographically diverse mix of academic and community hospitals [1]. We identified sepsis using the CDC’s ASE definition, which identifies hospitalizations with presumed serious infection (blood culture order and new antibiotics continued for ≥4 days or until ≤1 day before death, discharge to hospice, or transfer to another hospital) and concurrent organ dysfunction (initiation of vasopressors or mechanical ventilation, elevated lactate, increase in baseline creatinine or total bilirubin, or decrease in baseline platelets) [9]. We excluded hospitals with <50 ASE cases during the study period due to the uncertainty associated with rates estimated from low-volume hospitals, similar to prior studies comparing hospital sepsis outcomes [8, 14]. We further excluded encounters with missing discharge dispositions and those with International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) instead of ICD-9-CM codes [13].
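As a rough illustration, the ASE case-finding logic described above can be sketched as follows. The field names and thresholds below are simplified placeholders, not the CDC toolkit's exact specification or this study's implementation:

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    """Simplified per-hospitalization summary (illustrative field names)."""
    blood_culture_ordered: bool
    new_antibiotic_days: int          # consecutive qualifying new-antibiotic days
    censored_early: bool              # died, went to hospice, or transferred while on antibiotics
    new_vasopressor: bool
    new_mechanical_ventilation: bool
    lactate_max: float                # mmol/L
    creatinine_rise_ratio: float      # peak value / baseline
    bilirubin_rise_ratio: float       # peak value / baseline
    platelet_drop_ratio: float        # nadir / baseline

def presumed_serious_infection(e: Encounter) -> bool:
    # Blood culture obtained plus >=4 qualifying antibiotic days
    # (fewer allowed when the course was censored by death, hospice, or transfer).
    return e.blood_culture_ordered and (e.new_antibiotic_days >= 4 or e.censored_early)

def concurrent_organ_dysfunction(e: Encounter) -> bool:
    # Any one qualifying organ dysfunction (approximate, illustrative thresholds).
    return (e.new_vasopressor
            or e.new_mechanical_ventilation
            or e.lactate_max >= 2.0
            or e.creatinine_rise_ratio >= 2.0
            or e.bilirubin_rise_ratio >= 2.0
            or e.platelet_drop_ratio <= 0.5)

def meets_ase(e: Encounter) -> bool:
    """Adult Sepsis Event = presumed serious infection + concurrent organ dysfunction."""
    return presumed_serious_infection(e) and concurrent_organ_dysfunction(e)
```

Both components must be met within the same encounter window, which is what makes the definition robust to hospitals' differing diagnosis and coding habits.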

Risk Adjustment Methods and Analyses

Our outcome of interest was all-cause in-hospital mortality. In our primary analysis, we applied 2 sets of risk adjustment methods to patients identified by ASE criteria (Table 1). First, we applied an administrative model developed by Ford et al. that incorporates demographics, comorbidities, and ICD-9-CM codes indicative of severity of illness on admission (mechanical ventilation, shock, hemodialysis, and intensive care unit [ICU] admission) [15]. In a previous analysis, this administrative model achieved an area under the receiver operating characteristics curve (AUROC) of 0.776 (95% CI, 0.770–0.783) in the Cerner data set and 0.771 (95% CI, 0.768–0.773) in the HCA Healthcare data set, with good calibration at low deciles of baseline risk but worse calibration with higher-risk patients [13]. Second, we applied a clinical model that uses similar administrative data (demographics and comorbidities) but also adds site of infection (by ICD-9-CM codes) and days in the hospital until sepsis onset and uses physiologic data rather than codes for severity of illness (laboratory data from chemistries, complete blood cell counts, and liver function tests; vasopressors; and mechanical ventilation). This model had an AUROC of 0.826 (95% CI, 0.820–0.831) in Cerner and 0.827 (95% CI, 0.824–0.829) in the HCA Healthcare data set, with good calibration across all baseline risk deciles [13]. Missing values were imputed using normal values, as we had previously shown that normal value imputation and multiple imputation generate similar model performance [13].
Table 1.

Components of Adult Sepsis Event Risk Adjustment Models: Administrative Model vs Integrated Administrative and Clinical Model

Model Components                                 Administrative Model^a   Integrated Administrative and Clinical Model^b
Administrative data
 Demographics                                              ✓                        ✓
 Comorbidities                                             ✓                        ✓
 Mechanical ventilation                                    ✓                        ✓
 Shock code                                                ✓                        –
 Hemodialysis code                                         ✓                        –
 ICU admission                                             ✓                        ✓
 Infection site                                            –                        ✓
Clinical data
 Vasopressors                                              –                        ✓
 Laboratory data: chemistries, complete
  blood cell counts, liver function tests, lactate         –                        ✓
 Days to sepsis onset                                      –                        ✓

Abbreviations: ICU, intensive care unit.

aAdministrative data were based on encounter data and ICD-9-CM codes.

bIn the integrated administrative and clinical model, mechanical ventilation, ICU admission, vasopressors, and laboratory data within ±1 calendar day of the day of sepsis onset were used. The day of sepsis onset was defined as the earlier of the day the blood culture was drawn and the first qualifying antibiotic day. The worst laboratory values within that window were used. Missing values were assumed to be normal.
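The integrated model's handling of missing laboratory values, and the way a fitted model yields a per-case predicted mortality, can be sketched as below. The variable names, "normal" fill-in values, and coefficients are hypothetical illustrations, not the study's fitted model (which is described in reference [13]):

```python
import math

# Illustrative "normal" values used to fill in missing labs; the study found that
# normal-value imputation and multiple imputation gave similar model performance.
NORMAL_VALUES = {"lactate": 1.0, "creatinine": 0.8, "platelets": 250.0}

def impute_normal(labs: dict) -> dict:
    """Return a complete lab panel, substituting a normal value where data are missing."""
    return {k: labs[k] if labs.get(k) is not None else v
            for k, v in NORMAL_VALUES.items()}

def predicted_mortality(features: dict, coef: dict, intercept: float) -> float:
    """Logistic-model probability of in-hospital death for one ASE case
    (hypothetical coefficients, for illustration only)."""
    z = intercept + sum(coef[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Summing these predicted probabilities over a hospital's ASE cases gives that hospital's expected number of deaths, the denominator of its standardized mortality ratio.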

We applied risk adjustment using both methods to calculate each hospital’s expected number of deaths, then divided the hospital-level observed number of deaths by the expected number of deaths to obtain risk-adjusted standardized mortality ratios for each hospital [16]. Using this method, lower ratios indicate better risk-adjusted outcomes than higher ratios. We ranked all hospitals by sepsis-standardized mortality ratios and calculated the number of hospitals in the worst quartile using crude mortality rates that shifted to better quartiles using administrative risk adjustment, and then the number of administratively adjusted hospitals in the worst quartile that shifted to better quartiles using clinical risk adjustment. We further assessed the agreement between hospitals’ crude, administrative-adjusted, and clinical-adjusted relative rankings, divided into quartiles within the study cohort, using a weighted kappa statistic (κ). A weighted κ is used to calculate agreement for ordinal ratings and gives “partial” credit for close but imperfect agreement [17]. The correlation between hospitals’ standardized mortality ratios using different risk adjustment methods was also assessed using the Spearman correlation coefficient (ρ).
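The ranking and agreement computations described above can be sketched as follows. The study's analyses were run in SAS 9.4; this Python sketch with made-up inputs is for illustration only:

```python
def standardized_mortality_ratios(observed, expected):
    """Observed deaths / model-expected deaths per hospital (lower = better)."""
    return [o / e for o, e in zip(observed, expected)]

def quartile_ranks(values):
    """Assign quartile 1 (best, lowest values) through 4 (worst) by ascending rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    ranks = [0] * n
    for pos, i in enumerate(order):
        ranks[i] = min(4, pos * 4 // n + 1)
    return ranks

def linear_weighted_kappa(r1, r2, k=4):
    """Weighted kappa with linear weights: partial credit for near-agreement."""
    n = len(r1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[a - 1][b - 1] += 1.0 / n
    row = [sum(obs[i]) for i in range(k)]                       # marginal of rating 1
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # marginal of rating 2
    w = [[1.0 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    p_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    p_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Comparing `quartile_ranks` of the SMRs from two risk adjustment methods with `linear_weighted_kappa` reproduces the kind of agreement statistic reported in the Results.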
As per common convention, we a priori classified agreement and correlation as very strong for κ and ρ values ≥0.90, strong from 0.70 to 0.89, moderate from 0.40 to 0.69, weak from 0.10 to 0.39, and negligible from 0.00 to 0.10 [18]. We also explored whether several basic hospital characteristics, including size (<200 beds, 200–499 beds, ≥500 beds), region (Northeast, Midwest, South, West), and teaching status, were associated with a change in quartile ranking based on clinical vs administrative risk adjustment using a logistic regression model [19].

In addition to comparing correlation and agreement between clinical-adjusted vs administrative-adjusted ASE mortality, we compared hospital-standardized mortality ratios using administrative sepsis definitions after risk adjustment using the administrative model described above. We reasoned that if using administrative data for both sepsis identification and risk adjustment yields similar results compared with using ASE criteria with clinical risk adjustment, this would argue against the need to use clinical data at all for hospital comparisons. We used 2 administrative definitions for these comparisons: (1) explicit severe sepsis (ICD-9-CM code 995.92) or septic shock (785.52) codes and (2) either implicit codes for infection and organ dysfunction or explicit severe sepsis/septic shock codes alone (modified Angus criteria) [20, 21]. Compared with implicit sepsis codes, explicit sepsis codes tend to have lower sensitivity and higher positive predictive values and identify a more severely ill cohort of patients [1, 21, 22]. Analyses were conducted using SAS, version 9.4 (SAS Institute, Cary, NC, USA). The study was approved with a waiver of informed consent by the Institutional Review Board at Harvard Pilgrim Health Care Institute.
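The a priori strength categories can be expressed as a small lookup; the thresholds are exactly those listed above:

```python
def strength(value: float) -> str:
    """Classify a kappa or Spearman rho per the study's a priori convention [18]."""
    v = abs(value)
    if v >= 0.90:
        return "very strong"
    if v >= 0.70:
        return "strong"
    if v >= 0.40:
        return "moderate"
    if v >= 0.10:
        return "weak"
    return "negligible"
```

Applied to the study's headline results, ρ = 0.72 classifies as "strong" correlation while κ = 0.55 classifies as only "moderate" agreement.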

RESULTS

Study Cohort Hospitals and Case Counts

The study cohort included 200 hospitals. Most hospitals were medium-sized (n = 105, 52.5%), nonteaching (n = 128, 64%), and from the South (n = 120, 60%) (Table 2). There were 4 009 631 adult patients admitted to these hospitals during 2013–2014, including 245 808 sepsis hospitalizations by ASE criteria (median [interquartile range {IQR}], 977.5 [489-1786] per hospital). The crude mortality rate of patients meeting ASE criteria was 15.5% and ranged from 5.4% to 34.8% across hospitals (median [IQR], 15.4% [12.7%–17.8%]).
Table 2.

Characteristics of Study Hospitals

CharacteristicNo. (%) or Median (IQR)
Hospital size
 Small (<200 beds)78 (39.0)
 Medium (200–499 beds)105 (52.5)
 Large (≥500 beds)17 (8.5)
Region
 Northeast19 (9.5)
 Midwest18 (9.0)
 South120 (60.0)
 West43 (21.5)
Teaching status
 Teaching72 (36.0)
 Nonteaching128 (64.0)
Case counts, 2013–2014
 Hospitalizations17 197.5 (9208.5–28 528.5)
 Sepsis cases by Adult Sepsis Event criteria977.5 (489–1786)
 Explicit sepsis cases472.5 (228–735)
 Implicit sepsis cases1924.5 (998–3304)

Abbreviation: IQR, interquartile range.


Impact of Risk Adjustment Using Administrative Data vs Integrated Clinical Data

Risk adjustment had a large effect on rankings: 22/50 hospitals (44%) in the worst quartile using crude mortality rates shifted into better quartiles after administrative risk adjustment (Figure 1A). A further 21/50 (42%) hospitals in the worst quartile using administrative risk adjustment shifted to better quartiles after incorporating clinical data (Figure 1B). Conversely, 17/50 (34%) hospitals in the best quartile using crude mortality rates shifted to worse quartiles after adjusting for administrative data, and a further 14/50 (28%) hospitals in the best quartile per administrative risk adjustment shifted to worse quartiles after integrating clinical data. The correlation between hospital standardized mortality ratios when risk-adjusting using clinical vs administrative data was strong (ρ, 0.72; 95% CI, 0.65–0.78) (Figure 2A), but the overall agreement between hospital quartile rankings was only moderate (κ, 0.55; 95% CI, 0.47–0.63). None of the hospital characteristics we examined was significantly associated with a change in quartile rankings based on clinical vs administrative risk adjustment.
Figure 1.

Concordance of hospital Centers for Disease Control and Prevention Adult Sepsis Event sepsis mortality rates when ranked into quartiles: (A) unadjusted vs risk-adjusted by administrative data, (B) risk-adjusted using administrative data vs integrated clinical data. The figure shows the impact of risk adjustment on hospitals’ observed Adult Sepsis Event mortality rankings. Bubble sizes are proportional to the number of hospitals in each matched quartile. The actual number of hospitals in each category is denoted within the bubbles. The cohort included 200 hospitals. Lower quartiles indicate better performance (ie, quartile 1 = lowest sepsis mortality rates, quartile 4 = highest mortality rates). The bubbles in black, connected by the dotted lines, indicate where all hospitals would lie if concordance was perfect between the various comparisons. Red bubbles below the dotted line indicate cases in which hospitals’ unadjusted sepsis mortality rankings shift into better quartiles after risk adjustment by the administrative model (Figure 1A), or in which hospitals’ administrative risk-adjusted mortality rankings shift into better quartiles after risk-adjustment by the integrated administrative and clinical model (Figure 1B). Bubbles in green above the dotted line indicate the opposite. For example, Figure 1A shows that 22 (18 + 4) hospitals that were ranked in the worst quartile of unadjusted sepsis mortality rates shifted to better quartiles after risk adjustment using the administrative model. Figure 1B shows that 21 (14 + 7) hospitals in the worst quartile of sepsis mortality after risk adjustment by the administrative model shifted to better quartiles after risk adjustment by the integrated administrative and clinical model.

Figure 2.

Correlation between hospital standardized sepsis mortality ratios: (A) Centers for Disease Control and Prevention (CDC) Adult Sepsis Event risk-adjusted using clinical vs administrative data, (B) clinical-adjusted CDC Adult Sepsis Event mortality ratios vs administrative-adjusted sepsis diagnosis codes. “Explicit” sepsis codes include International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for severe sepsis (995.92) or septic shock (785.52). “Implicit” sepsis codes include combinations of ICD-9-CM codes for infection and organ dysfunction or explicit sepsis codes alone.


Comparison of Hospital Sepsis Mortality Rates by Adult Sepsis Events vs Administrative Sepsis Definitions

The correlation between hospitals' unadjusted sepsis mortality rates using ASE criteria and those using administrative definitions was moderate (explicit sepsis codes: ρ, 0.61; 95% CI, 0.51–0.69; implicit codes: ρ, 0.69; 95% CI, 0.61–0.76). The correlation was similar after risk adjustment using clinical data for ASE criteria and administrative data for administrative definitions (explicit sepsis codes: ρ, 0.68; 95% CI, 0.59–0.75; implicit codes: ρ, 0.70; 95% CI, 0.62–0.76) (Figure 2B). Agreement between unadjusted hospital mortality quartile rankings by ASE criteria and administrative definitions was moderate (explicit sepsis codes: κ, 0.40; 95% CI, 0.31–0.49; implicit codes: κ, 0.50; 95% CI, 0.41–0.58) and remained moderate after risk adjustment using clinical data for ASE and administrative data for administrative definitions (explicit codes: κ, 0.52; 95% CI, 0.43–0.61; implicit codes: κ, 0.54; 95% CI, 0.46–0.63). Eighteen of 50 (36%) hospitals in the worst quartile by administrative-adjusted explicit sepsis codes shifted to better quartiles by clinical-adjusted ASE criteria, while 17/50 (34%) hospitals in the best quartile shifted to worse quartiles. Similarly, 19/50 (38%) hospitals in the worst quartile by administrative-adjusted implicit sepsis codes shifted to better quartiles by clinical-adjusted ASE criteria, while 19/50 (38%) hospitals in the best quartile shifted to worse quartiles.

DISCUSSION

In this study, we demonstrated that risk adjustment has a large impact on hospitals’ rankings for sepsis mortality rates, as 44% of hospitals in the worst quartile using crude data shifted to better quartiles after administrative risk adjustment. Moreover, adding clinical data to administrative models and replacing severity-of-illness codes with physiologic data had a further large impact on hospitals’ sepsis-mortality rankings: 42% of hospitals deemed to be in the worst quartile of performance by administrative risk adjustment shifted to better quartiles by clinical risk adjustment. Finally, hospital rankings based entirely on administrative data for sepsis identification and risk adjustment had only moderate agreement with rankings by Adult Sepsis Event criteria after clinical risk adjustment.

Sepsis care and outcomes are subjects of intense interest for regulators, payors, quality advocates, and hospitals. CMS and many states now require hospitals to publicly report compliance with sepsis management bundles. Ultimately, however, the goal of sepsis management bundles is to improve sepsis outcomes, and CMS is accordingly working to develop an outcome measure that can be used to further compare and contrast hospitals [5]. Prior studies have demonstrated substantial variability across hospitals in the accuracy of administrative definitions of sepsis, underscoring the importance of anchoring hospital sepsis comparisons to consistent clinical criteria such as the CDC’s Adult Sepsis Event definition [8, 23]. These analyses, however, left unanswered the question of how to credibly risk-adjust hospitals’ sepsis-associated mortality rates.

The large effect of risk adjustment using administrative data on hospitals’ sepsis mortality rankings is unsurprising given that there is considerable variation between hospitals in the complexity and level of illness of patients they manage [24].
However, our finding that risk adjustment using clinical vs administrative data alone leads to substantial further changes in hospital rankings has important implications for the operationalization of a sepsis outcome measure. These findings are concordant with a recent analysis that demonstrated major discrepancies in mortality trends for patients hospitalized with heart failure and pneumonia when risk-adjusting using administrative vs clinical risk adjustment variables [25]. Differences in hospital rankings for surgical site infection rates have also been demonstrated when risk-adjusting using clinical vs administrative data [26]. Our findings contrast with a study by Darby et al. that developed an administrative risk adjustment model for sepsis using data from Pennsylvania and showed that correlation with risk-adjusted mortality rates was very high when adding clinical data, with minimal change in observed hospital performance [27]. Their approach differs from ours, however, as the Darby analysis used implicit sepsis codes rather than clinical criteria to identify sepsis and added laboratory values on admission to an administrative model that already captured physiologic severity of illness through organ failure codes. In contrast, our clinical model replaced all physiologic severity-of-illness codes in the administrative model with an array of clinical data, including laboratory results, days to sepsis onset, vasopressor administration, and need for mechanical ventilation on the day of sepsis onset. This is an important distinction, as prior work has demonstrated substantial variability in hospitals’ threshold for diagnosing and coding for organ dysfunction [7]. 
The practical impact on hospital profiling that we observed when risk-adjusting CDC clinical criteria with the administrative vs clinical model is perhaps surprising, as the administrative model discriminated mortality nearly as well in these data sets, with AUROC values of ~0.77 compared with 0.82 for the clinical model [13]. This suggests that model discrimination is not the only important factor when considering the utility of risk adjustment models; researchers and policy makers also need to take into account the concordance of simpler vs more sophisticated models in identifying outliers. Interestingly, none of the basic hospital characteristics that we examined in our data set predicted changes in rankings when using clinical risk adjustment.

Our study has important limitations. First, we used a convenience sample of hospitals, which may limit the generalizability of our findings. However, our cohort included a large number of geographically diverse academic and community hospitals of varying sizes. Second, we did not perform direct comparisons with other existing ICU severity-of-illness models that use clinical data, such as APACHE or SAPS, as some of the data needed to calculate these scores are not available in many hospitals’ EHR data sets. Furthermore, many patients with sepsis are treated outside of the ICU, and focusing only on ICU populations could create bias due to variability in hospitals’ ICU bed capacities and thresholds to admit to the ICU [28, 29]. Third, we did not have data on mortality that might occur shortly after hospital discharge and so focused only on in-hospital death as an outcome.
Fourth, based on data availability, we only examined basic hospital characteristics and their associations with changes in hospital rankings when using clinical vs administrative risk adjustment, and we cannot rule out the importance of other factors such as hospital ownership, specific diagnosis and coding practices within health care systems, safety net status, and more. Lastly, the applicability of our findings in the ICD-10 era is unknown and warrants additional research. It is unlikely, however, that the shift from ICD-9 to ICD-10 decreased variation in coding given ICD-10’s greater complexity and use of multidirectional mappings [30, 31].

In conclusion, risk adjustment for sepsis mortality substantially changes hospitals’ relative rankings, and integrating clinical data into risk adjustment models generates very different rankings compared with using administrative data alone. Comprehensive risk adjustment using both administrative and clinical data is necessary before comparing hospitals’ sepsis mortality rates.