E L Hannan, M J Racz, J G Jollis, E D Peterson. Department of Health Policy and Management, School of Public Health, State University of New York at Albany 1244-3456, USA.
Abstract
OBJECTIVES: To assess the relative abilities of clinical and administrative data to predict mortality and to assess hospital quality of care for CABG surgery patients.
DATA SOURCES/STUDY SETTING: 1991-1992 data from New York's Cardiac Surgery Reporting System (clinical data) and HCFA's MEDPAR file (administrative data).
STUDY DESIGN/SETTING/SAMPLE: An observational study that identifies significant risk factors for in-hospital mortality and risk-adjusts hospital mortality rates using these variables. The setting was all 31 hospitals in New York State in which CABG surgery was performed in 1991-1992. The sample comprised 13,577 patients undergoing isolated CABG surgery who could be matched in the two databases.
MAIN OUTCOME MEASURES: Hospital risk-adjusted mortality rates, identification of "outlier" hospitals, and the discrimination and calibration of the statistical models.
PRINCIPAL FINDINGS: Part of the discriminatory power of the administrative statistical models resulted from the miscoding of postoperative complications as comorbidities. Removing these complications degraded the models' C indexes (from C = .78 to C = .71 and C = .73). Provider performance assessments also changed considerably when complications of care were distinguished from comorbidities. Adding a small number of clinical data elements considerably improved the fit of the administrative models. Further, a clinical model based on Medicare CABG patients identified only three outlier hospitals, whereas a clinical model for all CABG patients identified eight.
CONCLUSIONS: If administrative databases are used in outcomes research, (1) efforts to distinguish complications of care from comorbidities should be undertaken, (2) much more accurate assessments may be obtained by appending a limited number of clinical data elements to administrative data before assessing outcomes, and (3) Medicare data may be misleading because they do not reflect outcomes for all patients.
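For context, the C index cited in the findings is the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch of the computation, using hypothetical predicted risks rather than the study's data:

```python
# Concordance (C) index: the probability that a randomly chosen event case
# receives a higher predicted risk than a randomly chosen non-event case.
# Ties in predicted risk count as half-concordant.
def c_index(predicted_risk, died):
    pairs = concordant = 0.0
    for p_i, d_i in zip(predicted_risk, died):
        for p_j, d_j in zip(predicted_risk, died):
            if d_i == 1 and d_j == 0:  # compare each death with each survivor
                pairs += 1
                if p_i > p_j:
                    concordant += 1
                elif p_i == p_j:
                    concordant += 0.5
    return concordant / pairs

# Hypothetical predicted in-hospital mortality risks and outcomes (1 = died)
risks = [0.02, 0.10, 0.35, 0.05, 0.60, 0.08]
deaths = [0, 0, 1, 0, 1, 0]
print(round(c_index(risks, deaths), 2))  # -> 1.0 (every death outranks every survivor)
```

A C index of .5 indicates no better than chance discrimination, and 1.0 perfect discrimination, so the drop from .78 to .71-.73 after removing miscoded complications is a meaningful loss.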
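The hospital risk-adjusted mortality rates compared in this study are conventionally computed as (observed deaths / expected deaths) × statewide rate, with a hospital flagged as an outlier when a confidence interval for its adjusted rate excludes the statewide rate. A hypothetical sketch of that convention (illustrative numbers and a simple Poisson-based interval, not the study's actual method or data):

```python
import math

def risk_adjusted_rate(observed_deaths, expected_deaths, statewide_rate):
    """RAMR = (observed / expected) * statewide rate, with an approximate
    95% interval treating observed deaths as a Poisson count."""
    ramr = (observed_deaths / expected_deaths) * statewide_rate
    se = math.sqrt(observed_deaths) if observed_deaths > 0 else 1.0
    lo = max((observed_deaths - 1.96 * se) / expected_deaths * statewide_rate, 0.0)
    hi = (observed_deaths + 1.96 * se) / expected_deaths * statewide_rate
    return ramr, lo, hi

# Hypothetical hospital: 18 observed deaths vs. 10.0 expected from the
# risk model, against a statewide in-hospital mortality rate of 2.5%
ramr, lo, hi = risk_adjusted_rate(18, 10.0, 0.025)
is_outlier = lo > 0.025 or hi < 0.025  # interval excludes statewide rate?
```

Because outlier status depends on both the risk model (which supplies the expected deaths) and the population modeled, a Medicare-only model and an all-patient model can flag different hospitals, as the findings above report.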