Literature DB >> 32478600

Assessing the Impacts of Misclassified Case-Mix Factors on Health Care Provider Profiling: Performance of Dialysis Facilities.

Yi Mu1,2, Andrew I Chin3,4, Abhijit V Kshirsagar5,6, Heejung Bang7.   

Abstract

Quantitative metrics are used to develop profiles of health care institutions, including hospitals, nursing homes, and dialysis clinics. These profiles serve as measures of quality of care, which are used to compare institutions and determine reimbursement, as part of a national effort led by the Center for Medicare and Medicaid Services in the United States. However, there is some concern about how misclassification in case-mix factors, which are typically accounted for in profiling, impacts results. We evaluated the potential effect of misclassification on profiling results, using 20 744 patients from 2740 dialysis facilities in the US Renal Data System. In this case study, we compared 30-day readmission as the profiling outcome measure, using comorbidity data from either the Center for Medicare and Medicaid Services Medical Evidence Report (error-prone) or Medicare claims (more accurate). Although the regression coefficient of the error-prone covariate demonstrated notable bias in simulation, the outcome measure (standardized readmission ratio) and the profiling results were quite robust; for example, a correlation coefficient of 0.99 between standardized readmission ratio estimates. Thus, we conclude that misclassification in case-mix factors did not meaningfully impact overall profiling results. We also identified extreme degrees of case-mix factor misclassification and the magnitude of between-provider variability as 2 factors that can potentially exert enough influence on profile status to move a clinic from one performance category to another (eg, normal to worse performer).


Keywords:  CMS-2728; USRDS; measurement error; medical evidence form; medicare claims; misclassification; profiling

Year:  2020        PMID: 32478600      PMCID: PMC7265077          DOI: 10.1177/0046958020919275

Source DB:  PubMed          Journal:  Inquiry        ISSN: 0046-9580            Impact factor:   1.730


What do we already know about this topic? The reliability of comorbidity data used as case-mix factors in health policy models has been questioned, and the impact of misclassification on profiling has been studied outside dialysis. How does your research contribute to the field? Misclassification in case-mix factors arising from different data sources did not meaningfully impact profiling results in dialysis practice. What are your research's implications toward theory, practice, or policy? The Center for Medicare and Medicaid Services (CMS) may continue to use the current sources of comorbidity data for profiling purposes, but still needs to monitor extreme degrees of case-mix factor misclassification and the magnitude of between-provider variability, which can potentially influence profile status in end-stage renal disease (ESRD).

Introduction

With the availability of increasingly large amounts of patient outcome data and the growing interest in measuring the quality of patient care delivered by health care providers, quantitative metrics have been developed to profile hospitals, dialysis clinics, and even individual providers. Much is at stake for individual facilities as well as organizations, whose profiles are compared against national averages or norms in the United States and may result in reduced reimbursement for services for sub-par performance, increased inspection by regulators, and continuous surveillance for quality assurance.[1,2] Therefore, there is growing interest in ensuring the validity of the metrics, the ascertainment of patient characteristics and comorbidities, and the statistical methods from which these profiles are developed.[3-5] One major concern is the impact of misclassification of case-mix factors, typically used as adjustment variables, on the outcome of interest. In the United States, the majority of end-stage renal disease (ESRD) patients on dialysis are covered by the Center for Medicare and Medicaid Services (CMS), a federal health insurance program. For this population, Medicare claims and the Medical Evidence Report (the CMS-2728 form) represent the 2 primary data sources for comorbidity determination presently used in health care policy and research in ESRD. Two main uses in practice are the Quality Incentive Program (QIP) and epidemiologic research via availability in the US Renal Data System (USRDS). The comorbidity information available on the CMS-2728 form, a data form unique to the ESRD population, is a list of known patient comorbidities at the incidence of dialysis. These data, not meant for direct reimbursement claims, are entered at the dialysis facility by the physician, nurse, or administrative staff based on hospital and ambulatory care medical records. 
Center for Medicare and Medicaid Services (via the University of Michigan Kidney Epidemiology and Cost Center [UM-KECC]) methodologies for profiling the USRDS dialysis facilities are based on the previous year's claims data. Comorbidity assessment from claims data, captured from diagnostic (ICD) and procedure (CPT) codes, is generally considered more reliable than assessment based on information available on the CMS-2728 form, which is required to be completed only once, at the incidence of dialysis.[6,7] Nevertheless, CMS-2728 data are still used for health care policy development because they are much easier to access and process, compared with the resources required to create claims-based models. However, there has been concern for many years regarding the accuracy of data in CMS-2728.[8,9] Earlier studies attempted to validate comorbid conditions reported on CMS-2728 against clinical data or claims data; the results showed sensitivity <0.6, specificity >0.9, and agreement and kappa statistics <0.5.[7,8,10-12] On the other hand, case-mix adjustment based on administrative claims data (compared with more reliable medical records) is generally considered suitable for profiling hospital performance.[13] In other words, using information garnered from claims data in case-mix profile development models appears to have acceptable quality. With this background, we decided to assess the impact of misclassification in case-mix factors on profiling in dialysis. In this article, we compared dialysis profiling results using comorbidity data from Medicare claims versus CMS-2728, with the 30-day standardized readmission ratio (SRR) as the outcome metric. In addition, we conducted simulation studies to examine the potential effect of misclassification on the estimation of regression coefficients in the statistical models used in the development of profiling strategies, as well as on profiling itself. We sought to check whether the real data analysis and the simulation study provide consistent results and messages.

Methods

Underlying Models

CMS has employed a hierarchical logistic regression (exchangeable) model for profiling health care providers.[14,15] Given a binary outcome Y_ij for the jth patient/discharge in the ith provider (i = 1, . . ., I; j = 1, . . ., n_i) and case-mix factors (X_ij, Z_ij), the model can be written in a simple form:

logit{P(Y_ij = 1)} = γ_i + β1 X_ij + β2 Z_ij,  (1)

where the provider-specific intercepts, or random effects, are γ_i ~ N(β0, σ²), and X and Z are accurately measured covariates. The CMS model adopted in practice may be written with the error-prone W in place of the generally unmeasured X:

logit{P(Y_ij = 1)} = γ_i^ME + β1^ME W_ij + β2^ME Z_ij,  (2)

where the superscript ME denotes measurement error and indicates parameters to be estimated with the observed covariate W. When X is categorical, for example, a binary variable such as true baseline comorbidity status (1 = yes, 0 = no), ME is often called misclassification, and the relationship between X and W may be quantified via sensitivity (SN) and specificity (SP)[16,17]:

SN = P(W = 1 | X = 1),  SP = P(W = 0 | X = 0).

We assume that W depends only on X, not on Y; that is, the ME is non-differential.[18]
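The SN/SP relationship above can be mimicked in a few lines; the sketch below (the helper name `misclassify` is ours, not from the paper) generates a non-differentially error-prone copy W of a binary covariate X and checks that the empirical agreement rates match the nominal SN/SP:

```python
import numpy as np

def misclassify(x, sn, sp, rng):
    """Error-prone copy W of binary X with P(W=1|X=1)=sn, P(W=0|X=0)=sp.

    W depends on X only (not on the outcome), i.e., the
    misclassification is non-differential, as assumed in the text.
    """
    u = rng.random(x.shape)
    # where X = 1, report 1 with probability sn;
    # where X = 0, report 1 with probability 1 - sp
    return np.where(x == 1, u < sn, u >= sp).astype(int)

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.5, size=100_000)  # true comorbidity status
w = misclassify(x, sn=0.9, sp=0.9, rng=rng)

print(w[x == 1].mean())      # empirical SN, close to 0.9
print(1 - w[x == 0].mean())  # empirical SP, close to 0.9
```

The same construction (Bernoulli X, W conditionally independent of the outcome given X) underlies the simulation scenarios later in the paper.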

Profiling Schemes

SRR for the ith provider can be defined as:

SRR_i = (Σ_j P_ij) / (Σ_j E_ij),  P_ij = h(γ̂_i + β̂1 X_ij + β̂2 Z_ij),  E_ij = h(β̂0 + β̂1 X_ij + β̂2 Z_ij),

where h(u) = 1/(1 + e^(−u)) is the logistic function. Here, E_ij denotes the "expected" outcome rate based on the fixed effect parameters, and P_ij denotes the "predicted" outcome rate based on both the fixed effect parameters and the provider-specific random effect in model (1). Let SRR_i^ME be the corresponding estimate based on model (2) with X replaced by W. A bootstrap algorithm for profiling providers was proposed by CMS.[1,19] We obtained the 95% confidence interval (CI) of the SRR for each provider, profiling it as "worse" (ie, underperformance) if the lower 2.5% limit is >1; "better" if the upper 2.5% limit is <1; and "normal" otherwise. For our simulation later, a provider was assigned true "worse" if its random intercept γ_i fell above the upper 2.5% point of the theoretical distribution of the random intercepts; true "better" if below the lower 2.5% point; and true "normal" otherwise.[20] To assess the profiling performance, we focused on 2 evaluation criteria: profiling sensitivity and specificity. Of note, the identification of truly "worse" providers could be of particular importance as they could face a financial penalty in the form of reduced reimbursement for services rendered. Sensitivity (SN) for profiling worse providers is

SN = #(profiled worse and truly worse) / #(truly worse),

and specificity (SP) for profiling worse providers is

SP = #(not profiled worse and not truly worse) / #(not truly worse),

where "normal" performance implies "no reduction in payment" when quality is linked to payment.[21]
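A minimal sketch of the SRR definition and the CI-based profiling rule described above (function names are ours; the actual CMS implementation obtains the CI by bootstrapping the fitted hierarchical model):

```python
import numpy as np

def expit(u):
    # logistic function h(u) = 1 / (1 + exp(-u))
    return 1.0 / (1.0 + np.exp(-u))

def srr(gamma_i, beta0, beta, x):
    """SRR for one provider: sum of 'predicted' rates (using the
    provider's random intercept gamma_i) over sum of 'expected'
    rates (using the population intercept beta0)."""
    predicted = expit(gamma_i + x @ beta).sum()
    expected = expit(beta0 + x @ beta).sum()
    return predicted / expected

def profile(ci_lower, ci_upper):
    """Classify a provider from the 95% CI of its SRR."""
    if ci_lower > 1:
        return "worse"   # significantly more readmissions than expected
    if ci_upper < 1:
        return "better"
    return "normal"

# a provider whose intercept equals the population mean has SRR = 1
x = np.zeros((20, 2))           # toy case-mix matrix (made up)
beta = np.array([0.5, -0.5])
print(srr(0.0, 0.0, beta, x))   # -> 1.0
print(profile(1.05, 1.40))      # -> 'worse'
```

The ratio exceeds 1 exactly when the provider's own intercept pushes predicted readmissions above what its case mix alone would imply.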

USRDS Example

In this section, we conducted a case study using 30-day unplanned hospital readmission (namely, SRR) as the profiling outcome. We analyzed SRR and the subsequent effects on dialysis facility rating scores using either Medicare claims or CMS-2728 (see Supplemental Table S3), the 2 commonly used sources of comorbidity data in nephrology. We wanted to determine whether case-mix adjustment using different data sources would alter the final dialysis facility rating. CMS utilizes a 2-stage model: the first stage is a double random effect logistic regression model in which both dialysis facilities and hospitals are modeled as random effects; the second stage is a mixed effect logistic regression model used to calculate SRR when profiling dialysis facilities, in which dialysis facilities are modeled as fixed effects and hospitals as random effects, with the standard deviation estimated from the first stage. For each index hospitalization, past-year comorbidities based on Medicare claims were grouped into Hierarchical Condition Categories; see Supplemental Table S4 for the list. In this analysis, we assessed misclassification under a simplified model including only dialysis facilities as random effects. This random intercept logistic regression model was used by CMS for the hospital-wide readmission measure, and we followed the set of guidelines provided by CMS for data processing.[19,22] The algorithm to assign index discharges and unplanned post-index readmissions within 30 days of index discharge was derived from the hospital-wide all-cause unplanned readmission measure, and we modeled the case-mix-adjusted 30-day SRR. 
For case-mix, we adjusted for the following factors: age, sex, body mass index, primary cause and years of ESRD, duration of index hospitalization, and a total of 11 comorbidities (alcohol dependence; drug dependence; tobacco use; diabetes; cancer; chronic obstructive pulmonary disease; and cardiovascular diseases including atherosclerotic heart disease, congestive heart failure [CHF], cerebrovascular disease, peripheral vascular disease, and other cardiac).[8,10] The dialysis facility profile that used claims data from the year prior to dialysis initiation was regarded as the reference standard.[10] We compared it against 2 alternative approaches using comorbid conditions captured from CMS-2728: (1) using CHF as recorded on CMS-2728, with all other conditions from claims, and (2) using all 11 comorbidities from CMS-2728. We chose these 11 comorbidities as in previous studies on the concordance of data in CMS-2728 and claims.[8,10] These 11 comorbidities on CMS-2728 can be matched to ICD-9 codes, whereas other variables, such as "institutionalization," do not have ICD-9 codes. Also, CHF is among the important risk factors in kidney disease (https://nccd.cdc.gov/CKD/Calculators.aspx), and its prevalence is not only relatively high but also differs substantially between the 2 data sources (57% based on claims and 39% based on the 2728 form, as shown below). We selected CHF to examine the impact of misclassification for illustrative purposes. Also, the list of final risk adjusters could differ from year to year, as reflected in different years' manuals.[23] Data analyses were carried out with SAS® 9.4, following the technical notes from the CMS guidelines.[19] Among 90 373 elderly patients 67 years or older captured from the USRDS who started dialysis during July 1, 2006, to June 30, 2009, we extracted hospitalization information from January 1, 2010, to June 30, 2012. 
After excluding small facilities with 10 or fewer index discharges, there were 63 142 index discharges corresponding to 20 744 patients discharged from 2740 dialysis facilities. The overall 30-day unplanned all-cause readmission rate was about 29%, similar to the 30% national readmission rate in the 2014 Dialysis Report.[22] The number of index discharges per facility had a mean of 23 and a median of 20, with a standard deviation of 12. Table 1 shows that after using the CHF information recorded on the CMS-2728 in place of the claims data, the estimated odds ratio for each predictor did not change or changed only minimally in the multiple regression. However, there were 3 facilities whose profile status did change; 2 were upgraded and 1 downgraded in their performance ratings, as seen in Table 2. We further computed the prevalence of CHF, SN, and SP among the 2740 facilities and report the results in Supplemental Table S1. The prevalence dropped from 56.6% using claims data to 38.9% when using CMS-2728. However, the prevalence of CHF among the 2 upgraded facilities remained similar; worse to normal: 86.8% (claims) versus 84.2% (CMS-2728), and normal to better: 64.3% (claims) versus 67.9% (CMS-2728). In contrast, the prevalence of CHF dropped from 100% (claims) to 0% (CMS-2728) in the facility downgraded from normal to worse. This may imply that extreme under-reporting (eg, no recording of a key factor) can make a difference in the end result.
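The facility-level prevalence and SN/SP comparisons of the kind reported in Supplemental Table S1 can be computed with a simple routine like the following (a sketch with made-up flags; claims are treated as the reference standard, as in the case study):

```python
import numpy as np

def facility_agreement(claims, form):
    """Prevalence of a comorbidity under each data source, plus
    SN/SP of the form-based flag against the claims-based flag
    (claims taken as the reference standard)."""
    claims = np.asarray(claims)
    form = np.asarray(form)
    sn = form[claims == 1].mean() if (claims == 1).any() else float("nan")
    sp = (1 - form[claims == 0]).mean() if (claims == 0).any() else float("nan")
    return {
        "prev_claims": claims.mean(),
        "prev_form": form.mean(),
        "sn": sn,
        "sp": sp,
    }

# toy facility: 8 patients, CHF flags from the two sources (made up)
res = facility_agreement(claims=[1, 1, 1, 1, 0, 0, 0, 0],
                         form=[1, 1, 0, 0, 0, 0, 0, 1])
print(res)  # prev_claims 0.5, prev_form 0.375, sn 0.5, sp 0.75
```

Applying such a routine per facility is what reveals the extreme cases noted above, for example, a facility whose form-based prevalence collapses to 0% despite a claims-based prevalence of 100%.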
Table 1.

USRDS Case Study: Model Fits with Hierarchical Logistic Regression.

| Variable | Level | Model A: OR (95% CI), P | Model B: OR (95% CI), P | Model C: OR (95% CI), P |
|---|---|---|---|---|
| Age at hospitalization (Ref: [67, 75)) | [75, 85) | 0.93 (0.90-0.97), .001 | 0.93 (0.90-0.97), .001 | 0.94 (0.90-0.98), .002 |
| | ≥85 | 0.93 (0.87-0.98), .009 | 0.93 (0.87-0.98), .009 | 0.93 (0.88-0.99), .02 |
| Time on ESRD, year (Ref: <1) | 1-2 | 1 (0.90-1.10), .935 | 1 (0.90-1.10), .928 | 1 (0.91-1.10), .987 |
| | 2-3 | 0.99 (0.90-1.09), .794 | 0.99 (0.90-1.09), .783 | 0.99 (0.90-1.09), .863 |
| | 3-6 | 0.95 (0.87-1.05), .327 | 0.95 (0.87-1.05), .315 | 0.96 (0.87-1.05), .358 |
| Length of stay, day (Ref: <5) | 5 | 1.05 (1.00-1.11), .063 | 1.05 (1.00-1.11), .068 | 1.05 (0.99-1.11), .076 |
| | 6 | 1.2 (1.13-1.28), <.0001 | 1.2 (1.13-1.28), <.0001 | 1.21 (1.13-1.28), <.0001 |
| | >6 | 1.33 (1.28-1.39), <.0001 | 1.33 (1.28-1.39), <.0001 | 1.33 (1.28-1.39), <.0001 |
| Gender | Male | 0.88 (0.85-0.91), <.0001 | 0.87 (0.84-0.91), <.0001 | 0.87 (0.84-0.91), <.0001 |
| BMI category (Ref: <20) | [20, 25) | 1.01 (0.94-1.09), .754 | 1.01 (0.94-1.10), .709 | 1.02 (0.94-1.10), .696 |
| | [25, 30) | 0.99 (0.92-1.07), .885 | 0.99 (0.92-1.07), .891 | 0.99 (0.92-1.07), .881 |
| | [30, 35) | 0.92 (0.85-1.00), .061 | 0.93 (0.85-1.01), .067 | 0.93 (0.85-1.01), .068 |
| | ≥35 | 0.88 (0.81-0.96), .004 | 0.88 (0.81-0.96), .004 | 0.88 (0.81-0.96), .005 |
| Diabetes as primary ESRD cause | Y | 1.01 (0.97-1.06), .581 | 1.01 (0.97-1.05), .677 | 0.99 (0.94-1.05), .769 |
| Alcohol dependence | Y | 1.17 (0.97-1.42), .106 | 1.19 (0.99-1.45), .07 | 0.87 (0.67-1.14), .32 |
| AHD | Y | 1.1 (1.05-1.14), <.0001 | 1.11 (1.06-1.15), <.0001 | 1.04 (0.99-1.08), .115 |
| Cancer | Y | 0.98 (0.93-1.03), .444 | 0.98 (0.93-1.03), .478 | 0.94 (0.89-1.01), .078 |
| CHF | Y | 1.12 (1.07-1.16), <.0001 | 1.1 (1.06-1.15), <.0001 | 1.11 (1.07-1.16), <.0001 |
| COPD | Y | 1.14 (1.09-1.19), <.0001 | 1.15 (1.10-1.20), <.0001 | 1.19 (1.12-1.26), <.0001 |
| CBVD | Y | 1.07 (1.02-1.11), .004 | 1.07 (1.02-1.12), .003 | 1.07 (1.01-1.13), .026 |
| Diabetes | Y | 0.98 (0.94-1.03), .463 | 0.99 (0.95-1.04), .693 | 1.03 (0.97-1.09), .375 |
| Drug dependence | Y | 1.23 (0.99-1.54), .064 | 1.23 (0.99-1.54), .064 | 2 (1.25-3.21), .004 |
| Other cardiac | Y | 1.01 (0.97-1.05), .785 | 1.02 (0.98-1.06), .433 | 1.08 (1.03-1.13), .001 |
| PVD | Y | 1.04 (1.00-1.08), .051 | 1.05 (1.01-1.09), .023 | 0.98 (0.93-1.03), .467 |
| Tobacco user | Y | 1.1 (1.03-1.18), .003 | 1.11 (1.04-1.18), .002 | 1.21 (1.10-1.33), <.0001 |

Note. Models: A = 11 types of comorbidity conditions based on past year claims prior to dialysis initiation. B = Replace CHF from CMS 2728 form. C = Replace all 11 types of comorbid conditions based on CMS-2728 form. USRDS = US Renal Data System; OR = odds ratio; CI = confidence interval; ESRD = end-stage renal disease; BMI = body mass index; AHD = atherosclerotic heart disease; CHF = congestive heart failure; COPD = chronic obstructive pulmonary disease; CBVD = cerebrovascular disease; PVD = peripheral vascular disease.

Table 2.

USRDS Case Study: Profiling.

| Profile (model A) | Model B: Better | Model B: Normal | Model B: Worse | Model C: Better | Model C: Normal | Model C: Worse | Total |
|---|---|---|---|---|---|---|---|
| Better | 3 (0.1%) | 0 (0%) | 0 (0%) | 3 (0.1%) | 0 (0%) | 0 (0%) | 3 (0.1%) |
| Normal | 1 (<0.1%) | 2663 (97%) | 1 (<0.1%) | 0 (0%) | 2661 (97%) | 4 (0.1%) | 2665 (97%) |
| Worse | 0 (0%) | 1 (<0.1%) | 71 (2.6%) | 0 (0%) | 8 (0.3%) | 64 (2.3%) | 72 (2.6%) |
| Total | 4 (0.1%) | 2664 (97%) | 72 (2.6%) | 3 (0.1%) | 2669 (97%) | 68 (2.5%) | 2740 (100%) |

Note. Models A = Comorbidity based on past year claims prior to dialysis initiation. B = Replace CHF from CMS 2728 form. C = Replace all 11 types of comorbidity conditions based on CMS-2728 form. USRDS = US Renal Data System.

Next, SRR estimates and profiling status were compared when all 11 comorbid conditions were obtained from claims data versus CMS-2728. Figure 1 demonstrates that the bootstrapped means of SRR obtained with the 2 data sources were highly correlated (correlation coefficient of 0.99). The median of the relative differences was −0.06 percentage points, ranging from −12.2% to 9.7%. With reference to claims-based comorbidity adjustment, 8 (out of 72) worse providers were upgraded to normal and 4 normal providers were downgraded to worse when the same model was derived with CMS-2728; see Table 2.
Figure 1.

Standardized readmission ratio (SRR) derived from claims data versus CMS-2728 data using bootstrap.

Note. CMS = Center for Medicare and Medicaid Services.


Simulation Study

We further designed a set of simulation studies to address 2 objectives: (1) to investigate the effect of misclassification on the estimation of fixed coefficients and random intercepts and (2) to compare profiling behavior/performance under different misclassification settings. Guided by the original CMS model developers, we chose to approximate the national readmission rate among dialysis facilities (~30%).[24] X and Z were generated independently from a Bernoulli distribution with probability 0.5, with the associated coefficients β1 (for X) and β2 (for Z) set to the values shown in Table 3. The simulations were carried out under a fixed number of providers (100) and a fixed patient volume per provider. The unobserved (X) and observed (W) covariates were generated from a multivariate Bernoulli distribution, where W and the outcome were set to be conditionally independent given X (non-differential ME), with SN/SP varied over the 7 misclassification scenarios. We also examined between-provider variability, for example, low (σ² = 0.22) and high (σ² = 1) in equation (1), informed by previous studies.[20,24] Simulations were conducted using R version 3.3.3, including the lme4 and bindata packages.[25,26] The first experiment, using 1000 simulations, examined the effect of misclassification on the regression parameters. From Table 3, when SN or SP for the error-prone variable decreased, the estimate of the fixed effect parameter β1 tended to be attenuated toward the null, a well-known phenomenon in the ME literature.[17,18,27] Given that the empirical variability of the estimates of β1 was stable across settings under varied SN/SP, the increase in absolute bias in β1 led to an increase in mean squared error. In contrast, for the precisely measured Z, which is independent of X, neither bias nor variance in its regression coefficient, β2, was meaningfully affected by the presence of misclassification, with the coverage probability (CP) maintained close to the desired 95%.
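The attenuation toward the null can be reproduced outside the full hierarchical model. The sketch below (our own illustration, not the paper's R code) drops the random intercept for simplicity and uses a single binary predictor, for which the logistic slope equals the log odds ratio of Y across the two predictor levels; the intercept is chosen so the baseline outcome rate is about 30%, as in the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
beta0, beta1 = -0.847, 0.5   # illustrative values; no random intercept here

x = rng.binomial(1, 0.5, n)                                   # true covariate
y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))   # outcome

# error-prone copy with SN = SP = 0.9 (non-differential)
u = rng.random(n)
w = np.where(x == 1, u < 0.9, u >= 0.9).astype(int)

def log_odds_ratio(y, v):
    """For a single binary predictor, the logistic slope equals the
    log odds ratio of Y across the two levels of the predictor."""
    p1, p0 = y[v == 1].mean(), y[v == 0].mean()
    return np.log(p1 / (1 - p1)) - np.log(p0 / (1 - p0))

b_x = log_odds_ratio(y, x)  # close to the true 0.5
b_w = log_odds_ratio(y, w)  # attenuated toward 0 (about 0.4 here)
print(b_x, b_w)
```

With SN = SP = 0.9 the slope shrinks from about 0.5 to about 0.4, the same order of attenuation reported in Table 3 for that scenario.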
Table 3.

Effect of Misclassification on the Estimation of Fixed Effect Coefficients.

| σ² | SN | SP | β0 = −0.847 (intercept): Mean / Var / MSE / CP | β1 = 0.5 (for X): Mean / Var / MSE / CP | β2 = −0.5 (for Z): Mean / Var / MSE / CP |
|---|---|---|---|---|---|
| 0.22 | 1 | 1 | −0.843 / 0.002 / 0.002 / 0.95 | 0.499 / 0.002 / 0.002 / 0.95 | −0.498 / 0.002 / 0.002 / 0.95 |
| 0.22 | 0.9 | 0.9 | −0.790 / 0.002 / 0.005 / 0.74 | 0.399 / 0.002 / 0.012 / 0.36 | −0.495 / 0.002 / 0.002 / 0.94 |
| 0.22 | 0.5 | 0.9 | −0.656 / 0.002 / 0.038 / 0.00 | 0.231 / 0.002 / 0.075 / 0.00 | −0.493 / 0.002 / 0.002 / 0.94 |
| 0.22 | 0.1 | 0.9 | −0.585 / 0.001 / 0.070 / 0.00 | −0.002 / 0.005 / 0.257 / 0.00 | −0.491 / 0.002 / 0.002 / 0.94 |
| 0.22 | 0.9 | 0.5 | −0.756 / 0.003 / 0.011 / 0.56 | 0.241 / 0.003 / 0.069 / 0.00 | −0.493 / 0.002 / 0.002 / 0.94 |
| 0.22 | 0.9 | 0.1 | −0.589 / 0.005 / 0.072 / 0.06 | 0.004 / 0.005 / 0.252 / 0.00 | −0.491 / 0.002 / 0.002 / 0.94 |
| 0.22 | 0.5 | 0.5 | −0.585 / 0.002 / 0.070 / 0.00 | 0.000 / 0.002 / 0.252 / 0.00 | −0.491 / 0.002 / 0.002 / 0.94 |
| 1 | 1 | 1 | −0.834 / 0.011 / 0.011 / 0.96 | 0.496 / 0.002 / 0.002 / 0.94 | −0.496 / 0.002 / 0.002 / 0.96 |
| 1 | 0.9 | 0.9 | −0.781 / 0.011 / 0.015 / 0.92 | 0.395 / 0.002 / 0.013 / 0.39 | −0.494 / 0.002 / 0.002 / 0.96 |
| 1 | 0.5 | 0.9 | −0.649 / 0.010 / 0.050 / 0.52 | 0.231 / 0.002 / 0.075 / 0.00 | −0.491 / 0.002 / 0.002 / 0.96 |
| 1 | 0.1 | 0.9 | −0.578 / 0.010 / 0.083 / 0.25 | −0.003 / 0.006 / 0.259 / 0.00 | −0.490 / 0.002 / 0.002 / 0.96 |
| 1 | 0.9 | 0.5 | −0.747 / 0.011 / 0.021 / 0.87 | 0.239 / 0.003 / 0.071 / 0.00 | −0.491 / 0.002 / 0.002 / 0.96 |
| 1 | 0.9 | 0.1 | −0.581 / 0.014 / 0.085 / 0.41 | 0.003 / 0.006 / 0.253 / 0.00 | −0.490 / 0.002 / 0.002 / 0.96 |
| 1 | 0.5 | 0.5 | −0.578 / 0.011 / 0.083 / 0.27 | −0.002 / 0.002 / 0.254 / 0.00 | −0.490 / 0.002 / 0.002 / 0.96 |

Note. SN = 1 and SP = 1 represents no misclassification. Results are based on 1000 simulations. Data are generated from equation (1). SN = sensitivity; SP = specificity; Var = Variance; MSE = mean squared error; CP = coverage probability.

Table 4 summarizes the CP based on whether the 95% Wald CI contains the true value of the random intercept for each provider, grouped by true profiling status. When SN or SP of X decreased, the CP for the random intercepts was stable across the 3 types of true profiling status. However, when σ² increased from 0.22 to 1, the CP for true worse and true better providers increased markedly, while the CP for normal providers decreased minutely, which implies that higher variability improved sensitivity among better or worse providers.
Table 4.

Effect of Misclassification on the Estimation of Coverage Probability for Random Intercepts Based on True Profiling Status.

| SN | SP | σ² = 0.22: Better | Normal | Worse | σ² = 1: Better | Normal | Worse |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 0.53 | 0.96 | 0.57 | 0.83 | 0.93 | 0.89 |
| 0.9 | 0.9 | 0.52 | 0.95 | 0.56 | 0.83 | 0.93 | 0.88 |
| 0.5 | 0.9 | 0.51 | 0.95 | 0.55 | 0.82 | 0.93 | 0.87 |
| 0.1 | 0.9 | 0.50 | 0.95 | 0.56 | 0.82 | 0.93 | 0.87 |
| 0.9 | 0.5 | 0.51 | 0.95 | 0.56 | 0.82 | 0.93 | 0.87 |
| 0.9 | 0.1 | 0.50 | 0.95 | 0.56 | 0.81 | 0.93 | 0.87 |
| 0.5 | 0.5 | 0.50 | 0.95 | 0.56 | 0.82 | 0.93 | 0.87 |

Note. 1000 simulations are used. SN = sensitivity; SP = specificity.

The second experiment, using 100 simulations, investigated the effect of misclassification on profiling under the same set of simulation parameters as in the first experiment. The simulation findings indicate that the profiling results appeared to be robust. The case of σ² = 0.22 showed low sensitivity for both true worse (eg, SN 0.26) and true better (SN 0.11) providers, but higher sensitivity for true normal (SN 0.98). By comparison, the case of σ² = 1 showed the highest sensitivity for both true worse (SN 1) and true better (SN 1) providers, but lower sensitivity for true normal providers (SN 0.4); see Table 5. We also observed that, under high between-provider variability, a substantial number of normal performers (~30%) were declared to be worse performers. Such downgrading of clinic ratings may subject those clinics to unjust penalties.[20]
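The profiling SN/SP entries in Table 5 are computed from cross-classifications of true versus assigned status, as defined in the Profiling Schemes section; a sketch with hypothetical counts (the helper name is ours):

```python
def profiling_sn_sp(true_status, called_status, target="worse"):
    """SN/SP for flagging providers in the `target` category,
    given lists of true and profiled statuses."""
    pairs = list(zip(true_status, called_status))
    tp = sum(t == target and c == target for t, c in pairs)
    fn = sum(t == target and c != target for t, c in pairs)
    tn = sum(t != target and c != target for t, c in pairs)
    fp = sum(t != target and c == target for t, c in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 4 truly worse providers, 3 flagged correctly, 1 false alarm
true_s = ["worse"] * 4 + ["normal"] * 6
called = ["worse"] * 3 + ["normal"] + ["worse"] + ["normal"] * 5
print(profiling_sn_sp(true_s, called))  # SN 0.75, SP ~0.83
```

The same counts, tabulated per misclassification scenario and variance setting, produce the SN/SP columns of Table 5.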
Table 5.

Effect of Misclassification on Profiling.

| SN | SP | SRR profile | σ² = 0.22: true Better (%) | true Normal (%) | true Worse (%) | Profiling SN | Profiling SP | σ² = 1: true Better (%) | true Normal (%) | true Worse (%) | Profiling SN | Profiling SP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 1 | Better | 0.27 | 0.72 | 0 | 0.11 | 0.99 | 2.45 | 27.77 | 0 | 1.00 | 0.72 |
| | | Normal | 2.18 | 93.22 | 1.81 | 0.98 | 0.19 | 0 | 37.71 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.15 | 0.65 | 0.26 | 0.99 | 0 | 29.61 | 2.46 | 1.00 | 0.70 |
| 0.9 | 0.9 | Better | 0.28 | 0.67 | 0 | 0.11 | 0.99 | 2.45 | 27.67 | 0 | 1.00 | 0.72 |
| | | Normal | 2.17 | 93.3 | 1.83 | 0.98 | 0.19 | 0 | 37.71 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.12 | 0.63 | 0.26 | 0.99 | 0 | 29.71 | 2.46 | 1.00 | 0.70 |
| 0.5 | 0.9 | Better | 0.28 | 0.67 | 0 | 0.11 | 0.99 | 2.45 | 27.63 | 0 | 1.00 | 0.72 |
| | | Normal | 2.17 | 93.31 | 1.84 | 0.98 | 0.18 | 0 | 37.84 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.11 | 0.62 | 0.25 | 0.99 | 0 | 29.62 | 2.46 | 1.00 | 0.70 |
| 0.1 | 0.9 | Better | 0.27 | 0.65 | 0 | 0.11 | 0.99 | 2.45 | 27.61 | 0 | 1.00 | 0.72 |
| | | Normal | 2.18 | 93.35 | 1.84 | 0.98 | 0.18 | 0 | 37.84 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.09 | 0.62 | 0.25 | 0.99 | 0 | 29.64 | 2.46 | 1.00 | 0.70 |
| 0.9 | 0.5 | Better | 0.28 | 0.66 | 0 | 0.11 | 0.99 | 2.45 | 27.66 | 0 | 1.00 | 0.72 |
| | | Normal | 2.17 | 93.35 | 1.83 | 0.98 | 0.19 | 0 | 37.82 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.08 | 0.63 | 0.26 | 0.99 | 0 | 29.61 | 2.46 | 1.00 | 0.70 |
| 0.9 | 0.1 | Better | 0.28 | 0.67 | 0 | 0.11 | 0.99 | 2.45 | 27.65 | 0 | 1.00 | 0.72 |
| | | Normal | 2.17 | 93.31 | 1.83 | 0.98 | 0.19 | 0 | 37.81 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.11 | 0.63 | 0.26 | 0.99 | 0 | 29.63 | 2.46 | 1.00 | 0.70 |
| 0.5 | 0.5 | Better | 0.27 | 0.66 | 0 | 0.11 | 0.99 | 2.45 | 27.63 | 0 | 1.00 | 0.72 |
| | | Normal | 2.18 | 93.31 | 1.83 | 0.98 | 0.18 | 0 | 37.82 | 0 | 0.40 | 1.00 |
| | | Worse | 0 | 1.12 | 0.63 | 0.26 | 0.99 | 0 | 29.64 | 2.46 | 1.00 | 0.70 |

Note. 100 simulations are used. SRR = standardized readmission ratio; SN = sensitivity; SP = specificity.


Discussion

In this era of "pay for performance" and initiatives to enhance patient choice in health care, it is important to understand how case-mix adjustments using various data sources can affect the results of profiling health care providers.[1] For patients on dialysis with Medicare coverage, and for research purposes, there are 2 major data sources for comorbidity ascertainment in the USRDS: Medicare claims and the CMS-2728 Medical Evidence form (incident dialysis comorbidity information). In health care policy, CMS-2728 is used to capture the comorbidities in the development of the standardized mortality ratio (SMR) and standardized hospitalization ratio (SHR), which are 2 components of the "Dialysis Facility Compare Star Rating" (https://www.medicare.gov/dialysisfacilitycompare/), a program aimed at providing consumers with information when choosing outpatient dialysis services.[28-30] However, the SRR in the ESRD QIP, another program implemented by CMS, used prior-year claims data for comorbidity adjustment. Thus, the method for case-mix adjustment in dialysis clinic profiling differs even within the same cohort of ESRD patients and the same operating agency, and may change over different years. The QIP has been used both for payment reduction for facilities that underperform and for a publicly available online rating on the CMS "Dialysis Facility Compare" Web site to inform consumers.[31,32] In this study based on both real and simulated data, we found that commonly encountered, moderate miscoding in covariates or case-mix may have limited influence on profiling. This phenomenon might be partly explained by the similarity between profiling and prediction: in prediction problems, explicit modeling of ME is often not strongly needed. 
In contrast, misclassification generally affects the regression coefficients (measures of association) in the statistical model, a phenomenon well explained by mathematical theory; that is, regression dilution.[18] Between-provider variance can play an important role in the profiling results.[20,33] Simulation results without misclassification in the predictor in Table 5 agree with those from a previous study.[20] For true worse or true better providers, simulations suggest low SN (0.11 for true better, 0.26 for true worse) and high SP (0.99) under the smaller variance versus high SN (1.0) and not-so-high SP (0.7) under the larger variance. For true normal providers, simulations suggest high SN (0.98) and low SP (0.19) under the smaller variance versus low SN (0.4) and the highest SP (1.0) under the larger variance. Given that true worse/better providers were based on the upper/lower 2.5% under our simulations (unlike the 20% better in Ding et al[33]), profiling based on the random intercept model can be more useful under smaller between-provider variance if the goal is to flag a small percentage of outliers, that is, to avoid misclassifying a large number of true normal providers. On the other hand, the case of larger variance showed improved coverage probability overall for the random intercept indexing each provider, and high sensitivity and specificity (whose sum relates to the Youden index). In our USRDS data example, the variance of the random intercepts for facilities was estimated on the logit scale; a total of 72 out of 2740 facilities were flagged as "worse," and only 3 facilities were flagged as "better," as presented in Table 2. In addition, we found that regardless of whether comorbidities were adjusted using the CMS-2728 form or claims data, SRR estimates from the 2 approaches agreed closely (correlation coefficient of 0.99; median relative difference = −0.06 percentage points). 
In prior studies, investigators also observed that hospital readmission rates developed using different data sources and adjusters were similar.[13,34,35] Along this line, it has also been reported that relative profiling approaches for pay-for-performance were more robust to missing data than absolute profiling approaches,[36] where missing data can be viewed as an extreme, special case of misclassification. Also, in studies using simulations to evaluate the impact of under-coding of cardiac disease severity on hospital profiles or report cards, investigators found that the outlier status of most hospitals was robust to under-coding. However, miscoding of very influential predictors of mortality, such as shock or renal failure, could lead to a change in the 30-day mortality rate profile.[37] In our real data analysis example, the prevalence of individual comorbid conditions was lower when taken from the CMS-2728 form (Supplemental Table S2), but similar profiling results were observed with the same statistical model using either data source. However, it was also revealed that profiling status can change for extreme facilities when misclassification varies severely across providers; see Supplemental Tables S1 and S2. When we replaced 1 covariate (CHF) with the version ascertained from the CMS-2728 form, 3 out of 2740 (0.1%) facilities changed profiling status (facility #1 to #3, Supplemental Table S2). When we replaced all 11 types of comorbidity conditions with the other data source, 12 out of 2740 facilities (0.4%) changed profiling status (facility #2 to #13, Supplemental Table S2). A total of 4 facilities (facility #3 to #6) could newly face a penalty when the CMS-2728 form (the less reliable data source) was used. 
In the CMS dry run of SRR for dialysis facilities, CHF was removed from the past-year comorbidities because of its presence in many ESRD patients and its modifiability.[23] Our real data analysis (Supplemental Table S2, facilities #1, #2, and #3) may suggest a potential flaw in the current dialysis facility QIP when using SMR as the outcome: the standardized mortality ratio was adjusted for comorbidities from the patient's CMS-2728 form, for example, CHF.[23] There is existing literature on the agreement between different data sources for comorbidities (eg, CMS-2728 vs claims) in ESRD and on the impact of using different data sources on profiling models outside ESRD. Thus, we consider our work the combination of these two, accompanied by "statistical" evaluation (eg, mean squared error, coverage probability, and sensitivity/specificity), and the first study of its kind in ESRD. Readers may find that our conclusions are generally supported by theory, empirical real-world data analysis, and statistical simulation (where the truth is known), and are in agreement with previous related findings. Other unanswered questions include whether the duration of time between dialysis initiation and the CMS-2728 form completion date affects misclassification, and whether facilities with dialysis patients of greater vintage (prevalent time on dialysis) may also face more misclassification. The process of data input onto the CMS-2728 is extremely variable and done with varying degrees of accuracy. It is supposed to be completed within 45 days of the first dialysis treatment for ESRD, at the outpatient dialysis clinic, not in the hospital. Notably, there is no penalty if the completion and submission of the form to the local dialysis network are delayed. The local dialysis network will generate a form listing the incomplete 2728 submissions, but there are no published data on the frequency of incomplete submissions at 45 days. These could serve as good future research questions.[7,8,10,38,39] The limitations of our study should be noted. 
First, in the simulation study, we considered only simple scenarios with limited configurations; for example, misclassification rates and facility sizes held constant across providers, non-differential measurement error, and 2 covariates. Although simple settings can better elucidate mechanisms and facilitate interpretation, future investigations are warranted under more complicated settings. Second, there are different profiling models besides the CMS model/method that we selected; for example, random versus fixed effects, 2-stage approaches, Cox and piecewise Poisson models, and observed or predicted values (vs expected values in standardized ratios) have been used, and results with different policy implications have been observed.[1,10,12,24,40] These contradictions can be investigated for further elucidation and possible resolution in the future. Third, we did not have a gold standard for comorbidity determination, so claims data served as the reference standard, which is currently utilized by CMS for profiling hospitals based on 30-day readmission ratios.[13,19] Based on the simulation and real data example, we conclude that misclassification in covariates can affect regression coefficients in the models used for profiling, but affects profiling itself less. However, extreme scenarios (such as completely missing or omitted data for an important covariate) and between-provider variability can influence and change the final profile status.

Supplemental material for Assessing the Impacts of Misclassified Case-Mix Factors on Health Care Provider Profiling: Performance of Dialysis Facilities by Yi Mu, Andrew I. Chin, Abhijit V. Kshirsagar, and Heejung Bang (INQUIRY: The Journal of Health Care Organization, Provision, and Financing) is available online.
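The mechanism behind "biased coefficient, robust profiling" can be reproduced in miniature: non-differential misclassification of a binary covariate attenuates its estimated effect toward zero, even as the fitted model continues to reproduce the marginal event rate that drives expected counts in standardized ratios. A hedged sketch with invented parameters (a single-covariate logistic model, for which the slope equals the log odds ratio of the 2x2 table, rather than the actual CMS model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_odds_ratio(x, y):
    """Log odds ratio of a 2x2 table = logistic slope for one binary covariate."""
    a = np.sum((x == 1) & (y == 1)); b = np.sum((x == 1) & (y == 0))
    c = np.sum((x == 0) & (y == 1)); d = np.sum((x == 0) & (y == 0))
    return np.log(a * d / (b * c))

n = 200_000
x_true = rng.binomial(1, 0.3, n)                   # true comorbidity status
p = 1 / (1 + np.exp(-(-2.0 + 0.8 * x_true)))       # true event probabilities
y = rng.binomial(1, p)                             # eg, 30-day readmission

# Non-differential misclassification: sensitivity 0.7, specificity 0.95
sens, spec = 0.7, 0.95
x_err = np.where(x_true == 1, rng.binomial(1, sens, n),
                 rng.binomial(1, 1 - spec, n))

b_true = log_odds_ratio(x_true, y)   # close to the true slope, 0.8
b_err = log_odds_ratio(x_err, y)     # attenuated toward 0
print(f"slope with true X:  {b_true:.2f}")
print(f"slope with noisy X: {b_err:.2f}")
```

Under this setup the slope shrinks from about 0.8 to roughly 0.6, yet both fits reproduce the observed marginal event rate, so aggregated expected counts, the denominators of standardized ratios, move far less than the coefficient does. This echoes the paper's simulation finding of notable coefficient bias alongside robust SRR estimates.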