Jason P Estes1, Damla Şentürk2, Esra Kürüm3, Connie M Rhee4, Danh V Nguyen4. 1. Mountain View, CA 94043, USA. 2. Department of Biostatistics, University of California, Los Angeles, CA 90095, USA. 3. Department of Statistics, University of California, Riverside, CA 92521, USA. 4. Department of Medicine, University of California Irvine, Orange, CA 92868, USA.
Abstract
Profiling or evaluation of health care providers, including hospitals or dialysis facilities, involves the application of hierarchical regression models to compare each provider's performance with respect to a patient outcome, such as unplanned 30-day hospital readmission. This is achieved by comparing a specific provider's estimate of the unplanned readmission rate, adjusted for patient case-mix, to a normative standard, typically defined as an "average" national readmission rate across all providers. Profiling is of national importance in the United States because the Centers for Medicare and Medicaid Services (CMS) policy for payment to providers depends on providers' performance, as part of a national strategy to improve the delivery and quality of patient care. Novel high-dimensional fixed effects (FE) models have been proposed for profiling dialysis facilities and are more focused on inference for the tail of the distribution of provider outcomes, which is well-suited to the objective of identifying sub-standard ("extreme") performance. However, the extent to which estimation and inference procedures for FE profiling models are effective when the outcome is sparse and/or when there are relatively few patients within a provider, referred to as the "low information" context, has not been examined. This scenario is common in practice when the patient outcome of interest is a cause-specific 30-day readmission, such as 30-day readmission due to infection in patients on dialysis, which occurs at a rate of only about 8%, compared to greater than 30% for all-cause 30-day readmission. Thus, we examine the feasibility and effectiveness of profiling models under the low information context in simulation studies and propose a novel correction to FE profiling model estimation to better handle sparse outcome data.
INTRODUCTION

Unplanned readmissions following a hospital discharge are a major source of morbidity and mortality risk for patients on dialysis. The burden of hospitalization is particularly high for these patients: the latest U.S. national data show that the frequency of 30-day readmissions is 31.1%, more than double the frequency of readmissions seen in older Medicare beneficiaries without kidney disease (United States Renal Data System/USRDS [1]).

Profiling or evaluation of health care providers, such as hospitals, dialysis facilities, and nursing homes, serves multiple purposes, including (1) identifying providers with performance below standard, by government agencies for regulatory or payment purposes, and (2) conveying information and feedback to stakeholders (e.g., the public, patients, providers) regarding the quality of care among providers. The main focus of our work is objective (1), specifically the goal of identifying providers whose performances (e.g., 30-day readmission) are exceptionally worse (W) or not different (ND) relative to a reference, such as a national "average" standard. Related to the inferential process of identifying/flagging providers with 30-day readmission rates W and ND relative to the national rate, it is also of interest to obtain accurate estimates of provider-specific effects and associated quality metrics.

When the outcome, such as 30-day readmission, is infrequent and/or when there are relatively few patients within a provider, referred to as the "low information" context [2], estimation and inference for profiling models are understandably more challenging. This is the situation when the patient outcome of interest is a cause-specific 30-day readmission, such as 30-day readmission due to infections in patients on dialysis, which occurs at a rate of only about 8%, compared to greater than 30% for all-cause 30-day readmission.
Infection-related hospitalizations are serious adverse events that are oftentimes preventable. Hence, they are an important performance indicator that is carefully monitored in dialysis facilities.

Respecting the data structure in which patients are nested within providers, current profiling models for 30-day unplanned hospital readmission are hierarchical logistic regressions of the form outcome ~ provider effects + patient case-mix effects. Thus, patient outcomes vary across providers due to variation in providers' quality of care (provider-specific effects) and variation in patient-level case-mix effects, which include demographics, comorbidities, and the type of index admission. Because of the nested data structure and the need to stabilize estimation, modeling provider effects as random effects (RE) has been common [2-7].

A justification for the use of RE models is that they provide stable provider effect estimates through shrinkage, although several inherent disadvantages have been noted. In particular, RE estimates are biased toward the overall provider average and biased in the presence of confounding between patient risk factors and provider effects [8]. Also, although the overall average error in estimation of provider effects is smaller because mean square error is minimized over the full set of provider effects in the RE approach, fixed effects (FE) estimates have smaller error for outlier "providers whose effects are exceptionally large or small" [8], which are precisely the providers we wish to identify. Our previous work has also shown that the benefit of stabilization comes at a severe cost: substantially biased provider effect estimation and, perhaps more important, a substantial reduction in the power to identify W providers [9, 10]. Our work and others have used high-dimensional FE models to identify sub-standard ("extreme") performance, especially for profiling 30-day readmission for dialysis facilities where the outcome is not sparse [3, 8-15].
However, the extent to which FE models are useful in the low information context has not been studied; this is the focus of the present work. We assess the relative performance of the FE model proposed by He et al. [15], including the stability of provider-specific estimates and the ability to identify extreme providers, in simulation studies. Briefly, the FE model of He et al. [15] is a high-dimensional parameter model with a unique fixed intercept for each provider and has been used in assessing the performance of dialysis facilities [3, 8, 15]; see also Chen et al. [14] and Estes et al. [11, 12] for recent dialysis facility profiling applications. In addition, we propose and examine the performance of a novel corrected FE model estimation approach geared towards estimation in the low information context, where the (uncorrected) FE model estimates of some provider-specific effects may be unreliable.
METHODS: HIGH-DIMENSIONAL FE PROFILING MODELS
We introduce the FE profiling model using the context of hospital readmission as an illustrative example. Let the binary outcome Yij equal 1 if patient index discharge j in provider i results in a readmission within 30 days, for patient discharge j = 1,2,…,ni in provider (dialysis facility) i = 1,2,…,F. The FE profiling model (He et al. [15]) is

g(μij) = γi + β′Zij,  (1)

where γ = (γ1,…,γF)′ are the provider-specific fixed effects, μij ≡ E(Yij ∣ Zij) = Pr(Yij = 1 ∣ β, γi, Zij) = pij is the expected readmission for patient index discharge j = 1,2,…,ni in provider i = 1,2,…,F, and g(p) = log{p/(1 − p)} is the logit link function. In profiling model (1), the r patient risk adjustment factors for discharge j in provider i are denoted by the covariate vector Zij = (Zij1,…,Zijr)′ corresponding to parameters β = (β1,…,βr)′. In practice, the process of risk adjustment is complex and depends, in part, on policy objectives and the specific patient population (e.g., general population, dialysis population). However, we point out that it is critical to adequately risk-adjust for patient-level factors and to avoid inclusion of variables (e.g., provider-level or patient-level variables) that are or may be related to the process of care (e.g., see [2, 3, 13]).

To avoid confusion, we emphasize that the model shown in (1) is not a collection of individual models (i.e., not a separate model for each provider), but rather a single model with high-dimensional parameters that requires simultaneous estimation of thousands of provider-specific effects/parameters (γ and β). For example, for profiling dialysis facilities the dimension of γ = (γ1,…,γF) is > 6,000 dialysis facilities across the U.S., and the dimension of β is ~ 40. Standard estimation (e.g., maximum likelihood) and software fail; thus, He et al. (2013) proposed a feasible estimation method based on an alternating one-step Newton-Raphson algorithm that iterates between estimation of β and γ.

The summary performance index for each provider used in practice, which incorporates the patient-level risk factors (the Zij's), is the standardized readmission ratio (SRR). For FE model (1), given the provider and patient case-mix effect estimates, denoted by γ̂i and β̂, respectively, the estimated SRR for provider i is
SRRi = Σj p̂ij / Σj p̂ij(γ̂M),  (2)

where p̂ij = g⁻¹(γ̂i + β̂′Zij) is the estimated probability of readmission for patient j in provider i and p̂ij(γ̂M) = g⁻¹(γ̂M + β̂′Zij). The aggregate parameter γ̂M in the denominator is taken to be the median of the γ̂i. Thus, the numerator of SRRi is the expected total number of readmissions for provider i, and the denominator is the expected total number of readmissions for an "average" facility (taken over the population of all providers), adjusted for the particular case-mix of the same patients in provider i. Note that SRRi estimates the true quantity SRRi = Σj pij / Σj pij(γM).
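To make the SRR computation in (2) concrete, the following is a minimal Python sketch (the authors provide R programs in the Supplementary Materials; the function and variable names here are ours and purely illustrative):

```python
import numpy as np

def expit(x):
    # inverse logit: g^{-1}(x) = exp(x) / {1 + exp(x)}
    return 1.0 / (1.0 + np.exp(-x))

def srr(gamma_i, gamma_med, beta, Z_i):
    """SRR_i = sum_j p_ij(gamma_i) / sum_j p_ij(gamma_med), where
    p_ij = expit(gamma + beta' Z_ij) and Z_i is the n_i x r case-mix matrix."""
    eta = Z_i @ beta
    num = expit(gamma_i + eta).sum()    # expected readmissions for provider i
    den = expit(gamma_med + eta).sum()  # expected readmissions for an "average" provider
    return num / den
```

By construction, a provider whose effect equals the national median (γ̂i = γ̂M) has SRR = 1, and the same patients' case-mix appears in both the numerator and denominator.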
ESTIMATION AND INFERENCE PROCEDURES
In addition to the challenge of high-dimensional parameters, compounding difficulties are encountered in the low information context where the outcome is sparse, resulting in providers with few readmissions or even none. For very small providers with few patients, there is very little information to assess performance. In extreme cases of providers with no or very few readmissions, the FE estimation method [15] leads to unstable estimates for those providers. Thus, in the low information context, we propose a correction to the FE estimates of provider-specific effects.
FE Model Estimation
To describe our proposed corrected FE estimation for provider-specific effects, we first set the notation for the likelihood of the FE model (1) and briefly summarize the alternating Newton-Raphson algorithm proposed by He et al. [15]. For the FE model (1), pij = g⁻¹(γi + β′Zij), and the likelihood function is given by

L(γ, β) = ∏i ∏j exp{(γi + β′Zij)yij} / {1 + exp(γi + β′Zij)}.  (3)

Because direct maximization of (3) is not feasible with standard methods when F is large (e.g., F ~ 6,000), He et al. (2013) proposed an effective iterative algorithm that alternates between estimation of γ given β and estimation of β given γ using one-step Newton-Raphson updates. More precisely, estimation of the high-dimensional parameters (γ, β) is feasible since the likelihood (3) can be written as L(γ, β) = ∏i Li(γi, β), where Li(γi, β) = ∏j exp{(γi + β′Zij)yij}/{1 + exp(γi + β′Zij)} is the contribution of provider i. Thus, given β, each γi can be estimated via a Newton-Raphson procedure that depends on only one variable in the maximization of Li(γi, β). Briefly, the estimation procedure proposed by He et al. (2013) is as follows.

(i) Set the initial values β(0) and γ(0) of β and γ, respectively.

(ii) The (m + 1)th maximization step for β, given γ(m), is β(m+1) = β(m) + Iβ⁻¹(γ(m), β(m)) Uβ(γ(m), β(m)), where Uβ = ∂log L/∂β and Iβ = −∂²log L/∂β∂β′.

(iii) The (m + 1)th maximization step for γi, given β(m+1), is γi(m+1) = γi(m) + Ii⁻¹(γi(m), β(m+1)) Ui(γi(m), β(m+1)), where Ui = ∂log Li/∂γi and Ii = −∂²log Li/∂γi².

(iv) The above steps are repeated until convergence, defined by ‖β(m+1) − β(m)‖ < ε and ‖γ(m+1) − γ(m)‖ < ε for a suitable norm, where ε is a prespecified tolerance level. Denote the final uncorrected provider-specific estimates as γ̂iU.

Explicit expressions for Uβ, Iβ, Ui, and Ii are given in He et al. (2013) and are provided here for convenience: Uβ = Σi Σj Zij(yij − pij), Iβ = Σi Σj pij(1 − pij)ZijZij′, Ui = Σj (yij − pij), and Ii = Σj pij(1 − pij). Programs in R, sample data, and a tutorial are provided in the online Supplementary Materials. In our implementation, we choose β(0) = 0 and γi(0) = log{p̃i/(1 − p̃i)}, where p̃i = (Σj yij + 0.5)/(ni + 1) is the Jeffreys' prior estimated proportion for facility i (i.e., the posterior mean of a Beta(Σj yij + 0.5, ni − Σj yij + 0.5) distribution).
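The alternating scheme above can be sketched compactly in Python. This is a hedged illustration only (the authors' implementation is in R, and for brevity we use a full Newton solve for β rather than the literal one-step update); all names are ours:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_fe(Y, Z, max_iter=200, eps=1e-6):
    """Alternating Newton-Raphson for model (1).
    Y, Z: lists over providers; Y[i] is (n_i,), Z[i] is (n_i, r)."""
    F, r = len(Y), Z[0].shape[1]
    beta = np.zeros(r)
    # gamma_i^(0): logit of the Jeffreys-corrected event proportion per provider
    gamma = np.array([np.log((y.sum() + 0.5) / (len(y) - y.sum() + 0.5)) for y in Y])
    for _ in range(max_iter):
        # Newton step for beta given gamma (score U_beta, information I_beta)
        U, I = np.zeros(r), np.zeros((r, r))
        for i in range(F):
            p = expit(gamma[i] + Z[i] @ beta)
            w = p * (1 - p)
            U += Z[i].T @ (Y[i] - p)
            I += (Z[i] * w[:, None]).T @ Z[i]
        beta_new = beta + np.linalg.solve(I, U)
        # Newton step for each gamma_i given beta: U_i / I_i
        gamma_new = gamma.copy()
        for i in range(F):
            p = expit(gamma[i] + Z[i] @ beta_new)
            gamma_new[i] += (Y[i] - p).sum() / max((p * (1 - p)).sum(), 1e-10)
        done = max(np.abs(beta_new - beta).max(), np.abs(gamma_new - gamma).max()) < eps
        beta, gamma = beta_new, gamma_new
        if done:
            break
    return gamma, beta
```

Note that for a provider with zero events the γi update keeps drifting downward, mirroring the instability discussed in the text; the correction of Section 3.2 addresses exactly this case.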
Corrected Estimation of Provider Effects
As described earlier, estimation of the provider effects γi in the FE model can be unstable for some providers in the low information context. Thus, we consider an approach to "correct" or stabilize the FE estimates by adapting the Firth correction for (standard) logistic regression [16, 17] to the high-dimensional FE model (1). Recall that for the standard (non-hierarchical data) logistic regression model with N independent units, j = 1,…,N, Pr(Yj = 1 ∣ Zj, θ) = exp(θ′Zj)/{1 + exp(θ′Zj)}, where θ = (θ1,…,θp)′ are regression coefficients for covariates Zj = (Zj1,…,Zjp)′. Firth's modified score equations [16], which reduce small-sample bias in estimation, are Ur*(θ) ≡ Ur(θ) + 0.5 trace[I(θ)⁻¹{∂I(θ)/∂θr}] = 0, for r = 1,…,p, where Ur(θ) = ∂log L/∂θr, I(θ) is the information matrix, and L = L(θ) denotes the likelihood. This is equivalent to using a penalized likelihood L*(θ) = L(θ)∣I(θ)∣1/2 [17], where the penalty term ∣I(θ)∣1/2 is equivalent to Jeffreys' prior [18]. Applying this to logistic regression yields the modified estimation equations

Σj {yj − πj + hj(1/2 − πj)}Zjr = 0, for r = 1,…,p,

with hj the jth diagonal element of the "hat" matrix H = W1/2Z(Z′WZ)⁻¹Z′W1/2, where W = diag{π1(1 − π1),…,πN(1 − πN)} and Z denotes the N × p data matrix. For binary outcomes with small sample sizes, Firth's logistic regression has become a standard approach to reduce bias in the estimated regression coefficients.

We adapt this penalized estimation to the high-dimensional FE model (1) to correct for unstable estimation of γi for providers with low information. We first note that β can be precisely estimated because it is based on data from all providers; therefore, penalization of the patient-level risk factors is unnecessary. Direct application of Firth's modified score to penalize γ = (γ1,…,γF) is not feasible for FE profiling model (1) due to the challenge of calculating the score penalties. These are obtained via the diagonals of the N × N hat matrix, which in dialysis population applications is of the order N ~ 500,000 or larger; N is many orders of magnitude larger for profiling applications in the general population. However, estimating γ with Firth's correction, for a fixed β, is equivalent to sequentially estimating each γi individually, for a fixed β, using Firth's correction. This is seen as follows. For a fixed β, the hat matrix used in the estimation of γ with Firth's correction is Hγ = W1/2X(X′WX)⁻¹X′W1/2, where W = W1 ⊕ … ⊕ WF, X = X1 ⊕ … ⊕ XF, Wi = diag{pi1(1 − pi1),…,pini(1 − pini)} are provider-specific weight matrices, Xi are ni × 1 provider-specific design matrices of ones, and ⊕ denotes the matrix direct sum operator; e.g., A ⊕ B is the block diagonal matrix [A, 0; 0, B]. As shown in the Supplementary Appendix, Hγ = H1 ⊕ … ⊕ HF, where Hi = Wi1/2Xi(Xi′WiXi)⁻¹Xi′Wi1/2. Thus, the diagonal of Hγ may be obtained sequentially via the diagonals of Hi for each provider i. The ith provider hat matrix reduces to Hi = Wi1/2Xi(Σj wij)⁻¹Xi′Wi1/2, where wij = pij(1 − pij), so that the jth diagonal element of Hi is hij = wij/Σk wik. Thus, for a fixed β, the estimation of γ using Firth's correction reduces to a sequence of single-parameter estimations of γi, penalizing the score Ui using the weights hij. More specifically, the provider-specific penalized score equations are

Ui*(γi) = Σj {yij − pij + hij(1/2 − pij)} = 0, for i = 1,…,F.

We propose a simple correction that adjusts the estimates of the provider-specific effects γi from Section 3.1 using the modified score Ui*. More precisely, first, β is fixed at the estimate resulting from Section 3.1, namely β̂U. The provider effects γi are then reestimated using the estimation procedure outlined in Section 3.1 with the following modifications. In Step (i), β(0) is fixed at β̂U and γi(0) is set accordingly; note that when β(0) is set to the zero vector, the initial value γi(0) reduces to the value previously noted in Step (i) of Section 3.1. In Step (ii), β(m+1) is set equal to β(m); in other words, β is no longer estimated. Finally, the score in Step (iii) is modified by replacing Ui with Ui*.
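The single-parameter penalized update can be sketched as follows. This is a hedged Python illustration of the corrected γi step with β held fixed (the authors' code is in R; names are ours):

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def gamma_firth(y, eta, gamma0=0.0, max_iter=100, eps=1e-8):
    """Solve the penalized score equation
    U*_i(gamma) = sum_j {y_ij - p_ij + h_ij(1/2 - p_ij)} = 0,
    with h_ij = w_ij / sum_k w_ik and w_ij = p_ij(1 - p_ij).
    y: outcomes for provider i; eta = Z_i @ beta_hat (fixed offsets)."""
    g = gamma0
    for _ in range(max_iter):
        p = expit(g + eta)
        w = p * (1 - p)
        h = w / w.sum()                          # diagonal of provider hat matrix
        score = np.sum(y - p + h * (0.5 - p))    # Firth-penalized score U*_i
        info = w.sum()                           # information I_i in gamma_i
        step = score / info
        g += step
        if abs(step) < eps:
            break
    return g
```

Unlike the unpenalized update, the estimate stays finite even when a provider has zero events: with ni patients, no events, and eta = 0, the penalized score vanishes at p = 0.5/(ni + 1), a Jeffreys-type shrunken proportion.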
Inference: Identifying Extreme Providers
In profiling, one of the main interests is to identify/flag providers that deviate significantly from the national norm (e.g., national average). The current public policy in the U.S. penalizes providers that perform significantly worse (W) than the national standard (SRR > 1). Thus, in practice, the goal is to flag/identify providers as W or ND relative to the national standard (SRR not different from 1). Better (B) providers (SRR < 1) are neither penalized nor incentivized.

First, note that for a provider with an adjusted event rate that does not differ from the national norm, γi = γM, which implies SRRi = 1. When SRRi > 1 or SRRi < 1, the event rate for provider i is greater than or less than the national norm, respectively. Thus, testing the null hypothesis H0 : γi = γM is of interest, and a test statistic is

Ti = Σj {yij − p̂ij(γ̂M)} / [Σj p̂ij(γ̂M){1 − p̂ij(γ̂M)}]1/2,

where p̂ij(γ̂M) = g⁻¹(γ̂M + β̂′Zij) is an estimate of pij under the null. Simultaneously testing the null hypothesis for thousands of providers is computationally expensive. However, one can take advantage of the fact that β and γM can be estimated from the large combined data across all providers. Hence, these parameters are estimated once and held fixed throughout the proposed algorithm below, which is based on resampling responses under the null hypothesis. Since the global parameters β and γM are fixed, model fitting to the resampled data only requires estimation of the provider-level effects γi. This reduces the computational burden substantially, since each γi is estimated using only the data from provider i. The steps of the procedure for each provider i are as follows.

1. Draw B samples, where each observation yij(b) is drawn independently from a Bernoulli distribution under the null: yij(b) ~ Bernoulli{p̂ij(γ̂M)}, for b = 1,…,B. (We used B = 500.)

2. Calculate the test statistics Ti(b) for the datasets generated/simulated under the null, where estimation of γ̂i(b) only involves steps (iii)-(iv) in Section 3.1 for the uncorrected FE model, since β is fixed. For the correction method, the estimation proceeds as described in Section 3.2; that is, the corrected estimation algorithm is applied to the bth dataset to obtain γ̂i(b).

3. A nominal two-sided p-value for the ith provider, Pi, is calculated as

Pi = 2 min{B⁻¹ Σb I(Ti(b) ≤ Ti), B⁻¹ Σb I(Ti(b) ≥ Ti)},

where Ti is calculated based on the original/observed data and I(A) denotes the indicator function for event A.
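The resampling procedure above can be sketched per provider as follows. This is a hedged Python illustration (names ours); for simplicity the provider effect estimate itself plays the role of the test statistic, and `estimate_gamma` can be any single-provider estimator, e.g., the uncorrected Newton-Raphson or the Firth-corrected version:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def gamma_mle(y, eta, max_iter=50):
    # plain (uncorrected) Newton-Raphson for a single provider effect
    g = 0.0
    for _ in range(max_iter):
        p = expit(g + eta)
        info = max((p * (1 - p)).sum(), 1e-10)
        g += (y - p).sum() / info
    return g

def profile_pvalue(y, eta, gamma_med, estimate_gamma, B=500, seed=0):
    """Nominal two-sided p-value for H0: gamma_i = gamma_M via resampling
    under the null with beta and gamma_M held fixed (eta = Z_i @ beta_hat)."""
    rng = np.random.default_rng(seed)
    p_null = expit(gamma_med + eta)          # null readmission probabilities
    t_obs = estimate_gamma(y, eta)           # statistic from observed data
    t_null = np.empty(B)
    for b in range(B):
        y_b = rng.binomial(1, p_null).astype(float)   # null dataset b
        t_null[b] = estimate_gamma(y_b, eta)
    lo = np.mean(t_null <= t_obs)
    hi = np.mean(t_null >= t_obs)
    return min(1.0, 2.0 * min(lo, hi))       # two-sided empirical p-value
```

Because β and γM are never re-estimated, each of the B refits involves only one parameter per provider, which is what makes the procedure feasible for thousands of providers.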
SIMULATION STUDY DESIGN
We designed simulation studies to assess the performance of the uncorrected and corrected FE model estimation methods, mainly with respect to (A) estimation of provider-specific effects, γ’s and SRR’s; and (B) identification of extreme providers relative to a reference. Data were generated from the model
g(μij) = γ0 + γi + β′Zij,  (4)

with i = 1,…,F = 5,000 providers and β = (.25, .25, −.25, −.25, .5, .25, .25, .25, .25, −.25, −.25, −.25, .5, .5, .5)′. For the patient case-mix vector Zij, the dependence/correlation structure among variables was based on the observed correlations among patient-level variables in real USRDS data. More specifically, Zij* is generated from a multivariate normal distribution with mean zero and covariance Cov(Zij*) = V1/2RV1/2, where V is the diagonal matrix of variances and R is the correlation matrix. The first 5 covariates were taken to be continuous. The remaining 10 covariates, Zij6,…,Zij15, are binary variables, generated by thresholding the corresponding Zijr* so that the probabilities Pr(Zijr = 1) = E(Zijr) are equally spaced between 0.2 and 0.8 (for r = 6,…,15). The correlation matrix and standard deviations of the 15 variables are provided in the Supplementary Appendix.

For the provider effects, 2.5% of providers were under-performers (W: "worse") and 2.5% were over-performers (B: "better"), whose effects γi were equally spaced in the intervals [0.4, 1.0] and [−1.0, −0.4], respectively. The remaining 95% of providers, with effects not different (ND) from the reference, had effects generated from a N(0, σ2) distribution with σ2 = 0.2². Note that a constant γ0 has been added to simulation model (4) to conveniently control the baseline rate of readmission (outcome data sparsity); the baseline readmission rates considered were 20%, 10%, 5%, and 3%, corresponding to γ0 = log(1/13.5), log(1/33), log(1/73), and log(1/126), respectively. For each baseline readmission rate setting, 200 datasets were generated, and the estimation (Section 3) and inference procedures (Section 3.3) were applied to each simulated dataset.

The provider volumes in each generated dataset ranged from a minimum of 48 to a maximum of 195 patients on average, similar to real USRDS data in applications (e.g., see [14]). More specifically, the number of patients in each provider was generated from a truncated Poisson distribution following He et al. (2013). This process mimics the sparse data structure of dialysis facilities (providers) in practice.
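A simplified version of this data-generating process can be sketched in Python. This is a hedged illustration: the cross-covariate correlation structure and the exact truncated-Poisson provider-size parameters from the paper are omitted (independent covariates and a Poisson(100) size truncated below at 10 are our illustrative assumptions), and the provider count is reduced:

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(F=200, gamma0=np.log(1 / 33), seed=0):
    rng = np.random.default_rng(seed)
    k = max(int(0.025 * F), 1)
    # 2.5% worse in [0.4, 1.0], 2.5% better in [-1.0, -0.4], 95% N(0, 0.2^2)
    gamma = np.concatenate([
        np.linspace(0.4, 1.0, k),
        np.linspace(-1.0, -0.4, k),
        rng.normal(0.0, 0.2, F - 2 * k),
    ])
    beta = np.array([.25, .25, -.25, -.25, .5,
                     .25, .25, .25, .25, -.25, -.25, -.25, .5, .5, .5])
    prevalence = np.linspace(0.2, 0.8, 10)   # Pr(Z_r = 1), equally spaced
    data = []
    for i in range(F):
        n_i = max(rng.poisson(100), 10)      # illustrative truncated Poisson size
        Zc = rng.normal(size=(n_i, 5))       # 5 continuous covariates
        Zb = (rng.random((n_i, 10)) < prevalence).astype(float)  # 10 binary covariates
        Z = np.hstack([Zc, Zb])
        p = expit(gamma0 + gamma[i] + Z @ beta)   # model (4)
        y = rng.binomial(1, p).astype(float)
        data.append((y, Z))
    return gamma, data
```

Varying `gamma0` across the values given in the text regulates the overall outcome sparsity, which is the key experimental factor in the simulation study.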
RESULTS
Estimation of Provider-Specific Effects and SRRs
The results for provider-specific estimates of the γi for the 125 (2.5%) under-performers (γi > 0) and 125 over-performers (γi < 0), for the case of a 3% overall event rate (most sparse), are provided in Figure 1A, where averages of the γi estimates over 200 simulated data sets are plotted. As expected, under this extremely low information context, provider effect estimates are unstable for the uncorrected FE method. However, the unstable estimates belong mainly to the over-performers (γi < 0) with low or zero events, leading to "explosion" of the estimates (Figure 1A). It is important to note that these unstable estimates are in the direction of the true effect (negative direction for negative γi). Also as expected, estimates for under-performers (γi > 0) are less unstable and more on target for the uncorrected FE method. The corrected estimation approach, which adapts Firth's modified score equation to the FE model, largely eliminates the instability, and estimates are more on target for the true γi (Figure 1B).
Figure 1:
Estimates of provider-specific effects, γ < 0 (over-performers) and γ > 0 (under-performers), (A) for the uncorrected high-dimensional fixed effects (FE) model and (B) for the corrected method, at a high level of outcome data sparsity (3%). Displayed is the average of each γ estimate over 200 simulated data sets.
Figure 2 (left column) shows estimates of the γi for increasing percentages of overall events, from 3% to 20%, for the uncorrected FE method. Clearly, the frequency of unstable estimates for γi < 0 decreased with increasing overall events, although unstable estimates are apparent even at a 10% event rate. However, the magnitude of the unstable estimates declined quickly as the overall event rate increased (e.g., at 20%).
Figure 2:
Uncorrected (left column) and corrected (right column) estimation of provider-specific effects, γ's, for 3%, 5%, 10%, and 20% overall outcome event rates. Displayed is the average of each γ estimate over 200 simulated data sets.
Next, we summarize the results for estimation of the provider-specific SRRs. As described in Section 2, the SRR is the summary performance index for each provider used in practice, which incorporates the patient-level risk factors Zij and their estimated effects β̂. More specifically, given the provider and patient case-mix effect estimates for each approach, denoted by γ̂i* and β̂*, respectively, the estimated SRR for provider i is
SRRi* = Σj p̂ij* / Σj p̂ij*(γ̂M*),  (5)

where p̂ij* = g⁻¹(γ̂i* + β̂*′Zij), p̂ij*(γ̂M*) = g⁻¹(γ̂M* + β̂*′Zij), and the superscript * denotes the uncorrected (U) or corrected (C) approach. Figure 3 (left column) summarizes the uncorrected FE model estimates of SRR for 3% to 20% overall outcome events. We note that even though specific estimates with γi < 0 were unstable for highly sparse data (e.g., at 3%-10%; Figure 2), the corresponding estimates of SRRs are stable overall and target the true SRRs, because the SRR incorporates patient characteristics and their effects as well as provider-specific effects, as shown in (5); see Figure 3 (left column). Average SRR estimates for the corrected estimation performed well and are summarized in Figure 3 (right column). However, we note that for extremely sparse data (e.g., at 3%), the uncorrected approach slightly overestimates SRRs while the corrected approach slightly underestimates SRRs for truly worse providers (true SRR > 1; Figure 4, top). For truly better providers (true SRR < 1), both methods slightly overestimate the true SRRs, although more so with the corrected method. Differences in SRR estimates between the two methods become negligible as the overall percentage of events increases (e.g., at 20%; Figure 4, bottom).
Figure 3:
Uncorrected (left column) and corrected (right column) estimates of standardized readmission ratios (SRRs) for 3%, 5%, 10%, and 20% overall outcome event rates. Displayed is the average of each SRR estimate over 200 simulated data sets.
Figure 4:
Estimation of standardized readmission ratios (SRRs) at 3% and 20% overall outcome event rates for the corrected and uncorrected methods, among the 125 better (B) and 125 worse (W) providers. Displayed is the average of each SRR estimate over 200 simulated data sets.
Flagging Extreme Providers/Facilities
The overall abilities of the uncorrected and corrected FE methods to identify extreme providers are assessed in terms of sensitivity (SEN), the correct identification of providers that under-perform (W: "worse") or over-perform (B: "better") relative to the reference standard (e.g., national reference), and specificity (SPEC), the correct identification/flagging of providers whose performances are not different from the reference standard (ND: "not different"). We note that provider assessment policies in practice focus on identifying under-performing (W) providers, as those are tied to payment policy or regulatory goals. Figure 5 summarizes the distribution of SEN-W, SEN-B, and SPEC for varying levels of outcome sparsity, ranging from 3% to 20% overall outcome rate. For extremely sparse data of 3% and 5%, the uncorrected method has the highest sensitivity to detect under-performing providers (higher SEN-W; left column). This is expected since, for truly worse providers, there are more outcome events; see Figure 5 (left column). SEN-W rates were similar between the uncorrected and corrected methods at a 20% overall outcome rate.
Figure 5:
Overall performance of the uncorrected and corrected estimation methods in identifying truly worse providers (sensitivity, worse), truly better providers (sensitivity, better), and providers not different from the reference (specificity), across data sparsity of 3%, 5%, 10%, and 20% overall outcome event rate. Displayed are average rates over 200 simulated data sets.
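The operating characteristics above reduce to simple conditional proportions over the flagged labels, as in this small Python sketch (names ours; labels 'W', 'B', 'ND' follow the text):

```python
import numpy as np

def operating_characteristics(truth, flags):
    """truth, flags: arrays of per-provider labels 'W', 'B', or 'ND'.
    Returns (SEN-W, SEN-B, SPEC)."""
    truth, flags = np.asarray(truth), np.asarray(flags)
    sen_w = np.mean(flags[truth == 'W'] == 'W')    # truly worse flagged worse
    sen_b = np.mean(flags[truth == 'B'] == 'B')    # truly better flagged better
    spec = np.mean(flags[truth == 'ND'] == 'ND')   # truly ND left unflagged
    return sen_w, sen_b, spec
```

In the simulation study, the truth labels come from the generated γi (tail intervals vs. the N(0, 0.2²) bulk) and the flags from the p-values of Section 3.3 at a chosen significance level.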
Because the event counts are zero or low for truly better providers in the context of sparse outcome data, the unstable/poor estimation of provider effects from the uncorrected method results in lower sensitivity to detect over-performing providers (lower SEN-B) compared to the corrected method (Figure 5, middle column). However, note that the nominal SEN-B rates are low overall, as expected, compared to nominal SEN-W rates. This is expected in the low information context, since B providers have fewer readmissions, making it difficult to correctly identify them when the outcome is sparse. SPEC rates were high and similar between the uncorrected and corrected methods (Figure 5, right column).

As mentioned earlier, the main current objective of flagging "extreme" providers in profiling analysis focuses on identifying W providers and ND providers. Providers that over-perform (B providers) are not relevant to current payment policy or regulatory objectives. Therefore, under this regime, it is of interest to ensure that there are no (or a low rate of) false negatives that misclassify/flag a B provider as a W provider (FNB→W). Indeed, there are none, i.e., FNB→W = 0 across all levels of data sparsity (Figure 6), which is not surprising since W and B providers lie on opposite tails of the distribution of providers. This holds for the uncorrected FE model (as well as the corrected estimation method), since the unstable estimates of the γi are in the same (negative) direction as the true γi (as pointed out earlier), despite the instability of the provider-specific estimates. However, false negative classification of a B provider as an ND provider (FNB→ND) is not uncommon. Although FNB→ND decreases with increasing percentage of overall outcome events, as expected, FNB→ND is common in the extremely low information context (e.g., 3%, 5% overall event rate; Figure 6).
We emphasize that a high FNB→ND rate does not affect current public policy, because over-performers are not incentivized and are considered "ND" providers anyway. Therefore, the FE profiling model, even uncorrected, is still useful in the low information context with respect to the current public policy goal of identifying W and ND providers. However, if public policy evolves to also incentivize better performance, then novel methods able to correctly identify B providers with high sensitivity will be needed.
Figure 6:
Rates of false negatives for incorrectly flagging better (B) providers as worse (W) providers (FNB→W) and for incorrectly flagging B providers as providers not different (ND) from the reference (FNB→ND), for the uncorrected and corrected estimation methods, across data sparsity of 3%, 5%, 10%, and 20% overall outcome event rate. Displayed are average rates over 200 simulated data sets.
DISCUSSION
Seminal works by Kalbfleisch and Wolfe [8] and He et al. [15] show that FE model estimates have smaller error for outlier providers whose effects are exceptionally large or small, and these extreme providers are precisely the ones we wish to identify in profiling analysis. High-dimensional FE models were subsequently used to assess the performance of dialysis facilities (providers) with respect to all-cause hospital readmissions, which are frequent outcomes in dialysis patients. Our own works have since elucidated several operating characteristics of the FE profiling models [9, 10], which have been applied to assess the performance of dialysis facilities with respect to all-cause 30-day readmissions [11, 12, 14]. However, to date no work has examined the performance of FE models in the low information context where the outcome is sparse. The current study starts to fill this gap in knowledge.

Several findings from this study have important practical implications in the low information context. First, even though the provider-specific estimates with true γi < 0 (truly B providers) are unstable, they are in the same direction as the true effects, and the instability has only a moderate effect on the estimation of SRRs; i.e., SRRs are reasonably well estimated, and they are the relevant quantities used in practice, as they incorporate patient case-mix. However, if the provider-specific estimates γ̂i are themselves of interest, then our proposed correction method can be used to provide better estimates, especially for uncorrected estimates that are substantially less than zero. Second, sparse outcome data impacts more directly the inference for B providers, because true over-performers are the ones that contribute no or few events (readmissions); however, this "deficit" in estimation does not greatly impact the identification of W providers (under-performers) and ND providers, which is the current focus of profiling in practice.
Development of novel methods that have better sensitivity for flagging B providers would be useful when public policies or regulatory goals incorporate an incentive for over-performers.
Table 1:
Correlation Matrix of Z1,…,Z15 and their Standard Deviations