Literature DB >> 36039373

Relative contribution of pharmacists and primary care providers to shared quality measures.

Benjamin Y Urick1,2, Shweta Pathak3, Seth D Cook4, Valerie A Smith5,6,7, Patrick J Campbell8, Mel L Nelson9, Lee Holland10, Matthew K Pickering11.   

Abstract

Background: Alternative payment models are common for both primary care providers and pharmacies. These models rely on quality measures to determine reimbursement, and pharmacists and primary care providers can contribute to performance on a similar set of medication-related measures. Therefore, payers need to decide which provider to incentivize for which measures when both are included in alternative payment models.
Objectives: To explore the relative contribution of pharmacies and primary care group practices to a range of quality measures.
Methods: This retrospective cross-sectional study used Medicare Part A, B, and D claims for a 20% random sample of Medicare beneficiaries for 2014-2016. Eight quality measures were selected from the Merit-based Incentive Payment System and Medicare Part D Stars Ratings. Measures included medication adherence measures, appropriate prescribing measures such as high-risk medication use in the elderly, statin use in persons with diabetes (SUPD), and others. The residual intraclass correlation coefficient (RICC) was used to estimate the contribution of pharmacists and primary care providers to measure variation. To estimate the relative contribution across provider types, the pharmacy RICC was divided by the group practice RICC to yield a RICC ratio.
Results: Due to varying measure eligibility requirements, the number of patients per measure ranged from 179,430 to 2,226,129. Across all measures, the RICC values were low, ranging from 0.013 for SUPD to 0.145 for adult sinusitis. Adherence measures had the highest RICC ratios (1.15-1.44), and the annual influenza vaccination measure had the lowest (0.56).
Discussion and conclusions: The relative contributions of pharmacists and primary care providers vary across quality measures. As payers design payment models with measures to which pharmacists and primary care providers can contribute, the RICC ratio may be useful in aligning incentives to the providers with the greatest relative contributions. Additional research is needed to validate this method and extend it to additional sets of providers.
© 2022 Published by Elsevier Inc.


Keywords:  Community pharmacies; Medication adherence; Outcome assessment; Primary health care

Year:  2022        PMID: 36039373      PMCID: PMC9418985          DOI: 10.1016/j.rcsop.2022.100165

Source DB:  PubMed          Journal:  Explor Res Clin Soc Pharm        ISSN: 2667-2766


Introduction

Poor healthcare performance and high spending growth rates have created pressure to shift from volume-driven, fee-for-service payment models to value-based and alternative payment models that incentivize spending reductions and quality improvements. Across the range of value-based and alternative payment models, providers who perform better on quality measures generally receive greater financial bonuses or a larger share of savings than providers who perform poorly. These models are becoming the dominant payment method for physician and hospital services in the United States and are increasingly common for payments to community pharmacies. As the use of value-based and alternative payment models expands, there is a need to create common core measure sets across various payment models. As measures are harmonized, different provider types may be subject to the same measures, creating redundancy of incentives if multiple providers can contribute to measure performance. Insurers may hesitate to create programs with redundant measures, as increases in performance would result in payment to multiple types of providers, in a sense “double dipping” on measure performance. 
For example, many quality measures used to support the Merit-based Incentive Payment System (MIPS) program for physician services through Medicare are medication-related, and community pharmacists have sufficient training and opportunity to impact quality measures directed at non-pharmacist healthcare providers.8, 9, 10 This is especially true for measures such as medication adherence, which is used in many value-based payment models and has been well-studied in the community pharmacy setting.11, 12, 13, 14, 15, 16 Additional studies have demonstrated that pharmacists' impact can extend beyond adherence to measures including annual influenza vaccination rates, hemoglobin A1c control, high-risk medication use in elderly patients (HRM), and statin use in persons with diabetes (SUPD). Because pharmacists and primary care providers can contribute to a similar set of medication-related measures, payers seeking to avoid redundancy of measure incentives need to decide which provider to incentivize when both are included in alternative payment models. The differential impact of these provider classes on shared measures may be impossible to assess experimentally, as each provider type delivers unique quality-related services as part of its standard of care, and denying patients access to these services would likely be unethical. Additionally, experiments to determine the differential response of the two provider types to incentives would require isolating attributed patient populations to avoid cross-contamination, insurer cooperation in implementing quality improvement incentives across only part of their pharmacy and physician networks, and great expense to cover the cost of incentives for all attributed patients. Statistical methods of variance decomposition provide an alternative means of assessing provider influence on measures by comparing provider-to-provider variation to total measure variation. 
The stronger the influence of a given provider type on a measure, the greater the provider-to-provider variation, as some providers outperform others and create separation across providers (assuming a measure is not topped out). While this evidence lacks the strength of experimental or quasi-experimental studies, it can serve as a practical tool for evaluating provider-level contributions to measure scores within a population. One method for calculating the provider-level contribution to quality measures is the residual intraclass correlation coefficient (RICC), which estimates the extent to which a given type of provider to which patients are attributed (e.g., pharmacy or primary care provider) contributes to total variation in measure scores. RICCs, and intraclass correlation coefficients more generally, are frequently used in measure development to estimate the ability of a given measure to detect true performance among the class of providers whose performance is being measured. A common approach for deriving intraclass correlation coefficients is attributing providers to group practices and using hierarchical (or mixed effects) regression models to calculate practice-specific random intercept values. In the quality measure context, this information is often used to derive practice-specific reliability estimates, which are averaged to produce an intraclass correlation coefficient that can provide scientific evidence of measure reliability. 
Similar to the calculation of intraclass correlation coefficients for measure testing, the RICC takes the estimate of practice-level variance (the variance of the random intercept) and divides it by the sum of practice and patient variance. In this sense, the RICC decomposes variance in an outcome into two terms, a practice-specific and a patient-specific term, and estimates the share of total variance attributable to a given class of provider whose performance is being measured. Importantly, this does not yield a provider-specific estimate of impact; rather, this statistic measures the influence of the class of providers on measure variation by dividing the practice-to-practice variance estimate by total measure variance. This allows for a generalized estimate of provider influence, defined as the contribution of a given class of providers to overall measure variation. This statistic is more interpretable than other intraclass correlation coefficients, which are used to calculate reliability and are less directly applicable to understanding the influence of providers on quality measures. Larger RICC values indicate greater variation in performance scores across providers and imply greater provider control over the measure, thus creating more opportunity for performance improvement as lower performers adopt the successful practices of higher performers. Small RICCs indicate lower variation across providers, suggesting either that providers have little control over the measure and all observable variation is due to statistical noise, or that providers contribute substantially to the measure but all perform equally well. Understanding which providers contribute more to quality measure performance may be useful when designing value-based arrangements. 
RICC estimates for a given group of providers, and the RICC ratio between provider types that contribute to a given measure, could inform which measures are incorporated in value-based payment models. If RICCs are low for all provider types, the measure may have to be reconsidered, as providers may not have a substantial opportunity to improve performance. If, however, one group has a greater RICC than the other, this could suggest that greater incentives be targeted at the group of providers with the larger RICC, since this group appears to have a greater contribution to the measure and can therefore respond to incentives and improve overall measure performance. To explore the feasibility of this approach, this study calculated RICCs for pharmacies and primary care group practices across a range of quality measures to explore variation in absolute and relative RICCs. This work can inform payers considering aligning quality measures across value-based payment models and can be extended to a range of quality measures and provider types.

Methods

Data for this project came from a 20% sample of Medicare data from 2014 to 2016. The data consisted of Medicare Parts A, B, and D administrative claims and associated beneficiary summary files. The 2015 data comprised the primary dataset used to assess the quality measures in this analysis; the 2014 and 2016 data were needed for two specific quality measures that require multi-year data. Patients were included in this study if they were eligible for Medicare Parts A, B, and D; were living and aged at least 65 for the entirety of 2015; were not admitted to a skilled nursing facility and did not receive hospice care; had at least one fill of a prescription drug with a days' supply of 14 or more; and had at least one carrier claim under Medicare Part B.

Measure selection

Four MIPS measures and four Medicare Part D Star Ratings measures were selected for this study (eAppendix, Table A1). To facilitate MIPS measure selection, the Pharmacy Quality Alliance (PQA) developed and validated a tool, the Quality Measure Impact Tool – Community Pharmacy (QMIT-CP), to separate measures by the degree to which pharmacists were likely to contribute to measure performance. The goal was to identify measures with a high likelihood of contribution by community pharmacists and those with a low or moderate likelihood. Additional descriptions of these measures and their adaptation to this study's dataset can be found in the online eAppendix.

Attribution and sample selection

To evaluate the relative performance of pharmacists and primary care providers on each measure, it was necessary to attribute patients to practices (e.g., community pharmacies or primary care group practices) for the respective provider types. For both provider types, every patient who met the overall eligibility criteria described above was attributed to a single pharmacy and/or primary care practice for all measures. Patient attribution for pharmacies was based on the pharmacy at which the patient filled 50% or more of their prescriptions with a days' supply of 14 days or longer. For primary care group practices, patients were attributed to the group practice generating the majority of charges for primary care services. Additional descriptions of attribution methods can be found in the eAppendix. Pharmacies and group practices were excluded from analysis if fewer than 10 patients were attributed for a given measure; alternative minimums of 20, 30, and 40 were also tested. This cutoff was necessary to ensure each measure included enough patients to support reliable inferences about provider performance. In addition to the attributed patient minimums, a maximum was applied at the 99.9th percentile of attributed patient counts for all pharmacies or group practices. This removed the highest-volume group practices, for which results were likely not generalizable, and pharmacies that were likely mislabeled as retail pharmacies when they were actually mail-order pharmacies. If a patient was attributed to a pharmacy or group practice that was removed from analysis due to an out-of-range attributed patient count, the patient was also removed.
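The attribution rules above can be sketched in code. This is a minimal illustration under stated assumptions, not the study's implementation: the function names, the tuple layout, and the simplified size filter (the 99.9th percentile maximum is omitted) are all hypothetical.

```python
from collections import Counter

def attribute_to_pharmacy(fills, min_days_supply=14, share_threshold=0.5):
    """Attribute each patient to the single pharmacy that filled >= 50% of
    their prescriptions with a days' supply of 14 or more.

    fills: iterable of (patient_id, pharmacy_id, days_supply) tuples.
    Returns {patient_id: pharmacy_id}; patients with no pharmacy meeting the
    share threshold are left unattributed.
    """
    qualifying = {}
    for patient, pharmacy, days in fills:
        if days >= min_days_supply:  # only fills with sufficient days' supply count
            qualifying.setdefault(patient, Counter())[pharmacy] += 1
    attribution = {}
    for patient, counts in qualifying.items():
        pharmacy, n = counts.most_common(1)[0]  # modal pharmacy for this patient
        if n / sum(counts.values()) >= share_threshold:
            attribution[patient] = pharmacy
    return attribution

def apply_minimum_size(attribution, min_patients=10):
    """Drop practices with fewer than min_patients attributed patients,
    removing their patients as well (the study's percentile-based maximum
    is omitted here for brevity)."""
    sizes = Counter(attribution.values())
    return {p: ph for p, ph in attribution.items() if sizes[ph] >= min_patients}
```

For example, a patient with two qualifying fills at pharmacy A and one at pharmacy B would be attributed to A (share 2/3), while a fill with a 7-day supply would be ignored entirely.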

Statistical analysis

Univariate statistics were used to describe patients attributed to either a pharmacy or a group practice. Characteristics examined included race, gender, end-stage renal disease (ESRD) status, low-income subsidy (LIS), location within a rural county as defined by the rural-urban classification code, count of Medicare Chronic Conditions Warehouse-defined chronic conditions, and age, defined categorically as 65–74, 75–84, and 85+. Once the attributed population was defined for each practice, the proportion of measure-eligible attributed patients who met the numerator criteria was calculated. This created a separate score for each pharmacy or group practice, and the means and distributions of these scores across provider types were also compared. Hierarchical logistic regression models with a logit link and a random intercept for attributed practice were created for each measure [SAS v9.4 (SAS Institute, Cary, NC) PROC GLIMMIX]. Across all models, the outcome of interest was a patient-level indicator of whether a measure-eligible attributed patient met the numerator criteria, and the risk adjustment variables comprised those described in the univariate statistics paragraph above. Of note, the age category was excluded from the SUPD and Diabetic Eye Exam models since the maximum age for inclusion in these measures was 75. To calculate the RICC, the practice-to-practice variance was divided by the total variance in the model (the sum of practice-level and patient-level variance; Eq. 1). Variance estimates were derived from the risk adjustment model, and greater detail regarding the estimation of variance components can be found in the eAppendix. The RICC statistic ranges from 0 to 1, where 0 would suggest either that providers contribute nothing to measure performance and 100% of the variation is due to patients, or that all providers perform equivalently. 
A value of 1 would suggest that all variation in the measure is due to the provider from which a patient receives care. In addition to measuring the contribution of a class of providers to measure variation, RICC values are also used to assess the reliability of a measure in accurately measuring provider performance. While RICC values depend greatly on the context and measures chosen, and there are no standard cutoffs for high or low RICCs, a study using similar data and methods found RICC values ranging from 0.008 to 0.013 when assessing the relationship between attributed pharmacy and performance on adherence measures.

Eq. 1: RICC = σ²_practice / (σ²_practice + σ²_patient)

Unadjusted and adjusted RICC values were calculated for each measure and each practice type. To calculate the RICC ratio, the pharmacy RICC was divided by the group practice RICC.
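Once variance components are in hand, Eq. 1 and the RICC ratio are simple to compute. The sketch below assumes the conventional latent-scale patient-level variance of π²/3 for a logit model; the study's exact variance estimation is described only in its eAppendix, so that default, like the function names, is illustrative rather than the authors' method.

```python
import math

# Conventional level-1 (patient) latent-scale variance for a logit model.
# This is an assumption for illustration; the study's variance estimation
# details are in its eAppendix.
LOGIT_RESIDUAL_VAR = math.pi ** 2 / 3

def ricc(practice_var, patient_var=LOGIT_RESIDUAL_VAR):
    """Eq. 1: share of total outcome variance sitting at the practice level."""
    return practice_var / (practice_var + patient_var)

def ricc_ratio(pharmacy_ricc, group_practice_ricc):
    """Ratio > 1 implies a greater relative pharmacy contribution;
    ratio < 1 implies a greater primary care group practice contribution."""
    return pharmacy_ricc / group_practice_ricc
```

For instance, a practice-level variance equal to the patient-level variance yields a RICC of 0.5, and a pharmacy RICC twice the group practice RICC yields a ratio of 2.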

Results

There were 2,530,062 Medicare enrollees who met all eligibility criteria for this study. After applying measure-specific eligibility criteria and attributed patient count requirements, the number of eligible patients per measure ranged from 179,430 for adult sinusitis to 2,226,129 for HRM (Table 1). All results shown are for a minimum attributed patient count of 10; statistical models for some measures with small numbers of eligible patients and providers either did not converge or yielded uninterpretable results as minimum attributed patient count requirements approached 40.
Table 1

Demographics of measure-eligible population.

Variable | Diabetic Eye Exam | Adult Sinusitis | Annual Flu Vaccination | SUPD | HRM | RASA Adherence | Statin Adherence | NIDM Adherence
--- | --- | --- | --- | --- | --- | --- | --- | ---
All Eligible Patients [N]^a | 296,365 | 176,430 | 2,108,796 | 222,320 | 2,226,129 | 1,089,554 | 1,156,931 | 259,383
Race (White) [%] | 85.7 | 93.8 | 91.1 | 86.1 | 91.0 | 90.1 | 91.2 | 87.5
Sex (Female) [%] | 49.3 | 65.5 | 58.8 | 48.6 | 58.8 | 55.5 | 53.5 | 50.4
ESRD [%] | 1.0 | 0.22 | 0.4 | 0.76 | 0.4 | 0.29 | 0.44 | 0.25
Low-income Subsidy [%] | 4.3 | 2.9 | 3.2 | 4.56 | 3.4 | 3.8 | 3.5 | 4.1
Rural [%] | 22.0 | 22.7 | 20.8 | 23.5 | 21.1 | 22.0 | 20.6 | 22.8
Numerator Attainment [%] | 75.6 | 83.3 | 60.2 | 76.5 | 13.1 | 82.6 | 78.1 | 80.0
Age [Mean (SD)] | 69.8 (3.1) | 72.4 (6.1) | 74.4 (7.2) | 69.7 (3.0) | 74.7 (7.2) | 74.5 (7.0) | 74.3 (6.9) | 73.7 (6.5)
Condition Count^b [Mean (SD)] | 5.1 (2.2) | 4.2 (2.4) | 4.1 (2.4) | 5.0 (2.2) | 4.0 (2.4) | 4.5 (2.3) | 4.6 (2.4) | 5.0 (2.2)

ESRD: End-stage Renal Disease; SUPD: Statin Use in Persons with Diabetes; HRM: High-risk Medications in the Elderly; RASA: Renin-angiotensin System Antagonists; NIDM: Non-insulin Diabetes Medications. Higher scores indicate greater quality for all measures except HRM, where lower scores are preferable.

^a Patients who are attributable to denominator-eligible pharmacies or primary care offices. In the case of Adult Sinusitis, the N is the total number of primary care visits for sinusitis; patients with more than one visit are counted more than once for the measure.

^b Condition count is the sum of conditions as defined by the Medicare Chronic Conditions Warehouse.

Differences in eligibility criteria across the quality measures resulted in variation in demographic characteristics (Table 1) and numbers of patients per measure (Table 2). Measure scores also varied between pharmacies and primary care group practices, but the difference in median values was less than 1% for all measures except adult sinusitis and SUPD, where the differences were 3% and 2%, respectively (Table 2).
Table 2

Comparison of eligibility and measure scores by type of provider.

Measure | Eligible Sites: Pharmacy | Eligible Sites: Group Practices | Eligible Patients: Pharmacy | Eligible Patients: Group Practices | Eligible Patients: Overlap | Score [Median (IQR)]: Pharmacy | Score [Median (IQR)]: Group Practices
--- | --- | --- | --- | --- | --- | --- | ---
Diabetic Eye Exam | 8324 | 5917 | 194,630 | 241,935 | 140,200 | 0.75 (0.65–0.83) | 0.75 (0.67–0.83)
Adult Sinusitis | 4658 | 4532 | 102,471 | 150,436 | 76,477 | 0.83 (0.75–0.92) | 0.86 (0.77–0.92)
Annual Flu | 43,535 | 29,027 | 1,930,803 | 2,025,644 | 1,874,479 | 0.60 (0.50–0.69) | 0.61 (0.48–0.71)
SUPD | 5391 | 4509 | 180,350 | 131,999 | 90,029 | 0.75 (0.67–0.83) | 0.77 (0.69–0.83)
HRM | 44,088 | 29,851 | 2,069,093 | 2,172,069 | 2,015,029 | 0.13 (0.08–0.18) | 0.13 (0.08–0.18)
RASA Adherence | 32,030 | 18,463 | 964,247 | 1,002,475 | 877,166 | 0.82 (0.75–0.88) | 0.83 (0.77–0.89)
Statin Adherence | 32,396 | 18,896 | 1,032,753 | 1,070,054 | 945,876 | 0.77 (0.70–0.84) | 0.78 (0.71–0.84)
NIDM Adherence | 6687 | 5182 | 209,800 | 161,474 | 111,891 | 0.80 (0.70–0.87) | 0.80 (0.74–0.86)

SUPD: Statin Use in Persons with Diabetes; HRM: High-risk Medications in the Elderly; RASA: Renin-angiotensin System Antagonists; NIDM: Non-insulin Diabetes Medication

RICC values varied substantially across measures and between the two practice types. The measure with the highest RICC for both pharmacies and primary care group practices was the Adult Sinusitis measure, with a RICC of 0.114 for pharmacies and 0.145 for primary care group practices (Table 3), suggesting that 11.4% of the variation in inappropriate use of antibiotics for sinusitis in 2015 was explained by the attributed pharmacy, while the attributed primary care group practice explained 14.5%. The smallest RICC value for pharmacies was for the SUPD measure at 0.013, and the smallest value for primary care group practices was for the RASA adherence measure at 0.015. The RICC ratio analysis found wide variation by measure, and risk adjustment tended to push ratios farther from 1 (Fig. 1). Annual flu vaccination, SUPD, diabetic eye exam, and adult sinusitis RICC ratios were below 1, indicating that variation in measure performance across pharmacies was less than variation in performance across primary care group practices. However, the RICC ratios for adherence measures were consistently above 1, particularly for the RASA and NIDM adherence measures, which were both near 1.4 after risk adjustment.
Table 3

Pharmacy and primary care group practice RICC statistics.

Measure | Unadjusted RICC: Pharmacy | Unadjusted RICC: Group Practice | Unadjusted Ratio | Adjusted RICC: Pharmacy | Adjusted RICC: Group Practice | Adjusted Ratio
--- | --- | --- | --- | --- | --- | ---
Diabetic Eye Exam | 0.037 | 0.055 | 0.667 | 0.038 | 0.059 | 0.653
Adult Sinusitis | 0.114 | 0.145 | 0.786 | 0.107 | 0.138 | 0.775
Annual Flu Vaccine | 0.064 | 0.108 | 0.591 | 0.057 | 0.102 | 0.555
SUPD | 0.013 | 0.020 | 0.689 | 0.013 | 0.019 | 0.664
HRM | 0.023 | 0.023 | 0.967 | 0.023 | 0.024 | 0.950
Statin Adherence | 0.020 | 0.018 | 1.111 | 0.017 | 0.015 | 1.150
RASA Adherence | 0.019 | 0.015 | 1.313 | 0.015 | 0.011 | 1.435
NIDM Adherence | 0.022 | 0.016 | 1.338 | 0.016 | 0.012 | 1.376

SUPD: Statin Use in Persons with Diabetes; HRM: High-risk Medications in the Elderly; RASA: Renin-angiotensin System Antagonists; NIDM: Non-insulin Diabetes Medications

Fig. 1

Unadjusted and adjusted RICC ratios by measure.

Reference line at 1 indicates no relative impact. Values greater than 1 indicate greater pharmacist impact; values less than 1 indicate greater primary care provider impact.

RICC: Residual Intraclass Correlation Coefficient; SUPD: Statin Use in Persons with Diabetes; HRM: High-risk Medications in the Elderly; RASA: Renin-angiotensin System Antagonists; NIDM: Non-insulin Diabetes Medications


Discussion

This study used a novel statistical tool, the RICC, to estimate the relative contribution of pharmacies and primary care group practices to care delivered to Medicare enrollees across a range of quality measures. Medication adherence measures had the highest relative contribution by pharmacists to total measure variation, with adjusted RICC ratios ranging from 1.15 to 1.44. While the method applied in this study does not lend itself to causal inference, this finding suggests pharmacists may have a greater opportunity than primary care providers to contribute to performance on quality measures assessing adherence. This is not surprising, as pharmacists have access to dispensing records, the most accurate source of prescription filling data. Additionally, literature shows that pharmacists can integrate various interventions to improve medication adherence rates.11, 12, 13, 14 Given these results, payers may consider contracting directly with pharmacies to improve medication adherence and may see a greater return on investment by engaging pharmacies rather than primary care group practices. Indeed, this opportunity emphasizes the potential impact that innovative pharmacy practice movements, including Flip the Pharmacy and the Community Pharmacy Enhanced Services Network (CPESN), can have on payer-relevant quality measures. These initiatives support practice change and help facilitate contracts between pharmacy networks and payers to reward improvements in quality measures. As of July 2022, CPESN networks were present in 44 states and had secured 33 contracts. These innovative efforts, in addition to traditional medication therapy management opportunities through platforms such as OutcomesMTM and Prescribe Wellness, create established pathways for payer-pharmacy collaboration to support engagement on the quality measures evaluated in this study. 
Of the measures included in the MIPS measure set (diabetic eye exam, adult sinusitis, annual flu vaccine, and HRM), HRM had the greatest RICC ratio at 0.950. This suggests that pharmacists and primary care providers contribute approximately equally to measure variation and that payers could consider incentives to either or both sets of providers to improve the measure. Literature also supports pharmacists' role in deprescribing high-risk medications for elderly patients; as such, it is reasonable that the RICC ratio for this measure would be higher than for the other three MIPS measures. The diabetic eye exam and adult sinusitis measures were included to check the validity of the RICC ratio as an estimate of relative contribution to quality measure performance. If RICC ratio values for these measures had been close to 1, it would have called into question the strength of the findings for the medication adherence measures. However, since the RICC ratio values for these measures are considerably less than 1, this supports the finding that pharmacists have a greater relative contribution to the adherence measures and that these results are not simply an artifact of the methodology or the analytical technique. The low RICC ratios for annual flu vaccine and SUPD are somewhat surprising. These measures were categorized as having a high potential community pharmacist impact based on QMIT-CP categorization, yet their RICC ratios are below 1. For influenza vaccination, providing these services to elderly patients was within the scope of practice for pharmacists in every state at the time these data were collected, and approximately a third of elderly patients received their vaccination at a pharmacy during this study. Therefore, it could be that all pharmacists had approximately the same vaccination rate for elderly patients, reducing practice-to-practice variation and, subsequently, the RICC. 
Alternatively, because the measure-eligible attributed population is limited to those who visited primary care group practices during the influenza season, this population may have been more likely to receive their influenza vaccination from the primary care provider, reducing the ability of a pharmacist to impact their care for this measure. Another explanation may be that, despite the service's broad availability, the pharmacist's ability to influence a patient's decision to receive an influenza vaccination is low, resulting in a smaller RICC value. Nevertheless, the lowest RICC ratio among all measures included in this study was observed for the annual flu vaccine measure. Therefore, payers hoping to maximize influenza vaccination rates among patients aged 65 and older may consider primary care group practice incentives. It is also logical that the SUPD measure would have a lower pharmacist contribution than the other medication-related measures, since it was the only measure that required a new prescription to be written. Although research has shown that a pharmacist can contribute to the SUPD measure through outreach to physicians, the extent to which this service was employed during the study period is unknown. Finally, many measures, such as SUPD and the adherence measures, had low RICC values, suggesting that providers may have little control over variation in these measures. Formal reliability testing is recommended before including measures in performance-based payment models. Including unreliable performance measures in value-based and alternative payment models risks misclassification of performance based on random variation. Therefore, payers and measure developers may consider alternative measures or alternative measure specifications that expand the number of attributed patients per provider or lengthen the period over which performance is measured, improving the reliability of measure scores. 
However, when plans, such as Medicare Part C and managed Medicaid plans, receive bonuses or penalties based on measures chosen by other funders, there is often interest in holding providers accountable for performance regardless of measure reliability. In these instances, plans can consider using RICC ratios to incentivize performance for the providers with the greatest relative control, even if absolute RICC values are small.

Limitations

There are several limitations to this work. First, this is a cross-sectional study, and causality cannot be inferred. Second, although the Medicare 20% sample used in this study captures a large number of Medicare beneficiaries nationwide, it does not allow for calculating RICC ratios for pharmacies or group practices with exceptionally small numbers of attributed patients. Third, while fidelity to measure specifications was the project's goal, adaptations had to be made to endorsed specifications, which may limit generalizability to plans using data provided by vendors such as the National Committee for Quality Assurance and Pharmacy Quality Solutions. Fourth, while these calculations represent the influence of providers on quality measures, they do not necessarily reflect the relative influence on underlying care quality. For example, while the proportion of days covered is commonly used to measure medication adherence, it is a proxy measure that relies on fills rather than assessing the actual rate at which patients take medications as their prescriber intended. Finally, the calculation of the RICC ratio can be sensitive to small differences in RICC values when RICC values are small, which could lead to instability in estimates over time; analysis of RICC ratio stability over time, as well as the development of confidence intervals for RICC ratios using methods such as nonparametric bootstrapping, is outside the scope of this study.
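The nonparametric bootstrap mentioned above could be sketched as a generic percentile bootstrap that puts an interval around any statistic, including a RICC ratio. Everything here is illustrative and not the study's method: in practice, each resample of practices would require refitting both hierarchical models before recomputing the ratio, a step omitted for brevity.

```python
import random

def percentile_bootstrap_ci(data, statistic, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample `data` with replacement, recompute
    `statistic` on each resample, and take empirical quantiles as the CI.

    For a RICC ratio, `data` would be the units of resampling (e.g., practices)
    and `statistic` would refit the models and recompute the ratio.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    estimates = sorted(
        statistic([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[min(n_boot - 1, int(n_boot * (1 - alpha / 2)))]
    return lo, hi
```

As a sanity check, bootstrapping the mean of a simple numeric sample yields an interval bracketing the sample mean.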

Conclusions

Overall, this study found differences in provider-level variation across a set of quality measures, suggesting that while a range of providers may be able to contribute to performance on a given quality measure, some providers have greater contributions than others. Within this sample, pharmacists contributed more to performance on adherence measures, whereas physicians demonstrated greater contributions to non-medication-related measures. If the RICC ratio technique is further validated, these findings may inform payer decisions on which providers to incentivize for which measures.

Declaration of Competing Interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:

References (18 in total)

1.  Effect of vaccination by community pharmacists among adult prescription recipients.

Authors:  J D Grabenstein; H A Guess; A G Hartzema; G G Koch; T R Konrad
Journal:  Med Care       Date:  2001-04       Impact factor: 2.983

2.  Medication Synchronization Programs Improve Adherence To Cardiovascular Medications And Health Care Use.

Authors:  Alexis A Krumme; Robert J Glynn; Sebastian Schneeweiss; Joshua J Gagne; J Samantha Dougherty; Gregory Brill; Niteesh K Choudhry
Journal:  Health Aff (Millwood)       Date:  2018-01       Impact factor: 6.301

3.  The role of retail pharmacies in CVD prevention after the release of the ATP IV guidelines.

Authors:  William H Shrank; Andrew Sussman; Troyen A Brennan
Journal:  Am J Manag Care       Date:  2014-11-01       Impact factor: 2.229

4.  Value-based provider payment: towards a theoretically preferred design.

Authors:  Daniëlle Cattel; Frank Eijkenaar; Frederik T Schut
Journal:  Health Econ Policy Law       Date:  2018-09-27

5.  Impact of a pharmacist in improving quality measures that affect payments to physicians.

Authors:  Jessica Sinclair; Olivia Santoso Bentley; Amina Abubakar; Laura A Rhodes; Macary Weck Marciniak
Journal:  J Am Pharm Assoc (2003)       Date:  2019-06-13

6.  Measuring pharmacy performance in the area of medication adherence: addressing the issue of risk adjustment.

Authors:  Sai Dharmarajan; John P Bentley; Benjamin F Banahan III; Donna S West-Strum
Journal:  J Manag Care Spec Pharm       Date:  2014-10

7.  Pharmacy-based interventions to reduce primary medication nonadherence to cardiovascular medications.

Authors:  Michael A Fischer; Niteesh K Choudhry; Katsiaryna Bykov; Gregory Brill; Gregory Bopp; Aaron M Wurst; William H Shrank
Journal:  Med Care       Date:  2014-12       Impact factor: 2.983

8.  Improving the reliability of physician performance assessment: identifying the "physician effect" on quality and creating composite measures.

Authors:  Sherrie H Kaplan; John L Griffith; Lori L Price; L Gregory Pawlson; Sheldon Greenfield
Journal:  Med Care       Date:  2009-04       Impact factor: 2.983

9.  Impact of Appointment-Based Medication Synchronization on Existing Users of Chronic Medications.

Authors:  David Holdford; Kunal Saxena
Journal:  J Manag Care Spec Pharm       Date:  2015-08

10.  Development and reliability assessment of a tool to assess community pharmacist potential to influence prescriber performance on quality measures.

Authors:  Melissa Nelson; Matthew Pickering; Lee Holland; Benjamin Urick; Patrick Campbell
Journal:  J Am Pharm Assoc (2003)       Date:  2020-08-13
