Bruce E. Landon, A. James O'Malley, Thomas Keegan. Department of Health Care Policy, Harvard Medical School, 180 Longwood Avenue, Boston, MA 02215, USA. landon@hcp.med.harvard.edu
Abstract
BACKGROUND: There is accelerating interest in measuring and reporting the quality of care delivered by health care providers and organizations, but methods for defining the patient panels for which they are held accountable are not well established.

OBJECTIVES: To examine the potential impact of using alternative algorithms to define accountable patient populations for performance assessment.

RESEARCH DESIGN: We used administrative data on Community Health Center (CHC) visits in simulations of performance assessment for breast, cervical, and colorectal cancer screening.

PARTICIPANTS: Fifteen CHC sites in the northeastern US.

MEASURES: We used three different algorithms to define the patient populations eligible for measurement of cancer screening rates and simulated center-level performance rates based on these alternative population definitions.

RESULTS: For breast cancer screening, the percentage of women aged 51-75 eligible for this measure across CHCs under the most stringent algorithm (requiring a visit in the assessment year plus at least one visit in the 2 years prior) ranged from 28% to 60%. Analogous ranges for cervical and colorectal cancer screening were 18-59% and 26-62%, respectively. Simulated performance data from the centers demonstrate that variation in eligible patient populations across health centers could create the appearance of large differences in health center performance, or differences in expected rankings of CHCs, when no such differences exist. For instance, when performance among similar populations was held constant but the proportions of those populations seen at different health centers varied, simulated health center adherence to screening guidelines varied by over 15% even though actual adherence for similar populations did not differ.

CONCLUSIONS: Quality measurement systems, such as those used in pay-for-performance and public reporting programs, must consider the definitions used to identify sample populations and how such populations might differ across providers, clinical practice groups, and provider systems.
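The panel-definition algorithms compared in the abstract can be sketched in code. The stringent rule below (a visit in the assessment year plus at least one visit in the two prior years) follows the abstract's description; the looser alternative and the visit data are illustrative assumptions, not the paper's actual specifications.

```python
from datetime import date

# Hypothetical visit records for illustration: patient id -> visit dates.
visits = {
    "p1": [date(2006, 3, 1), date(2005, 6, 15)],   # assessment year + prior year
    "p2": [date(2006, 7, 4)],                      # assessment year only
    "p3": [date(2004, 11, 20)],                    # no recent visits
}

ASSESSMENT_YEAR = 2006

def visit_years(dates):
    """Set of calendar years in which the patient had a visit."""
    return {d.year for d in dates}

def eligible_stringent(dates):
    """Most stringent algorithm (as described in the abstract):
    a visit in the assessment year AND at least one visit in the 2 prior years."""
    years = visit_years(dates)
    return ASSESSMENT_YEAR in years and bool(
        years & {ASSESSMENT_YEAR - 1, ASSESSMENT_YEAR - 2}
    )

def eligible_loose(dates):
    """Looser illustrative alternative (an assumption, not from the paper):
    any visit in the assessment year qualifies the patient for the panel."""
    return ASSESSMENT_YEAR in visit_years(dates)

panel_stringent = sorted(p for p, d in visits.items() if eligible_stringent(d))
panel_loose = sorted(p for p, d in visits.items() if eligible_loose(d))
```

Because the two rules admit different denominators (here `p2` is counted only under the loose rule), the same underlying screening behavior can yield different measured rates, which is the distortion the simulations quantify.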