CONTEXT: Performance measures are increasingly widely used in health care and have an important role in quality improvement. However, field studies of what organizations actually do when they collect and report performance measures are rare. An opportunity for such a study was presented by a patient safety program requiring intensive care units (ICUs) in England to submit monthly data on central venous catheter bloodstream infections (CVC-BSIs).

METHODS: We conducted an ethnographic study involving ∼855 hours of observational fieldwork and 93 interviews in 17 ICUs, plus 29 telephone interviews.

FINDINGS: Variability was evident within and between ICUs in how they applied the program's inclusion and exclusion criteria, the data collection systems they established, their practices in sending blood samples for analysis, their microbiological support and laboratory techniques, and their procedures for collecting and compiling data on possible infections. Those deciding what to report were not making decisions about the same things, nor were they making decisions in the same way. Rather than providing objective and clear criteria, the definitions used for classifying infections were seen as subjective, messy, and admitting the possibility of unfairness. Reported infection rates reflected localized interpretations rather than a standardized dataset across all ICUs. Variability arose not because wily workers deliberately concealed, obscured, or deceived but because counting was as much a social practice as a technical one.

CONCLUSIONS: Rather than being objective measures of incidence, differences in reported infection rates may reflect, at least to some extent, underlying social practices in data collection and reporting and variations in clinical practice. The variability we identified was largely artless rather than artful: currently dominant assumptions about gaming as a response to performance measures do not properly account for how categories and classifications operate in the pragmatic conduct of health care. These findings have important implications for assumptions about what infection reduction and quality improvement strategies can achieve.
Authors: Mary Dixon-Woods; Charles L Bosk; Emma Louise Aveling; Christine A Goeschel; Peter J Pronovost. Journal: Milbank Q. Date: 2011-06. Impact factor: 4.911.