
Visualizing Central Line-Associated Blood Stream Infection (CLABSI) Outcome Data for Decision Making by Health Care Consumers and Practitioners - An Evaluation Study.

Yair G Rajwan, Pamela W Barclay, Theressa Lee, I-Fong Sun, Catherine Passaretti, Harold Lehmann.

Abstract

The purpose of this study was to evaluate information visualization of publicly-reported central line-associated blood stream infection (CLABSI) outcome data for decision making by diverse target audiences - health care consumers and practitioners. We describe the challenges in publicly reporting of healthcare-associated infections (HAIs) data and the interpretation of an evaluation metric. Several options for visualization of CLABSI data were designed and evaluated employing exploratory working group, two confirmatory focus groups' observations, and experts' committee validation of the final designs. Survey-data collection and evaluation criteria results, collected from the two focus groups, are presented and are used to develop the final recommendations for how to visualize publicly-reported CLABSI data from Maryland acute care hospitals. Both health care consumer and practitioner's perspectives are highlighted and categorized based on the visualizations' dimensions framework. Finally, a recommended format for visualizing CLABSI outcome data based on the evaluation study is summarized.


Keywords:  Blood-Borne Pathogens; Catheter-Related Infections; Cross Infection; Infection Control; Infectious Disease Transmission, Professional-to-Patient; Information Graphics; Information Visualization; Public Health Informatics; Public Health Practice; Sense Making; Usability of Health Information; Visual Communication

Year:  2013        PMID: 23923102      PMCID: PMC3733762          DOI: 10.5210/ojphi.v5i2.4364

Source DB:  PubMed          Journal:  Online J Public Health Inform        ISSN: 1947-2579


Introduction

In 2002, the Centers for Disease Control and Prevention (CDC) released a public report estimating that 1.7 million healthcare-associated infections (HAIs) result in 99,000 deaths annually within hospitals across the United States. Over the past decade, a number of states, including Maryland, have enacted legislation requiring hospitals to publicly report HAIs in order to drive improvement and increase transparency. Central Line-Associated Blood Stream Infections (CLABSIs) are among the more common HAIs and result in substantial morbidity and mortality as well as increased medical costs. As such, the HAI Advisory Committee of the Maryland Health Care Commission (MHCC), an “independent regulatory agency whose mission is to increase accountability and promote informed decision-making,” chose CLABSIs in Intensive Care Units (ICUs) as the first HAI outcome measure to be reported in Maryland [1]. In determining how best to publicly report CLABSI outcome data, MHCC considered the goals and challenges of public reporting. The goals of public reporting are to inform the public about hospital performance, to increase transparency and trust between hospitals and consumers, and to drive best practices and improvement toward eliminating healthcare-associated infections [2,3]. The purpose of this study was to help MHCC choose the best way of communicating these data so as to address the needs and concerns of consumers, hospitals, and the State.

Challenges in Public Reporting of HAI Data

There are a number of challenges in deciding how to publicly report HAI or CLABSI data. First, one must consider the audience viewing the data: each individual may have different objectives and goals, ranging from a patient trying to choose a hospital for a procedure to hospital administrators using the data for performance improvement. Guidance in the medical and public health literature on public reporting of healthcare-associated infection data is currently limited, and no articles directly address how to effectively present HAI data to consumers. Extrapolating from the literature on public reporting of other measures, however, several major principles are apparent. To make informed choices and navigate the health care system, consumers need easily available, accurate, understandable, and timely information. Consumers likely represent a range of perspectives because they have varying levels of education, different backgrounds, and different needs with regard to the data presented. For example, only about 50% of Americans have the minimal mathematical skills necessary to understand numbers presented in printed materials [4]. The primary challenge in designing a system for public reporting of health quality data is that quality measures are often difficult to understand or are not meaningful to consumers. Cognitive interviews of health care consumers have revealed that consumers prefer information that can be reviewed quickly and that is clear at first review; participants in these interviews frequently felt inundated by the amount of information listed [5,6]. The data must be framed clearly for a broad audience, providing neither too much nor too little information [7,8].
The Agency for Healthcare Research and Quality (AHRQ) recommends that information be made relevant to what consumers care about, that metrics be consistent, and that data on sponsors and methods be included to help legitimize the data for consumers [9,10]. More importantly, to ensure that the broadest possible audience uses and understands the publicly reported data, the information presented should be summarized and interpreted for consumers to the greatest extent possible. Simple language should be used, and guidance on how to read graphs and understand measures should be provided. Familiarity with health vocabulary is an important factor in consumer understanding of health-related reporting [11]. Employing consumers’ own vocabulary can narrow the gap between the language used by health care professionals and consumers’ understanding [12]. Visual communication through information visualization can improve learning and communication [13]. Strategies that narrow options and highlight differences are the most useful to consumers [14,15]. Display strategies that appear particularly effective include rank-ordering providers by performance, labeling performance (e.g., excellent, fair, poor, or above average, average, below average), and using symbols (e.g., stars or symbols that incorporate the interpretive label as part of the symbol) [16,17]. AHRQ recommends against presenting confidence intervals (CIs) with comparative performance data, given that consumers often do not understand statistics and that research has shown consumers tend to discount information when a report suggests uncertainty about the data [18]. The primary objective of our study was to determine the most effective way to publicly report hospital CLABSI data to both consumers and professionals, based on current standards of data presentation.

Methods

The study followed a governance framework with three phases: exploratory, confirmatory, and validation [19].

Exploratory Phase

The purpose of this phase was to perform four activities: 1) a thorough literature review, 2) a panel study, 3) experts’ evaluation, and 4) iterative information visualization design. The exploratory working group that participated in the panel study included five (5) health care professionals: 1) the director of the Center for Hospital Services at the Maryland Health Care Commission, 2) the Chief of Hospital Quality Initiatives at the Maryland Health Care Commission, 3) an assistant professor and hospital epidemiologist at the Johns Hopkins Bayview Hospital, 4) a program manager at the Center for Innovation in Quality Patient Care of Johns Hopkins University, and 5) a postdoctoral research fellow of the National Library of Medicine (NLM) in the Division of Health Sciences Informatics at the Johns Hopkins University School of Medicine. Three (3) experts solicited for their expertise and evaluation included: 1) a hospital epidemiologist and professor of epidemiology, medicine, and pathology, 2) an anesthesiologist and critical-care specialist, and 3) a professor of pediatrics, health policy and management, and health sciences informatics. Thirteen (13) members of the MHCC HAI Advisory Committee were solicited for their expertise prior to the health care consumer and practitioner focus groups. The expert-opinion solicitation employed recurring brainstorming and interview sessions, and the HAI Advisory Committee solicitation employed two panel discussions and the collection of written responses that were analyzed to identify themes.

Confirmatory Phase

To garner perceptions on the proposed display formats from both intended audience groups during the confirmatory phase, a structured interview tool was developed. The purpose was to capture participants’ opinions of and attitudes toward the various information visualization alternatives. The structured interview was administered to the consumer focus group and to the health care professional focus group, whose members had expertise in hospital epidemiology and infection control, to test the usability and comprehensibility of the alternative display presentations. The focus-group-based study included structured elicitation of responses to custom-built alternative display formats. Volunteer participants in the focus groups, listed in Table 1, were identified and recruited to provide feedback on the alternative displays.

Table 1: Focus Groups Composition

Thirteen (13) study subjects were included in the health care consumer focus group based on chain-referral sampling. The chain-referral sampling was initiated from the exploratory working group members with the aim of recruiting study subjects with personal HAI experience, study subjects with no personal HAI experience, and representatives from the community and other health care domains. Health care practitioners were excluded from participation in the health care consumer focus group. Seven (7) study subjects were included in the health care professional focus group, including one (1) hospital Chief Executive Officer (CEO), based on chain-referral sampling. This sampling was initiated from members of the exploratory working group and the Maryland HAI Advisory Committee with the aim of recruiting study subjects certified as health care practitioners. Members of the exploratory working group and the Maryland HAI Advisory Committee were excluded from participation in either group.

Validation Phase

Summaries of the focus groups’ comments and the evaluation survey results regarding the alternative formats were compiled and submitted to the HAI Advisory Committee during the validation phase. Following review by Commission staff, the alternative displays for reporting CLABSI data to consumer and professional audiences were presented to the HAI Advisory Committee, Maryland's panel of hospital epidemiology and infection control subject matter experts, who selected the final format. Subsequently, a webinar on the format of public reporting of central line-associated blood stream infection (CLABSI) data in ICUs was held for Maryland hospital Infection Preventionists and performance-improvement, quality-measure, and public-relations staff. Recruiting a diverse group of participants was a critical aspect of capturing both consumer and practitioner perspectives.

Data Collection

We captured participants’ comments on the various information visualization alternatives during all phases. To provide the focus groups with simulated data formats, we presented mocked-up representations of how the data would be visualized. We used the custom-built structured interview tool covering four dimensions (see Figure 1). At the end of each focus-group discussion, the consumer and professional audiences were given a paper-based custom survey that asked them to rate each display format on four criteria, depicted in Table 2, to consider when visualizing complex medical information [20].
Figure 1: Evaluation of Reporting Dimensions

Figure 2: Visualization of Standardized Infection Ratio (SIR)

Table 2: Evaluation Criteria and Scale

Clarity – Is the information provided in a clear and understandable format?
Functionality – Does this visualization provide the information and data elements you are looking for?
Usefulness – Is this visualization useful? (i.e., does this visualization help you make a decision?)
Effectiveness – To what extent does the visualization portray the intended information? (i.e., are you able to tell easily which hospitals perform better or worse with this visualization?)

Participants used a Likert scale of 1 (very poor) to 5 (very good) for each criterion and ranked their top three visualization options according to their overall preference. The survey also asked for an overall ranking of the visualization symbols used for quality interpretations of the Standardized Infection Ratio (SIR), such as stars (full, half, empty), colors, and shapes.

Consistent Metric

The Standardized Infection Ratio (SIR) is a summary measure used to compare the infection rate of one group of patients to that of a standard population [21]. It is the observed number of infections divided by the predicted number of infections. The predicted number is the number of infections we would expect if the hospital had the same infection rate as a comparison group, in this case the national average [22]. A SIR of 1 means the hospital infection rate and that of the comparison group are the same. A SIR > 1 means the hospital has a higher rate (i.e., more infections) than the comparison group. A SIR < 1 means the hospital has a lower rate (i.e., fewer infections) than the comparison group. Figure 2 illustrates an example of estimated SIRs for three hospitals. For example, if a hospital’s medical intensive care unit (MICU) has five (5) bloodstream infections and, based on the national average for that type of ICU, one would expect only four (4) infections, the SIR would equal 5/4 = 1.25 (e.g., Hospital C).
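The SIR arithmetic above can be sketched in a few lines of Python. This is a minimal illustration; the function name is ours, not from the paper:

```python
def standardized_infection_ratio(observed, predicted):
    """SIR = observed infections / predicted infections.

    `predicted` is the number of infections expected if the unit had the
    same infection rate as the national comparison group.
    """
    if predicted <= 0:
        raise ValueError("predicted number of infections must be positive")
    return observed / predicted

# Hospital C from the example above: 5 observed vs. 4 predicted infections
sir = standardized_infection_ratio(5, 4)
print(sir)  # 1.25 -> more infections than the national experience
```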

Information Visualization Mockups and Options Presented to Focus Groups

During the exploration phase, six (6) distinct visualization mockups were developed, illustrated in Figures 3a to 3f. These six (6) mockups were based on the following information visualization techniques:

Option 1 (Comparative Table)
Option 2 (Analysis Table)
Option 3 (Box Plot)
Option 4 (Heat Map)
Option 5 (Quality Graph)
Option 6 (Tree Map)

Figure 3a: Option 1 (Comparative Table)

Figure 3f: Option 6 (Tree Map)

After an initial evaluation of the six (6) mockups with the Commission, based on their clarity, functionality, usefulness, and effectiveness, the Commission requested that only four (4) options be presented to the focus groups for discussion, as depicted in Figures 3a to 3d.

Figure 3d: Option 4 (Heat Map)

During the focus group study, the four (4) selected visualizations were labeled as options 1 to 4 without specifying the visualization technique. The intention was to reduce selection bias, which might have been influenced by a preconception of a visualization category. The information visualization layouts combined standard information graphics used in public health with public-oriented visualizations. Differences in preferences across groups were analyzed by Kruskal-Wallis non-parametric tests, appropriate for the ordinal data we collected.
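As a sketch of the Kruskal-Wallis rank test applied to ordinal ratings like these, the following self-contained Python computes the tie-corrected H statistic; the Likert ratings shown are hypothetical, not the study's data, and a p-value would be obtained from a chi-squared distribution with k-1 degrees of freedom:

```python
def kruskal_wallis(*groups):
    """Kruskal-Wallis H statistic with tie correction.

    Pools all observations, assigns midranks to ties (Likert data is
    heavily tied), and computes H from the per-group rank sums.
    """
    pooled = sorted((x, g) for g, grp in enumerate(groups) for x in grp)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    tie_sizes = []
    i = 0
    while i < n_total:
        j = i
        while j < n_total and pooled[j][0] == pooled[i][0]:
            j += 1                      # j - i tied observations
        avg_rank = (i + 1 + j) / 2      # average of ranks i+1 .. j
        tie_sizes.append(j - i)
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    h = 12 / (n_total * (n_total + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)
    # Divide by the tie-correction factor.
    correction = 1 - sum(t ** 3 - t for t in tie_sizes) / (n_total ** 3 - n_total)
    return h / correction

# Hypothetical 1-5 Likert ratings of one display option for two groups:
consumers = [5, 4, 5, 4, 3, 5, 4]
practitioners = [2, 3, 2, 3, 4, 2, 3]
print(round(kruskal_wallis(consumers, practitioners), 2))
```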

Results

The focus group evaluation results, summarized in Table 3, were collected from the Maryland Health Care Commission Public Reporting of Maryland HAI Outcome Data Consumer and Health Care Professional Focus Groups, conducted on August 10-11, 2010.

Table 3: Survey Statistics Results Summary

Table 4: Kruskal-Wallis equality-of-populations rank test (with ties) within a group

Within each group, the evaluation ratings indicate varied preferences across visualizations and criteria, as depicted in Figure 4. The Kruskal-Wallis equality-of-populations rank test [23] (with ties), which indicates the degree of dispersion (spread) in the data within a group, shows statistically significant differences within the groups (Table 4). The consumers’ group had statistically significant differences in ratings across visualization methods. The practitioners’ group had statistically significant differences for most ratings across visualization methods, except between “Analysis Table” and “Comparative Table.” Between Consumers and Practitioners there were differences in the favorites’ ranking, with no consensus on most of the visualization rankings, as depicted in Figure 5. Overall, Consumers preferred the “Heat Map” and Practitioners preferred the “Box Plot,” as indicated in Figure 6.
Figure 4: Visualizations’ Evaluation Rating

Figure 5: Overall Opinion of Visualization Options

Figure 6: Weighted Opinion of Visualization Options

Although Consumers preferred the “Heat Map” and Practitioners preferred the “Box Plot,” there was also a statistically significant difference in preferences between the two groups for the “Analysis Table,” as indicated in Table 5. In contrast, there was no difference in preferences between the two groups for the “Comparative Table.”

Table 5: Kruskal-Wallis equality-of-populations rank test between groups

In evaluating the overall opinion of Consumers and Practitioners on the use of visualization symbols (stars, colors, or shapes), the two groups ranked them in the same order, as depicted in Figure 7, and selected colors as the overall preferred symbols, as depicted in Figure 8, indicating a consensus among the groups.
Figure 7: Overall Opinion of Visualization Symbols

Figure 8: Weighted Opinion of Visualization Symbols

Consumer and Health Care Professional Survey Sentiments

Consumers indicated a preference for one overall aggregated CLABSI measure, supplemented with a symbol for quality interpretation, to support decision making and drive improvement, and then for an overall hospital CLABSI rate broken down by specific units. The consumers group discussed the meaning of the terms “Expected” and “Significantly.” Consumers preferred to view a hospital’s overall quality first and then drill down to the details. Practitioners indicated a preference to see overall CLABSI performance, the number of infections, the number of central-line days, and the SIR with a 95 percent confidence interval, without any additional interpretation symbols.

Conclusions

From this rigorous elicitation of data-visualization preferences for a key State and national measure, our data suggest that Consumers prefer the “Heat Map,” that they desire a meaningful level of aggregation beneath the overall total, and that they favor colors for quality interpretation and grouping. Our data also suggest that Practitioners prefer the “Box Plot” augmented with numerical data; one interpretation of these preferences is that Practitioners desire to focus on details and relative comparisons. One method of addressing both groups’ preferences for aggregation and interpretation is to construct three ordinal categories of performance, combining symbols and colors: “Better than National Experience” as a green circle, “No Different than National Experience” as a yellow triangle, and “Worse than National Experience” as a red diamond. The visual display of quantitative information clarifies data [24] for consumers and practitioners making decisions. The objective of visual design, organizing the data to communicate a message effectively, can be accomplished by prioritizing, grouping, and sequencing the data correctly [25]. The long-term challenge in evaluating visual communication, however, is demonstrating adaptation and utility [26]. Hence, a robust triangulation of mixed study methods, as in our study, is necessary because it applies theoretical and applied constructs of usability studies and controlled experiments across all phases: exploratory, confirmatory, and validation. The deployment and use of our final formats on the MHCC website [27,28] demonstrate their adaptation and utility.
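The three-category scheme can be sketched as a simple mapping. The decision rule shown here, comparing the SIR 95% confidence interval to 1, is our assumption for illustration; the study specifies only the category labels and symbols:

```python
# Assumed decision rule: a hospital differs from the national experience
# only when its entire SIR confidence interval lies on one side of 1.
def interpret_sir(ci_low, ci_high):
    """Map a hospital's SIR 95% CI to an ordinal performance category."""
    if ci_high < 1:
        return ("Better than National Experience", "green circle")
    if ci_low > 1:
        return ("Worse than National Experience", "red diamond")
    return ("No Different than National Experience", "yellow triangle")

print(interpret_sir(0.4, 0.9))   # entire CI below 1 -> better
print(interpret_sir(0.8, 1.3))   # CI spans 1 -> no different
print(interpret_sir(1.1, 2.0))   # entire CI above 1 -> worse
```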

Discussion

Based on the comments and data analysis of the focus groups, two formats were selected for presentation on the MHCC website. These displays are depicted in Figure 9 and Figure 10.
Figure 9: Post Focus Group – Consumers’ Visualization Recommendation

Figure 10: Post Focus Group – Health Care Practitioners’ Visualization Recommendation

These displays were designed based on the Consumers’ and health care Practitioners’ perspectives and the focus group analysis results. Additionally, they encompassed standard information visualization techniques employed in multi-dimensional case studies [29]. Subsequently, they were validated by subject matter experts. Improvement in quality and safety performance over time is important to consider (i.e., what is the current performance and what is the goal in three years). As our results for the visualization evaluation rating of the “Analysis Table” demonstrated, SIR and CI were difficult concepts to explain to Consumers, who were also more interested in seeing the absolute number of infections. It was challenging to explain to Consumers the meaning of large confidence intervals, which arise from small numbers of cases. Moreover, combining data was difficult and could lead to wrong conclusions, which is ultimately unfair to patients. Operationally, hospitals focus on zero harm to patients (i.e., no infections). The goal is to create meaningful data aggregation (e.g., overall adult, overall pediatric, and specialized units). Avoiding priority ranking is important to prevent the unintended consequence of hospitals avoiding high-risk patients. Thus, we chose to group hospitals alphabetically within the broad categories and to note that, within the broad categories, all hospitals have approximately equivalent performance.

Limitations

Compared with population surveys, our sample size, based on the number of focus group participants, was small. However, these numbers are in line with what is required for assessing user-interface preferences [30]. Furthermore, we used multiple formative methods with multiple groups to confirm the preferences we elicited from participants, and we drew on a broad cross section of the targeted user populations.

Future Directions

Visual communication can provide effective evidence-based information to consumers for decision making and to practitioners for improving patient safety outcomes and processes. Additional HAI outcomes can be presented and evaluated for Surgical Site Infections (SSIs), nosocomial transmission of multi-drug-resistant organisms (e.g., MRSA and VRE), Ventilator-Associated Pneumonia (VAP), and nosocomial Respiratory Syncytial Virus (RSV). Process-oriented measurements can also be presented for surgical antimicrobial prophylaxis, hand hygiene compliance, health care worker influenza vaccination, and compliance with active surveillance testing for MRSA in ICUs. Moreover, public reporting should focus on: 1) the overall picture, 2) where individual hospitals are, 3) where hospitals should be, and 4) the direction of change toward a target improvement goal. To address these temporal and multivariate dimensions in pre- and post-intervention evaluation, outcomes can be displayed in a run chart, a trend graph, or a statistical process control diagram. Visualization capabilities can be employed to understand intervention efficacy, provide insight into improvement trends, and act as a public social influencer. Providing such tools for comparing and monitoring performance should influence consumers’ decisions, assist practitioners in improving patient safety, and inform policy makers. As a result of our study, the validated visualizations were approved and publicly deployed for consumers [27] and practitioners [28] in Maryland.
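As one sketch of the statistical process control display suggested here, the following Python computes 3-sigma u-chart limits for infections per 1,000 central-line days; the monthly counts and line-days are hypothetical, for illustration only:

```python
import math

# Hypothetical monthly CLABSI counts and central-line days for one unit.
infections = [4, 2, 5, 3, 1, 4, 2, 3]
line_days  = [900, 850, 1000, 950, 800, 900, 870, 940]

# Center line: overall rate per line-day across all months.
u_bar = sum(infections) / sum(line_days)

for inf, days in zip(infections, line_days):
    rate = 1000 * inf / days                 # observed rate per 1,000 days
    sigma = math.sqrt(u_bar / days)          # u-chart standard error
    ucl = 1000 * (u_bar + 3 * sigma)         # upper control limit
    lcl = 1000 * max(u_bar - 3 * sigma, 0)   # lower limit floored at zero
    flag = "signal" if rate > ucl or rate < lcl else "in control"
    print(f"{rate:6.2f} per 1,000 line-days  [LCL {lcl:.2f}, UCL {ucl:.2f}] {flag}")
```

Points falling outside the limits would flag months whose infection rate is unlikely to reflect ordinary variation, which supports the pre-/post-intervention monitoring described above.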

References

1.  Making health care quality reports easier to use.
Authors:  J H Hibbard; E Peters; P Slovic; M L Finucane; M Tusler
Journal:  Jt Comm J Qual Improv       Date:  2001-11

2.  Less is more in presenting quality information to consumers.
Authors:  Ellen Peters; Nathan Dieckmann; Anna Dixon; Judith H Hibbard; C K Mertz
Journal:  Med Care Res Rev       Date:  2007-04

3.  National Healthcare Safety Network (NHSN) report: data summary for 2006 through 2008, issued December 2009.
Authors:  Jonathan R Edwards; Kelly D Peterson; Yi Mu; Shailendra Banerjee; Katherine Allen-Bridson; Gloria Morrell; Margaret A Dudeck; Daniel A Pollock; Teresa C Horan
Journal:  Am J Infect Control       Date:  2009-12

4.  Strategies for reporting health plan performance information to consumers: evidence from controlled studies.
Authors:  Judith H Hibbard; Paul Slovic; Ellen Peters; Melissa L Finucane
Journal:  Health Serv Res       Date:  2002-04

5.  Supporting informed consumer health care decisions: data presentation approaches that facilitate the use of information in choice.
Authors:  Judith H Hibbard; Ellen Peters
Journal:  Annu Rev Public Health       Date:  2001-11-06

6.  How will we know patients are safer? An organization-wide approach to measuring and improving safety.
Authors:  Peter Pronovost; Christine G Holzmueller; Dale M Needham; J Bryan Sexton; Marlene Miller; Sean Berenholtz; Albert W Wu; Trish M Perl; Richard Davis; David Baker; Laura Winner; Laura Morlock
Journal:  Crit Care Med       Date:  2006-07

7.  Frequency or probability? A qualitative study of risk communication formats used in health care.
Authors:  M M Schapira; A B Nattinger; C A McHorney
Journal:  Med Decis Making       Date:  2001 Nov-Dec

8.  Eliminating catheter-related bloodstream infections in the intensive care unit.
Authors:  Sean M Berenholtz; Peter J Pronovost; Pamela A Lipsett; Deborah Hobson; Karen Earsing; Jason E Farley; Shelley Milanovich; Elizabeth Garrett-Mayer; Bradford D Winters; Haya R Rubin; Todd Dorman; Trish M Perl
Journal:  Crit Care Med       Date:  2004-10

9.  How do healthcare consumers process and evaluate comparative healthcare information? A qualitative study using cognitive interviews.
Authors:  Olga C Damman; Michelle Hendriks; Jany Rademakers; Diana M J Delnoij; Peter P Groenewegen
Journal:  BMC Public Health       Date:  2009-11-20

10.  Assessing consumer health vocabulary familiarity: an exploratory study.
Authors:  Alla Keselman; Tony Tse; Jon Crowell; Allen Browne; Long Ngo; Qing Zeng
Journal:  J Med Internet Res       Date:  2007-03-14
