Literature DB >> 28040686

Graphics help patients distinguish between urgent and non-urgent deviations in laboratory test results.

Brian J Zikmund-Fisher, Aaron M Scherer, Holly O Witteman, Jacob B Solomon, Nicole L Exe, Beth A Tarini, Angela Fagerlin.

Abstract

OBJECTIVE: Most electronic health record systems provide laboratory test results to patients in table format. We tested whether presenting such results in visual displays (number lines) could improve understanding.
MATERIALS AND METHODS: We presented 1620 adults recruited from a demographically diverse Internet panel with hypothetical results from several common laboratory tests, first showing near-normal results and then more extreme values. Participants viewed results in either table format (with a "standard range" provided) or one of 3 number line formats: a simple 2-color format, a format with diagnostic categories such as "borderline high" indicated by colored blocks, and a gradient format that used color gradients to smoothly represent increasing risk as values deviated from standard ranges. We measured respondents' subjective sense of urgency about each test result, their behavioral intentions, and their perceptions of the display format.
RESULTS: Visual displays reduced respondents' perceived urgency and desire to contact health care providers immediately for near-normal test results compared to tables but did not affect their perceptions of extreme values. In regression analyses controlling for respondent health literacy, numeracy, and graphical literacy, gradient line displays resulted in the greatest sensitivity to changes in test results.
DISCUSSION: Unlike tables, which only tell patients whether test results are normal or not, visual displays can increase the meaningfulness of test results by clearly defining possible values and leveraging color cues and evaluative labels.
CONCLUSION: Patient-facing displays of laboratory test results should use visual displays rather than tables to increase people's sensitivity to variations in their results.
© The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association.


Keywords:  clinical laboratory information systems; computer graphics; decision making; education of patients; electronic health record


Year:  2017        PMID: 28040686      PMCID: PMC5565988          DOI: 10.1093/jamia/ocw169

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


BACKGROUND AND SIGNIFICANCE

Patient access to electronic health record (EHR) systems has increased dramatically in recent years, and most of these systems allow patients to view their laboratory test results outside of clinical consultations. As a result, increasing numbers of patients are now viewing medical test result data that they either have never seen before or have only seen in the context of a clinical visit with a health care provider who could explain and interpret it. Patients value this access to information, which is one component of a larger trend to encourage and facilitate greater patient involvement in medical decision making and health self-management.

Yet simply having access to test results is insufficient to ensure that patients can use that information to improve their health or their care. Currently, most EHR patient portals present test results in tabular formats that are similar to those used to report results to clinicians. These tables are difficult to interpret, and patients (especially those with lower numeracy or literacy skills) face significant barriers when trying to identify whether test results are outside of the standard reference range. Furthermore, the clinical importance of having results outside of the standard range is generally undefined in tabular formats.

While patients do not have primary responsibility for identifying and acting upon laboratory findings (clinicians are generally expected to review and act upon urgent results), they will of course attempt to interpret and draw conclusions from whatever information is made available to them. The more ambiguous test result displays are in terms of clinical meaning, the more patients will wonder how bad their results are, which could lead them to draw erroneous inferences. Test values represent patients' short-term risk (eg, of bleeding if platelet counts are low) and long-term risk (eg, of diabetes-related complications if hemoglobin A1c remains high).
If patients view every out-of-range test value similarly, they are likely to overreact to slightly elevated or reduced values that are not urgent (especially if the ordering clinician does not provide an interpretive note) and fail to recognize results that are urgent.

However, the fact that patients currently have difficulty interpreting test results does not imply that they could not do so if the data were presented in more effective formats. Visual displays often help people understand data such as risk statistics, and visual displays that show health care providers multiple test results appear to speed their interpretation, although different displays appear to work better for different use cases. When only one result is being communicated at a time, number line graph displays are used in many applications to enable users to see where a single value falls within the range of possible values (eg, the page position within the total pages of an e-book). Thus, presenting laboratory test results in number line graphs rather than tables may increase the usability of these data. Such designs need to be studied rigorously to evaluate their effects on patient comprehension, risk perception, and activation.

To our knowledge, no experimental research has tested whether presenting test results in visual number line graph formats instead of tables alters how people interpret test values. We conducted an online survey in which respondents imagined receiving multiple laboratory test results through an EHR patient portal. Our primary research question was: To what extent do different visual displays help people discriminate between test results that do or do not require urgent action? By varying whether participants viewed their test results in either number line graph or tabular formats as well as the test values themselves, we provide empirical evidence of how display format influences people's understanding of what their laboratory test results mean.

MATERIALS AND METHODS

Participants

We recruited a stratified random sample of US adults from a panel of Internet users administered by Survey Sampling International (SSI), which recruits panel members through various opt-in methods. SSI uses a probability-weighted random process to identify which panel members should receive which surveys based on sample requirements. To ensure demographic diversity and offset variations in response rates, we established quotas based on respondent age and race (thereby approximating the distributions of these characteristics in the US population). The sampling algorithm continued to route SSI participants to the survey until all quotas were achieved. We recruited over a one-week period in May 2016. Upon completion, participants were entered into instant-win contests and regular drawings administered by SSI for modest prizes.

Design and procedure

Participants were asked to imagine that they had recently visited their doctor’s office to discuss medications and had undergone a set of blood tests shortly thereafter. They were then asked to imagine that they were viewing the results of these tests on an online EHR portal. Our experimental design focused on participants’ reactions to 3 specific test results (platelet count, alanine aminotransferase [ALT], and serum creatinine) selected because they varied in the size of established standard ranges (relative to possible test values) and in the level of patient risk associated with varying degrees of deviation from standard ranges. Each test was reported initially as slightly outside the standard range and then as a more extreme result (ie, further from the standard range). Specifically, all participants first viewed a platelet count of 135 × 109/L and then 25 × 109/L, an ALT value of 80 U/L and then 360 U/L, and a creatinine value of 2.2 mg/dL and then 3.4 mg/dL. Our primary between-subject experimental manipulation was to vary which of 4 formats (Figure 1) was used to display the test results. One group (Table condition) viewed results presented in a table format that included the test result value and standard range in number form, as well as the appropriate units. This is similar to what is typically presented in EHRs such as Epic. All remaining participants viewed visual displays (described in detail below) that placed the test value on a horizontal number line graph, with the numerical values corresponding to the ends of the number line and the ends of the standard range clearly marked and labeled, as shown in Figure 2. These ranges were chosen in consultation with clinicians to represent a set of values that would include all but the most unusual outlier values.
Figure 1.

Visual display formats used in this study

Figure 2.

Visual displays of the 3 near-normal test values on Gradient Lines

We tested 3 different designs in order to explore to what degree any differences in patient reactions were attributable to the use of number line visual displays in general vs the specific design features (eg, colors or labels) of the displays (Figure 1). Participants in the Simple Line condition viewed displays that were colored solid gray except for a green range labeled "standard range." Participants in the Block Line condition viewed number lines that had areas outside the standard range divided into solid-color blocks (using a stoplight red-yellow-green color palette) that indicated diagnostic categories such as "very low" (red) and "borderline high" (yellow). This design provided additional reference points to increase the evaluability of (likely unfamiliar) results and supported increased gist (vs verbatim) processing to facilitate meaning derivation. The cutoffs for these categories were determined in consultation with clinician team members as representing plausible descriptions of the level of risk associated with different degrees of deviation from the standard range. Lastly, participants in the Gradient Line condition viewed number lines similar to those in the Block Line condition, except that the transitions between categories were graphically smoothed by using color gradients to more accurately represent the continuous nature of the underlying value-to-risk relationship. In addition, category labels were replaced by 2 small arrows labeled "low" and "high." This design therefore represents a compromise between the simplicity of the Simple Line design and the volume of information provided in the Block Line design.
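The Block Line logic described above amounts to two small mappings: a test value to a position along the drawn axis, and a test value to a diagnostic category. The sketch below is hypothetical; the axis range and cutoffs are invented stand-ins (the study set its actual cutoffs in consultation with clinicians), chosen only to illustrate the mechanics.

```python
# Hypothetical sketch of the Block Line mapping: place a test value at a
# 0-1 position along the drawn axis and assign it a diagnostic category.
# The axis range and cutoffs below are invented for illustration only.

def line_position(value, axis_min, axis_max):
    """Fraction along the number line at which the value marker is drawn."""
    clamped = max(axis_min, min(axis_max, value))
    return (clamped - axis_min) / (axis_max - axis_min)

def block_category(value, cutpoints):
    """cutpoints: ordered (upper_bound, label) pairs; last bound is +inf."""
    for upper, label in cutpoints:
        if value <= upper:
            return label

# Illustrative ALT display: axis 0-400 U/L, standard range ending at 56 U/L.
ALT_CUTS = [(6, "very low"), (56, "standard range"),
            (120, "borderline high"), (240, "high"),
            (float("inf"), "very high")]

pos = line_position(80, 0, 400)      # marker sits 20% of the way along
cat = block_category(80, ALT_CUTS)   # "borderline high"
```

A gradient line display would keep the same anchor points but replace the discrete category lookup with continuous color interpolation between them.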

Outcome measures

Perceived test urgency

Because our primary research question focused on increasing patients' ability to discriminate between urgent and nonurgent test results, our primary outcome measure was the difference between participants' subjective sense of urgency when given test results that were slightly vs extremely outside the standard range. For each of the 3 focal tests, we asked 2 questions: "How alarming does this result feel to you?" and "How urgent of an issue is this result?" Both questions used a 6-point Likert-type response scale, with 1 = not at all and 6 = very.

Behavioral intentions

We also included a question about participants’ specific behavioral intentions in response to the test. Specifically, we asked, “Which of the following best describes what you would do in response to your [test name] test result?” Response options were “Nothing,” “Talk to your doctor about this test result at your next regular appointment,” “Ask to see your doctor at the first available appointment,” “Go to a hospital or your doctor’s office tomorrow,” “Go to a hospital as soon as you can get free later today,” and “Go to a hospital immediately.” The test result display remained visible during all questioning so that all questions measured data interpretation, not recall.

Display format preferences

At the end of the survey, we asked 4 questions to measure user perceptions of the data formats (table or line graph). Participants rated “In your opinion, how well did these images describe the test results?” on a 5-point scale from “not at all well” to “extremely well”; “How helpful were these images in helping you to understand the test results?” on a 5-point scale from “not at all helpful” to “extremely helpful”; “If you were receiving laboratory test results in real life, would you like to see the test results presented using this type of image?” on a 5-point scale from “definitely no” to “definitely yes”; and “How much would you trust what these images are telling you about your health?” on a 5-point scale from “do not trust at all” to “trust completely.”

Individual difference measures

In addition to completing standard demographic questions, participants also completed 4 individual difference measures that we hypothesized might affect their ability to interpret test result tables or graphs. First, because ample evidence exists that even highly educated adults can have poor numeracy skills, all study participants completed the Subjective Numeracy Scale, which measures perceived quantitative ability and preference for receiving information in numerical form (range: 1–6; higher values indicate greater subjective numeracy) and has previously been shown to correlate with the ability to recall and comprehend textual and graphical risk communications. Second, participants completed Chew's screening question for health literacy ("How confident are you filling out medical forms by yourself?," where 1 = not at all confident and 5 = extremely confident), which has been validated and shown to be highly correlated with the Rapid Estimate of Adult Literacy in Medicine and the Short Test of Functional Health Literacy in Adults. Third, participants also completed 6 questions (numbers 5–9, 11, and 13) from Galesic and Garcia-Retamero's graphical literacy scale. We used the total number of correct answers (0–6) as an abbreviated measure of graphical literacy (the scale was shortened due to its significant time demand on respondents). Fourth, respondents answered "How familiar are you with medical test results like the ones discussed in this survey?" on a scale from 1 = not at all familiar to 5 = extremely familiar.

Data management

All data were collected anonymously using the Qualtrics® online survey platform. Participants were identified and prevented from taking the survey multiple times via unique identification numbers provided by SSI within the redirected URL. The design, sampling process, data management procedures, and outcome measures received exempt status approval from the University of Michigan Health Sciences and Behavioral Sciences Institutional Review Board.

Statistical analyses

Perceptions of alarm and urgency for each test were highly correlated (r = 0.83–0.90). As a result, we created an aggregate scale of perceived urgency using the average of both questions for each test (Cronbach's α = 0.91–0.95). To quantify sensitivity to the difference between near-normal values that do not require immediate medical attention and extreme values that do, we then created an urgency difference score, equal to perceived urgency (extreme value) minus perceived urgency (near-normal value), as the primary outcome variable of our analysis. In addition, because the categories included in the behavioral intentions variable are not necessarily mutually exclusive (eg, one might describe waiting until a future appointment as doing "nothing"), for analysis purposes we divided that variable into 2 groups corresponding to willingness to wait (first 2 categories) and taking some form of immediate action (last 4 categories). We also combined the 4 questions assessing user perceptions of the display formats into a single, highly reliable measure of display format preferences (Cronbach's α = 0.88).

We report descriptive statistics for the urgency difference score, willingness to wait, and user preferences across the 4 display formats. We also report the results of linear regression analyses predicting the urgency difference score from display format (entered as a categorical predictor) and from respondent numeracy, health literacy, graphical literacy, and familiarity with medical tests (entered as continuous variables). All analyses were performed using Stata 14, and all tests of significance were 2-sided and used α = 0.05.
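The derived variables reduce to a few lines of arithmetic. The following is a minimal sketch (not the authors' analysis code), with invented example ratings:

```python
# Sketch of the derived outcome variables described above; the example
# ratings are invented for illustration.

def urgency_score(alarm, urgent):
    """Average of the 'how alarming' and 'how urgent' ratings (each 1-6)."""
    return (alarm + urgent) / 2

def urgency_difference(near_normal, extreme):
    """Extreme-value urgency minus near-normal urgency; positive scores
    mean the respondent rated the extreme result as more urgent."""
    return urgency_score(*extreme) - urgency_score(*near_normal)

def willing_to_wait(option_index):
    """Options 1-2 (do nothing / wait for next appointment) count as
    willingness to wait; options 3-6 as some form of immediate action."""
    return option_index <= 2

diff = urgency_difference(near_normal=(4, 4), extreme=(6, 5))  # 1.5
```

A respondent who gives identical ratings for the near-normal and extreme results scores 0, which is exactly the modal table-condition pattern reported in the Results.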

RESULTS

Sample characteristics

Out of 1822 people who began the survey, 1621 completed it (an 89% completion rate). In addition, 1 response was dropped due to a reported age <18 years old. Table 1 reports sample demographic characteristics and the distribution of scores for the health literacy, numeracy, and graphical literacy measures among the remaining 1620 respondents.
Table 1.

Sample demographics (n = 1620)

Characteristic | Category | Frequency (%) | Mean (SD)
Age | | | 48.9 (15.7)
Gender | Male | 768 (47.5) |
 | Female | 846 (52.3) |
 | Transgender | 4 (0.3) |
Ethnicity | Hispanic (any race) | 205 (12.7) |
Race^a | White | 1252 (77.4) |
 | African-American | 211 (13.0) |
 | All other | 91 (7.0) |
Education | <High school | 30 (1.9) |
 | High school only | 260 (16.1) |
 | Some college/trade | 547 (33.8) |
 | Bachelor's degree | 502 (31.0) |
 | Master's/doctorate | 279 (17.2) |
Health Literacy | 1 | 29 (1.8) | 4.24 (0.96)
 | 2 | 68 (4.2) |
 | 3 | 213 (13.2) |
 | 4 | 482 (30.0) |
 | 5 | 817 (50.8) |
Subjective Numeracy Scale | 1.00–1.99 | 26 (1.6) | 4.47 (0.97)
 | 2.00–2.99 | 104 (6.4) |
 | 3.00–3.99 | 288 (17.8) |
 | 4.00–4.99 | 622 (38.5) |
 | 5.00–5.99 | 528 (32.7) |
 | 6.00 | 47 (2.9) |
Graphical Literacy Scale | 0 | 91 (5.7) | 3.76 (1.83)
 | 1 | 137 (8.5) |
 | 2 | 212 (13.2) |
 | 3 | 226 (14.0) |
 | 4 | 249 (15.5) |
 | 5 | 354 (22.0) |
 | 6 | 342 (21.2) |
Familiarity with Medical Tests | 1 | 167 (10.4) | 3.31 (1.18)
 | 2 | 197 (12.3) |
 | 3 | 469 (29.2) |
 | 4 | 519 (32.3) |
 | 5 | 254 (15.8) |

Note: Reports results only for those respondents who completed each question or measure.

aRespondents could mark more than one race.


Perceived test urgency

As shown in Table 2, perceived urgency for the extreme values was universally high, regardless of which display format was shown. However, perceptions of the near-normal values varied substantially across formats, especially for the ALT and serum creatinine tests. The pattern is consistent: participants who saw their near-normal values in a table display rated those results as most urgent, while those who saw a gradient line display perceived the results as least urgent.
Table 2.

Means (standard deviations) of urgency score and urgency difference score ratings, by display format

Test / Display Format | Near-Normal Test Value | Extreme Test Value | Within-Subject Difference Score | Difference Equals 0 (%)
Platelet Count
 Table | 3.95 (1.32) | 5.20 (1.22) | 1.24 (1.44) | 26.5
 Simple Line | 3.72 (1.38) | 5.26 (1.17) | 1.54 (1.41) | 17.5
 Block Line | 3.94 (1.18) | 5.20 (1.09) | 1.26 (1.29) | 19.0
 Gradient Line | 3.73 (1.39) | 5.30 (1.05) | 1.57 (1.62) | 15.8
ALT
 Table | 4.90 (1.22) | 5.26 (1.19) | 0.37 (0.91) | 56.3
 Simple Line | 4.00 (1.36) | 5.44 (1.06) | 1.44 (1.57) | 21.3
 Block Line | 3.97 (1.13) | 5.35 (1.02) | 1.38 (1.34) | 20.2
 Gradient Line | 3.56 (1.30) | 5.39 (1.03) | 1.83 (1.59) | 14.8
Serum Creatinine
 Table | 4.36 (1.27) | 4.74 (1.30) | 0.39 (1.03) | 43.7
 Simple Line | 4.09 (1.22) | 4.81 (1.08) | 0.72 (1.12) | 27.7
 Block Line | 3.99 (1.11) | 4.58 (1.06) | 0.59 (0.94) | 28.7
 Gradient Line | 3.91 (1.20) | 4.74 (1.08) | 0.84 (1.19) | 24.0

Note: Urgency scores are the average value of 2 questions asked on a 1–6 scale, where greater numbers represent higher perceived urgency of the test result. Urgency difference score = urgency score (extreme value) – urgency score (near-normal value) for each individual study participant.

The within-subject urgency difference score is similarly smallest for the table conditions and largest for the gradient line conditions. Full distributions (violin plots) of this variable for each of the 3 tests are presented in Figure 3. The notable finding is that the modal urgency difference score for table displays is 0 for the displays of ALT and serum creatinine results. As shown in Table 2, over 56% of people who saw tables gave precisely the same ratings of urgency when ALT = 80 U/L vs ALT = 360 U/L, and 44% gave identical responses when serum creatinine = 2.2 mg/dL vs = 3.4 mg/dL. By contrast, when those same test results were presented in any of the line graph formats, most participants had positive urgency difference scores, suggesting a more nuanced understanding of the result and its meaning for their health.
Figure 3.

Violin plots of urgency difference scores, by display format

To assess the significance of these patterns, we conducted linear regression analyses of the urgency difference scores, controlling for individual differences in health literacy, subjective numeracy (Cronbach's α = 0.84), and graphical literacy (mean correct answers = 3.76 out of 6). Table 3 shows that both the gradient line and simple line displays resulted in significantly greater sensitivity to the change in test result for all 3 laboratory tests. The effect of display format was notably larger for the ALT test than for the other 2 tests. In addition, the coefficients for the Gradient Line condition were significantly larger than those for the Block Line condition for all 3 tests (all P ≤ .002) and than the coefficient for the Simple Line condition for the ALT test (F [1,1595] = 23.23, P < .001).
Table 3.

Linear regression results showing predictors of urgency difference score

Predictor | Platelets Coef. | P-value | ALT Coef. | P-value | Creatinine Coef. | P-value
Display Format
 Table (reference) | | | | | |
 Simple Line | 0.20 | .03 | 1.01 | <.001 | 0.29 | <.001
 Block Line | −0.00 | .99 | 1.02 | <.001 | 0.18 | .02
 Gradient Line | 0.28 | .003 | 1.44 | <.001 | 0.43 | <.001
Graphical Literacy (0–6) | 0.29 | <.001 | 0.21 | <.001 | 0.10 | <.001
Health Literacy (1–5) | 0.23 | <.001 | 0.21 | <.001 | 0.13 | <.001
Subjective Numeracy (1–6) | −0.10 | .01 | −0.07 | .06 | 0.01 | .76
Familiarity with Medical Tests | 0.00 | .94 | −0.03 | .30 | −0.02 | .39
Constant | −0.34 | .08 | −0.84 | <.001 | −0.47 | .002

Notes: Urgency difference score = urgency score (extreme value) – urgency score (near-normal value). Higher scores represent greater graphical literacy, health literacy, subjective numeracy, and familiarity with medical tests.

When comparing the effects of the individual difference measures using a forward selection process, increased graphical literacy had the strongest relationship with degree of sensitivity, adding substantially to model fit when compared to a model with only display format (platelets ΔR2 = 0.146, P < .001; ALT ΔR2 = 0.080, P < .001; serum creatinine ΔR2 = 0.040, P < .001). It was also the strongest predictor in the final models shown in Table 3. Higher scores on the health literacy screening question also significantly predicted higher urgency difference scores (ΔR2 vs model with display format and graphical literacy only: platelets ΔR2 = 0.016, P < .001; ALT ΔR2 = 0.013, P < .001; serum creatinine ΔR2 = 0.012, P < .001). Adding subjective numeracy to graphical literacy and health literacy had little effect (ΔR2 < 0.004 in all regressions), and familiarity with medical tests was not correlated with urgency difference scores.

Behavioral intentions

When respondents were presented with the near-normal platelet test result of 135 × 109/L, there were no significant differences across display formats in their stated willingness to wait (either by doing nothing or waiting until the next scheduled appointment). However, participants who viewed their near-normal ALT and serum creatinine test results in table format were significantly less willing to wait than participants who received the same results in any of the number line displays: ALT (table = 26.9% vs simple line = 44.2%, block line = 42.0%, gradient line = 51.8%; χ2 (3) = 55.25, P < .001); serum creatinine (table = 35.5% vs simple line = 43.3%, block line = 46.5%, gradient line = 47.7%; χ2 (3) = 15.20, P = .002).

Display format preferences

Mean preference ratings for tables (M = 3.51, SD = 1.09) were somewhat lower than those for the number line displays (simple line: M = 3.62, SD = 0.91; block line: M = 3.76, SD = 0.92; gradient line: M = 3.68, SD = 0.91). The overall pattern of variation was significant by one-way analysis of variance (F [3, 1614] = 5.02, P = .002), but pairwise comparisons identified only the table vs block line comparison as individually significant (P < .001).

DISCUSSION

Interpreting laboratory test results is difficult for most people. Furthermore, our results show that when such results are presented in table formats, many people perceive no difference between values that represent minor, nonurgent deviations and those that are more clinically concerning. But our results also demonstrate that using even the simplest number line graphics instead of tables to visually represent test results can decrease perceptions of urgency about values near the standard range and therefore increase sensitivity to variations in test values.

The most powerful visual cue that our number line displays provide patients is the concrete placement of test results in a visual space of possible values. Patients thus literally see whether their values are high, low, or in the middle, and they construct cognitive and emotional responses to the test results accordingly. Hence, visual formats are powerful tools that can be designed to shape patient perceptions in ways that facilitate appropriate response, either action (in the case of clinically urgent values) or inaction (in the case of values that are appropriately handled through existing interactions or processes).

Differences in perception of test urgency and behavioral motivation were particularly apparent for the ALT and serum creatinine tests, for which values can deviate substantially from the standard range (to varying degrees) before the patient faces immediate medical consequences or risks. Thus, these are the situations in which patients viewing test results without clinician guidance may be most likely to become unnecessarily concerned about near-normal values. While the specificity of the observed effects implies that the choice of display format may be less critical for some types of laboratory tests (ie, those for which deviations quickly become medically concerning) than for others, visual displays were preferred over tables and were at least as effective as table formats for all tests.
Given that a simple substitution of well-designed visual display formats appears to reduce such problems at minimal cost, we believe our results provide justification for their adoption. Several of our designs focused on increasing the evaluability of test result data for patients by providing meaning-rich cues and reference standards to anchor the "how does this compare to X" comparative process that people naturally use to derive meaning from data. In 2 of the formats we tested in this study, the displays included additional contextual cues in the form of either a color gradient and clearly marked "low" and "high" regions (the Gradient Line condition) or distinct colored blocks labeled with specific terms such as "very low" and "borderline high" (the Block Line condition). Such evaluative labels have previously been shown to facilitate interpretation of unfamiliar health data. However, we found consistently lower perceived urgency about near-normal values (and hence greater overall sensitivity) with the Gradient Line design, which had fewer distinct evaluative categories, than with the Block Line design. Further research is needed to determine the optimal tradeoff between providing additional contextual information and the need for simplicity (since visual complexity inhibits understanding and use of data) in patient-facing displays.

However, perhaps the most important, yet difficult, challenge in implementing visual displays of laboratory test results will be determining the range of values to be shown. For example, should a visual display of ALT include values from 0 to 400 U/L, a wider range (eg, 0–800 U/L), or a narrower range (eg, 0–100 U/L)? Use of a narrower range would make small changes of 10 or fewer units visually larger, implying that such changes are more medically significant.
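The arithmetic behind this scaling point is straightforward: the visual distance a fixed change occupies is inversely proportional to the axis range. A minimal sketch, using the hypothetical ALT ranges from the text:

```python
# The fraction of the number line covered by a fixed change in value shrinks
# as the axis range widens. Axis ranges are the hypothetical ALT examples
# from the text (0-100, 0-400, 0-800 U/L).

def visual_span(change, axis_min, axis_max):
    """Fraction of the number line covered by a given change in value."""
    return change / (axis_max - axis_min)

for hi in (100, 400, 800):
    print(f"0-{hi} U/L axis: a 10 U/L change spans {visual_span(10, 0, hi):.1%}")
```

The same 10 U/L change covers 4 times more of the display on a 0–100 axis than on a 0–400 axis, which is why range choice alone can make a small fluctuation look medically meaningful.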
Using a narrower range might be particularly valuable in situations where a patient's attention to (seemingly) small variations in test result values is important for self-management. Yet narrowing the range of values shown increases the possibility of outlier results that fall off the end of the display. Conversely, extending the range to minimize outliers compresses the vast majority of test results into a very small portion of the display. Dynamic scaling of displays is not a good solution either, as there is clear evidence that people fail to adjust appropriately for scale changes. The most likely solution will be to select default display ranges that include most, but not all, test values but that nonetheless make clinically significant shifts in test values visually salient.

The generalizability of these findings is primarily limited by our use of a hypothetical scenario. Participants did not receive actual test results relevant to true personal medical conditions. This meant that they lacked the personal relevance that such test results have for patients actually attempting to manage chronic diseases or track illnesses. Nor did we measure respondents' level of familiarity with the specific tests presented. Nonetheless, our randomized experimental design allowed us to carefully disentangle the effects of test type, test result level, and display format on people's reactions in a way that would be impossible with patients of varying backgrounds and diagnoses receiving independently varying test results. Furthermore, there are many scenarios in which patients with no prior knowledge of a given laboratory test might view laboratory results in a patient portal. We believe that these are the scenarios that could most easily lead to unnecessarily alarmed patients urgently contacting their health care professionals, and thus we designed the study to identify ways this negative outcome could be avoided.
Governments and hospital systems have recently invested enormous resources to support the adoption of EHR systems. According to a 2015 survey of hospitals, 92% offered patients the ability to view medical records, compared to 43% of hospitals in 2013. These initiatives are increasing patients' ability to directly access and view laboratory test results and have been justified by the expectation that patients will be able to translate access to test results into better disease self-management and better preparation for clinic visits. Yet all these potential benefits depend on patients being able to interpret the results they are given. They depend on patients' gist interpretations being appropriately sensitive to variations in what the test data show. The present practice of providing numerical test result values in EHRs in simple table formats is patently insufficient to enable many patients to achieve meaningful use. Systematic, user-centered design that draws upon existing knowledge of the psychology of information evaluability is needed before our investment in patient access to test results yields the health outcome returns of which it is potentially capable.
