Patrice D Tremoulet, Department of Psychology, Rowan University, Glassboro, NJ, USA.
Abstract
Objective: This study investigated how effectively simplified cognitive walkthroughs, performed independently by four nonclinical researchers, can be used to assess the usability of clinical decision support software. It also helped illuminate the types of usability issues in clinical decision support software tools that cognitive walkthroughs can identify.
Method: A human factors professor and three research assistants each conducted an independent cognitive walkthrough of a web-based demonstration version of T3, a physiologic monitoring system featuring a new clinical decision support software tool called MAnagement Application (MAP). They accessed the demo on personal computers in their homes and used it to walk through several pre-specified tasks, answering three standard questions at each step. Then they met to review and prioritize the findings.
Results: Evaluators acknowledged several positive features, including concise, helpful tooltips and an informative column in the patient overview that allows users direct (one-click) access to protocol eligibility and compliance criteria. Recommendations to improve usability include: modify the language to clarify what user actions are possible; visually indicate when eligibility flags are snoozed; and specify which protocol's data is currently being shown.
Conclusion: Independent, simplified cognitive walkthroughs can help ensure that clinical decision support software tools will appropriately support clinicians. Four researchers used this technique to quickly, inexpensively, and effectively assess T3's new MAP tool, which suggests positive actions, such as removing a patient from a ventilator. Results indicate that, while there is room for usability improvements, the MAP tool may help reduce clinicians' cognitive load, facilitating improved care. The study also confirmed that cognitive walkthroughs identify issues that make clinical decision support software hard to learn or remember to use.
The physiologic monitoring systems used in intensive care units aggregate real-time
data from a variety of different sources, including pulse oximeters,
electrocardiography devices, infusion pumps, and ventilators.
Initially, these systems displayed monitored parameters in real time and
issued alerts whenever data values were outside of preset thresholds. However,
modern systems log patient data, enabling them to offer clinical decision support
capabilities that leverage recent advances in data science, predictive analytics,
and clinical informatics.
For example, several patient monitoring systems compute early warning scores,
and notify clinicians when scores suggest that a patient's condition is
worsening.[3-5]
Although physiologic monitoring systems play an essential role in caring for
critically ill patients, they contribute to alarm fatigue, that is, situations where
a large number of audio signals overwhelm or desensitize users.[6,7] Clinical decision support
software (CDSS) that uses early warning scores or risk indexes to prompt health
providers to assess patients and potentially intervene sooner than they otherwise
might, can be beneficial. However, multiple interruptions triggering rapid patient
assessments disrupt workflow, setting providers up for burnout,[8,9] which is associated with poor
patient safety outcomes.
In addition, notifications based on warning scores or risk indexes can
contribute to alert fatigue: situations where an excess of visual warnings, flags,
or pop-up messages overload and/or desensitize health providers.[11-13] Both alarm fatigue and alert
fatigue negatively impact patient safety. Healthcare providers that are overwhelmed
or desensitized may delay their responses to, ignore, or dismiss alarms or alerts
that otherwise would have prompted rapid intervention,[13-15] leaving patients vulnerable
to greater deterioration or more harm than necessary.
On the other hand, the data collected by physiologic monitoring systems may also be
used to detect positive trends suggesting that a patient's health is improving. This
ability can be leveraged to create CDSS tools that provide users with gentle
reminders to consider reducing intensive interventions, rather than obtrusive
alerts. One physiologic monitoring system, called T3, recently adopted this
approach. It features a new component called MAP that identifies patients who may be
good candidates for clinical protocols, defined as specific codes of practice for
applying medical interventions. In most acute care settings, a healthcare team works
together to determine if a patient is a good candidate for a clinical trial or
decides if and when to implement a beneficial protocol such as vasoactive weaning
(VW) or extubation. This means that at least one team member must remember and bring
the possibility of eligibility to the team's attention. CDSS that unobtrusively
indicates that a patient is eligible for a protocol can remove this memory burden
from clinicians, allowing them to focus on other aspects of providing care for their
patients. MAP also tracks and displays compliance data, enabling clinicians to
quickly and easily review physiological data relevant to evaluating the progress of
patients placed on clinical protocols.
CDSS that automatically identifies patients who meet the criteria for enrolling in a
clinical study or starting a new protocol can reduce clinician workload and memory
demands, but only if clinicians are able to quickly and easily access a
comprehensive, easy-to-understand summary of relevant patient data; otherwise, being
presented with flags and reminders could increase their workload and/or reduce their
effectiveness. Similarly, if it is difficult for T3 users to simultaneously access
all the data needed to evaluate how well a patient is tolerating a protocol,
clinicians will need more time than is strictly necessary to assess patients,
reducing efficiency and effectiveness. In short, the usability of T3's new MAP
component will play a significant role in determining whether it will facilitate or
inhibit users from effectively caring for patients monitored by T3.
One relatively quick, inexpensive, and convenient method for assessing usability is
the cognitive walkthrough (CW). This is an analytical technique that entails walking
through the steps needed to perform each of a series of pre-identified tasks and
answering a small set of questions about how easily users will be able to perform
those tasks. While originally developed to assess “walk up and use”
technologies,[16,17] CWs are commonly used to evaluate relatively complex
products,[18-20] since most
users prefer to interact with tools to learn how to perform a task rather than read
a manual or follow directions.
Prior research indicates that this is true of T3 users; in a study that
assessed the efficacy of training, one participant noted that “using T3 is the best
way to learn to use it”.
CWs are particularly useful for highlighting aspects of user interfaces that
are intuitive, thus easy to learn and remember, and for identifying usability issues
that may pose difficulties for new or infrequent users.[22-24]
Many variations of CWs have been used to assess the usability of different types of
user interface designs.[24-27] There is also a wide variance
in terms of the backgrounds and number of evaluators used. It is valuable for even a
single user interface developer or a team of user interface designers to conduct
CWs,[28-32] though some recommend
assembling larger teams that also include project managers and target
users.[31,33-35] In fact, one
of the significant benefits of CW is that it does not require evaluators to be
pre-trained, nor to have the same domain expertise as an application's target
users.[36,37] In addition, this technique is relatively simple to perform;
however, some researchers have noted that it can be difficult for evaluators to take
into account the real context of use
and that it does not provide estimates of frequency or severity of the issues
it uncovers.
Even with those shortcomings, CWs can be extremely helpful, by quickly and
easily identifying usability issues early in development lifecycles, when it is
least expensive to address them.
The study reported here explores the impact of having a small team of human
factors researchers each independently conduct a simplified CW using first-person
questions, and then meet to review results, prioritize issues in terms of the
expected impact of resolving them, and generate recommendations. This work addresses
several research questions:
1. Can CWs be performed by four human factors researchers to determine how easily clinicians will be able to use T3's new MAP capabilities?
2. How effective is it to have evaluators conduct independent CWs rather than a single team-based walkthrough?
3. Is it helpful to employ three straightforward questions, phrased in the first person, rather than the four third-person questions that are typically used, or the two third-person questions developed for streamlined CWs?
4. What usability issues for CDSS are CWs well designed to identify?
Methods
Rowan University IRB determined that this study does not qualify as human subjects
research and therefore was exempt from full review.
Four trained evaluators each independently performed a CW to assess usability of the
Beta version of MAP (short for Management Application). The main steps for
conducting a CW are as follows[17,30,34,38,39]:
1. Create descriptions of the intended users (personas).
2. Decide upon a set of tasks to use to analyze the user interface.
3. Document the correct sequence(s) of steps needed to complete each task.
4. Develop instructions for all participants.
5. For each task, go step by step, asking the same pre-defined set of questions about each step.
6. Aggregate results and develop a report to share findings; ideally, issues found should be prioritized in terms of how much their resolution will improve usability.
Evaluators
Three students serving as research assistants in a Human Factors lab, and the lab
director, a Psychology professor, each independently assessed T3's MAP tool. All
students had prior experience conducting usability assessments and had spent at
least two semesters working in the Human Factors lab. All evaluators were
familiar with T3's user interface. The students had recently participated in a
training activity, which entailed reviewing each screen of a demonstration
version of T3 to determine if any usability best practices, called heuristics,
were violated in each screen. The professor had previously conducted heuristic
evaluations of two earlier versions of T3, which entailed becoming very
comfortable with its user interface.
Setting
Walkthroughs were conducted using evaluators’ own devices in their homes during
March and April 2021. All evaluators accessed a web-based demonstration of T3
version 3.10 with Risk Analytics version 5.3, which displayed de-identified
historical data from actual patients. This demo version of T3, which also
contained a Beta version of the new MAP tool, was only available to hospitals
who were helping test MAP at the time of the evaluation. An earlier version of
T3 was deployed
at other hospitals.
Procedure: CW
All evaluators were instructed to adopt the perspective of a new user attempting
to use the MAP tool to complete a set of tasks. They were directed to record
both all potential usability problems that they discovered and all ideas for
improvement that they conceived while undertaking the tasks.
Preparation: Initially, the Psychology professor and one of the
students reviewed training materials, and then attended a demonstration of the
new MAP tool. Next, the professor developed a list of tasks and descriptions of
the steps required to complete each task. For tasks that could be completed
multiple ways, multiple step sequences were recorded. After verifying the
completion sequences, the student developed instructions for the walkthroughs,
which included the list of tasks and a set of three questions that all
evaluators answered after attempting each task.
Analysis: The students were given copies of all the MAP training
materials along with the CW instructions. Evaluation tasks are listed in the
Appendix and post-task questions are listed in Table 1.
Appendix. Tasks used for MAP cognitive walkthroughs (CWs).
Enroll patient in vasoactive weaning (VW): Indicate to T3 that a patient is starting the VW protocol.
Start screening patient for extubation readiness trial (ERT): Indicate that a patient is potentially a candidate for an ERT so that T3 includes this patient in eligibility scans.
Snooze eligibility flag(s): Temporarily stop displaying a patient's eligibility notification flag (both ERT and VW flags).
View/edit compliance and eligibility criteria: Confirm/adjust inclusion and compliance criteria for a single patient.
View enrolled patient's progress: Check how a patient is doing on an ongoing protocol (both ERT and VW; access the new MAP view).
Review all data from a patient who completed a protocol: Review how a patient did while on a completed protocol (both ERT and VW).
Check trial dates: Find start and end times for a completed patient trial.
View compliance data for a completed patient: Determine which compliance criteria, if any, were not fully met during a completed trial.
View compliance data for an enrolled patient: Determine which, if any, compliance criteria have not been fully met by a patient currently on a protocol.
Find eligible patients: Identify all patients currently eligible to start a protocol.
While attempting to work through each of these tasks, all evaluators were asked
to answer the questions in Table 1 to help identify positive features and potential usability
concerns.
Once all evaluators had completed their analysis, the professor aggregated the
individual findings and took a first pass at grouping related feedback and
similar suggestions for improvement. Then the evaluators met to review the
aggregated results, to ensure that all of the feedback was accurately reflected,
and to assign priorities to the problems. Priorities were based on the
evaluators’ collective judgement of how much each problem impacts usability.
Results
For ease of explanation, results are organized based on five regions in T3's user
interface that are used to display and/or interact with the new MAP capabilities.
For each region, a list of positive features is followed by a table listing
usability issues. The positive features are aspects of the MAP functionality that
should be retained if the recommended modifications are implemented.
Census screen's MAP activity column
Figure 1 shows a
screenshot of a part of T3's census screen.
Figure 1.
Census screen's new MAP activity column, indicating which patients are
currently following protocols, and which ones are eligible. It also
shows the legend for symbols used in this column, which is displayed
when users hover the cursor over the information icon.
Positive features:
- Clean, easy to read.
- Informative tooltips explaining the meanings of icons and what the numbers represent.
- Users can view patient data during the current or most recently completed protocol with a single click.
- Users can pull up and review inclusion criteria and compliance targets for a current/recent protocol with a single click.
Problem descriptions and priorities:
- The column header “MAP activity” seems more like application language than user language. Priority: Medium-High.
- Not clear what will happen if users click on the leftmost icon in a MAP activity bar (no tooltip for checks and flags). Priority: Medium.
Patient view of MAP Activity summary (top center)
Figure 2(a) shows the
top portion of a patient view screen, Figure 2(b) shows an enlarged view of
the MAP activity summary bar on a patient view screen, and Figures 2(c), (d), and (e) show screenshots of pop-up windows
that are displayed after clicking on different regions in the MAP activity
summary bar.
Figure 2a.
Top portion of a patient view screen, showing new “MAP activity summary”
(same information as on census page). The histogram shows one of the
T3's risk indexes, and the graph below it shows heart rate; more
physiological data is shown in graphs that are not included in this
screenshot.
Figure 2b.
Enlarged view of MAP activity summary.
Figure 2c.
Pop-up accessed by clicking first on MAP activity summary bar, then on
the three dots that appear after clicking on it.
Figure 2d.
Extubation readiness trial (ERT) criteria pop-up accessed by clicking on
the text that states “ERT” in a MAP activity summary.
Figure 2e.
Vasoactive weaning (VW) criteria pop-up, accessed by clicking on “VW”
when shown on a MAP activity summary bar (would show where “ERT” is
shown in Figure 2(b) and (c)).
Positive features:
- Informative, useful tooltips.
- Consistent with the display in the census view.
- Allows users to snooze flags or start eligible patients on protocols.
- Pop-ups shown when clicking on the protocol name (VW/extubation readiness trial, ERT) are clear and consistent with one another.
Problem descriptions and priorities:
- No tooltip for the three vertical dots (which only appear after clicking on either the flag or the clock/time section of the bar). Priority: Medium-High.
- The control bar/button is missing a tooltip on the leftmost icon (flag/check). Priority: Low.
- For patients eligible to start a protocol, four clicks are required to start the protocol or snooze the eligibility flag. Priority: Low.
MAP view (MAP tab on individual patient view screen)
Figure 3 shows a
screenshot of a T3 screen with the MAP tab selected.
Figure 3.
Summary of several physiological parameters during the period that the
patient was participating in the extubation readiness trial (ERT).
Positive features:
- Allows users to quickly review the data for the time the patient was on the protocol.
- Green horizontal lines helpfully overlay the start and end of a protocol trial on the large graph windows.
- Shading clearly indicates when parameters are not in compliance with protocol targets; this is consistent with the use of shading in other T3 graphical displays.
- Green horizontal lines inside the time navigation slider showing protocol start/end times relative to the slider are helpful.
Problem descriptions and priorities:
- Not always obvious which protocol trial's data is being displayed; it is possible to be viewing data from a different protocol trial than the one shown in the top center. Priority: High.
- The top center MAP activity section can show only one completed trial, but it is possible to use the time navigation controls to show multiple trials on the display at the same time. Priority: Medium.
- When a patient has completed multiple trials, it can be hard to distinguish between the start time for one trial and the end time for another. Priority: Low.
- For a patient who has been eligible for a long time, it is unclear why a particular interval of time is shown in the graphs. Priority: Low.
Patient view of MAP icon (checkbox on bottom of patient view screen)
Figure 4 shows a
screenshot of the middle bottom portion of a T3 patient view screen, which
contains several icons, including one that can be used to bring up pop-up
windows allowing users to see if the patient is eligible for a specific protocol
and, if so, allowing users to indicate to T3 that they will be starting patients
on a protocol.
Figure 4.
Icon used to access MAP functionality from individual patient view.
Positive features:
- Table content is clear and easy to read.
- The tooltip that connects this icon to the MAP tab and the MAP activity summary bar (top center) is helpful.
Problem descriptions and priorities:
- Users who want to adjust eligibility criteria for a patient must remember that the only way to do this is to first click on this icon; other actions (e.g. start MAP) can be done in multiple ways. Priority: Medium.
- The checkbox icon is not intuitive for something named “MAP”. Priority: Medium-Low.
MAP pop-up menus (accessed via MAP icon on individual patient view
screen)
Figure 5(a) shows a
screenshot of the pop-up window that shows whether or not a patient is eligible
for specific clinical protocols and Figure 5(b) shows how the pop-up
changes if the user clicks on a protocol for which a patient is eligible. Figure 5(c) shows the
pop-up window that is displayed when a user elects to have an eligible patient
start on the VW protocol. Figure 5(d) shows the pop-up that appears if a patient has completed
one or more clinical protocols and the user clicks on the bar labeled “Completed
MAPs.” Figure 5(e)
shows the pop-up window that is displayed when a user elects to have an eligible
patient start the ERT.
Figure 5a.
MAP pop-up accessed by clicking on MAP icon (checkmark) at bottom of
individual patient view screen (see Figure 4). Based on data
captured by the physiologic monitoring system, this patient is eligible
for vasoactive weaning (VW).
Figure 5b.
Change to MAP pop-up shown if user clicks the button showing patient is
eligible to start VW.
Figure 5c.
Pop-up window that appears if user clicks on the button to start VW.
Figure 5d.
Another view of the initial MAP pop-up window (to left) accessed by
clicking MAP icon (checkmark) and list of completed MAPs (right)
accessed by clicking the button that says “completed MAPs” on initial
pop-up. This patient is eligible to start an extubation readiness trial
(ERT).
Figure 5e.
Pop-up menu that appears when user indicates that patient will start
ERT.
Positive features:
- Clicking on the protocol name (inside the initial pop-up, Figure 5(a)) produces a pop-up summarizing inclusion criteria and compliance targets.
- Clicking “Not Eligible” in a protocol's control bar in the initial MAP pop-up takes the user directly to the MAP tab, allowing the user to review data relevant to that protocol.
- The Update button on the Start Extubation Readiness pop-up states that updates require that a patient be eligible or enrolled in a MAP.
- The pop-up table of completed MAPs is clear, and it is easy to select a row to bring up the MAP tab display showing data collected during that completed trial.
Problem descriptions and priorities:
- MAP pop-up: Not intuitive that the user needs to use the “click to start MAP” link to view/adjust parameter targets. Priority: High.
- MAP pop-up: The icon of an x in a circle conveys that something is negative or not allowed, which is inconsistent with “click to start MAP”. Priority: Medium.
Recommendations
Based upon the results of the CWs, the team generated several recommendations, including the following:
1. Ask target users to review the language used throughout MAP, to ensure users clearly understand what actions are available. Specifically, consider renaming “click to start MAP” as “click to review/adjust protocol parameters”.
2. Allow users to easily view all relevant patient data when deciding whether or not to start a protocol, or when evaluating how well a patient is doing/did while on an active/completed protocol.
3. After snoozing a patient eligibility flag, provide an indicator that the flag has been snoozed (e.g. instead of displaying “No Recent MAP activity,” show “Eligibility flag snoozed until HH:MM”); a minimal sketch of such display logic follows this list.
4. When the MAP tab is active, have the MAP activity summary indicate which of the following is depicted in the graph: a completed protocol trial, a currently active protocol, or a recent time interval when eligibility criteria have been met.
5. Add the start date/time to the MAP activity summary bar.
6. Offer snoozing eligibility as an option in the MAP pop-up window (accessed from the checkbox icon).
7. Since clicking the vertical dots in the MAP activity bar only yields two options, consider placing icons for these actions directly on the activity bar.
Discussion and conclusions
Results of independent CWs of T3's new MAP feature suggest that novices may find some
of its features hard to learn. The evaluators noted that some of the language used
in the MAP tool's pop-ups and controls seems to be more application-oriented than
user-oriented. They also suggested that it would be helpful if tooltips and control
labels could help to make it clearer what actions to take when users make decisions
about patient eligibility. This is consistent with previously developed guidance
that CDSS tools use language that is familiar to users.
Moreover, the lack of information in the MAP tab tooltip, together with the fact that the
snooze feature is available only via the three-vertical-dot menu while the ability to
“start MAP” is available through multiple routes, could mean that users
need to work harder and/or take longer than necessary to complete basic tasks or to
understand how well patients are doing/did during protocol trials. These
usability issues may lower overall user satisfaction with the MAP tool, even though
it provides users with relevant and clinically useful information and
capabilities.
On the other hand, despite room for usability improvements, the MAP tool has the
potential to significantly benefit both healthcare providers and their patients.
With just a few clicks, users can pull up a customized display of relevant patient
data that helps clinicians quickly understand a patient's current status and recent
history. In the long term, the MAP tool could contribute to increased efficiency,
effectiveness, and situational awareness among clinicians, particularly if
recommendations for addressing existing usability issues developed in this study are
followed.
The evaluators’ results are consistent with feedback provided by clinical Beta
testers. This indicates that nonclinical researchers who use first-person evaluation
questions while performing a CW can identify issues that impact how easily
clinicians will be able to use CDSS. It also suggests that it is effective to employ
independent walkthroughs followed by a virtual meeting rather than a collaborative
team-based walkthrough.
These findings are important for several reasons. First, although CW is a relatively
simple technique that does not require much training, the target user base for CDSS
tools such as T3—inpatient physicians and nurses—are extremely busy, which makes it
challenging for them to participate in usability evaluations.
Thus, it is significant that undergraduates and a human factors professor
could identify issues that impact CDSS usability for those clinical experts. Several
other researchers have successfully had nonclinical evaluators, most often software
developers or usability experts, use CWs to assess the usability of complex health
technologies.[42-47]
Second, having evaluators independently perform CWs and then meet to review results
and generate recommendations can be more efficient than having a group
collaboratively perform a CW. At the time this study was performed, large
face-to-face meetings were rare due to COVID-19 restrictions. Evaluators in this
study were able to perform independent walkthroughs in their homes, and the review
meeting was conducted via videoconferencing software. Even without social distancing
restrictions, trying to schedule a time for a diverse team to meet to walk through a
user interface can be challenging, so it is notable that independent walkthroughs
can be productive.
Third, instructing evaluators to adopt the perspective of a new user and then
directing them to answer first-person questions based on their experience, rather
than answering questions about what clinical users would likely experience, helps
make CWs more straightforward for novice evaluators. (One criticism of CW is that it
can be difficult for participants to truly represent the perspective of target users.)
This modification was especially advantageous in the context of this study
where students who had not previously participated in a CW were tasked to perform
independent walkthroughs.
In general, CWs help to identify features of interfaces that influence how easily
users will be able to learn, and remember, how to use applications.[18,24,28] Hence, this
technique is particularly helpful in identifying those features in CDSS user
interfaces. In addition, CWs are well suited to assess the comprehensibility and
utility of contextual information that is intended to support users’ decision
making. In fact, several of the results of this study, including both positive and
negative findings, could be generalized into guidelines for producing useful, usable
CDSS tools.
For example, the evaluators indicated that one-click access to relevant physiological
data about a patient who has been identified as a candidate for a clinical protocol,
and displaying eligibility criteria for a protocol via a mouseover, are both
positive features of MAP. These results suggest guidance that CDSS tools enable the
users to quickly and easily access relevant contextual information that helps
explain why a particular action is (or is not) suggested or why a particular
alert/alarm has been fired. This aligns with prior research suggesting that CDSS
recommendations be accompanied by simple explanations of why they are
recommended,[48,49] and that CDSS should be a “clinical partner”.
In addition, the results of this study suggest that it is beneficial both for
busy clinicians to be able to defer or “snooze” notifications that patient health is
improving, so that intervention reductions can be considered later, and for there to
be a visual indicator that a notification has been deferred. This is consistent with
previously developed guidance that CDSS should fit into users’ existing
workflows,[48,51,52] and that it should be “a team player”.
Meanwhile, evaluators’ judgement that MAP requires users to perform too many clicks
to indicate that a patient will be starting on a protocol can be generalized as
“make it as easy as possible to implement recommendations provided by CDSS tools”,
which is consistent with other researchers’ guidance to minimize numbers of
clicks/screens[48,53] and to make it easy to follow recommended actions.[48,51,52] Other results
can be generalized as “consistently allow users multiple ways to perform the same
action”. This is aligned with general guidance to aim for consistency in any user interface.
In summary, the candidates for general guidelines for creating useful, usable
CDSS tools suggested by this study complement and extend existing literature that
contains guidelines for creating successful CDSS tools.
While this study effectively identified several positive features of the new MAP tool
and produced recommendations for changes that could improve its usability, the study
has some limitations. Rather than a diverse team that includes intended users,
the evaluators were all researchers affiliated with the same human factors lab, and most
were undergraduate students. Having at least one clinical expert would have
strengthened the study—though domain expertise is not required for CWs.
Moreover, the students had different educational backgrounds: one was a
psychology major heading to medical school, one was an engineering major headed to a
clinical psychology graduate program, and the third one was a computer science
major, who had over 2 years of experience working in healthcare as an x-ray
technician. In addition, the effort was led by a human factors expert with over two
decades of experience and all students had prior experience evaluating usability,
including participating in a heuristic evaluation of an earlier version of T3.
That said, students working in a human factors lab are adept at adopting the
perspective of target users since understanding users’ needs is central to their
research. As a result, these students may have an easier time putting themselves
into the shoes of clinical experts for the purpose of a usability assessment than
other potential CW participants. This suggests that other CW participants might
still have had difficulty taking the perspective of users even when given
first-person questions. Despite these limitations, the positive features, issues,
and recommendations generated in this study can be applied to improve the usability
of T3's MAP functionality, which in turn can result in improved care for patients in
hospitals that use T3. In particular, T3's MAP tool can benefit patients whose
health is improving by gently prompting clinicians to consider reducing intensive
interventions and making it easy for those clinicians to access relevant data.
Moreover, several of the results of this study suggest possible guidelines for
developing useful, easy-to-use CDSS tools, although additional research is needed to
determine how broadly applicable these potential guidelines are.
Table 1. Questions used for cognitive walkthroughs (CWs).
Completion: Were you able to complete the task?
Controls: Were the controls clearly visible?
Feedback: Was there feedback to indicate that you completed (or did not complete) the task?