Bernard Bucalon, Tim Shaw, Kerri Brown, Judy Kay.
Abstract
BACKGROUND: There is increasing interest in using routinely collected eHealth data to support reflective practice and long-term professional learning. Studies have evaluated the impact of dashboards on clinician decision-making, task completion time, user satisfaction, and adherence to clinical guidelines.
Keywords: data visualization; mobile phone; practice analytics dashboards; professional learning; reflective practice
Year: 2022 PMID: 35156928 PMCID: PMC8887640 DOI: 10.2196/32695
Source DB: PubMed Journal: JMIR Med Inform
Research questions (RQs) and data planned to be extracted from included studies.
| RQ | Research question | Data extracted |
| RQ1 | What was the purpose of the performance feedback interfaces? | Stated purpose and aims |
| RQ2 | What clinical indicators were used and how were they visualized? | Clinical indicators; visualization elements; frequency of intended use; individual or team use; static or interactive features; data source; technology |
| RQ3 | How were the interfaces designed? | Design process |
| RQ4 | What methods were used to evaluate the interfaces? | Evaluation methods; laboratory vs in-the-wild settings |
| RQ5 | How successful have the interfaces been? | Reported results and outcomes; strengths and limitations |
| RQ6 | What are the key design considerations for developing future interfaces? | Practice points; recommendations |
Figure 1. Flow diagram of study search and selection.
Country of origin of included studies (N=18).
| Country of origin | Count, n (%) | References |
| United States | 10 (56) | [ |
| Australia | 2 (11) | [ |
| Other (Canada, France, the Netherlands, Oman, Sweden, and United Kingdom) | 6 (33) | [ |
Purpose of included dashboard studies grouped by category (N=18).a
| Purpose | Count, n (%) | References |
| Performance improvement | 9 (50) | [ |
| Quality and safety | 6 (33) | [ |
| Management and operations | 4 (22) | [ |
aIncluded studies may be in more than 1 category.
Clinical indicators by type from included studies (N=18).a
| Clinical indicators | Count, n (%) | References |
| Structural | 1 (6) | [ |
| Process | 17 (94) | [ |
| Outcome | 5 (28) | [ |
| Generic | 15 (83) | [ |
| Disease-specific | 3 (17) | [ |
aIncluded studies may have more than 1 type of clinical indicator.
Dashboard visualization elements used in included studies (N=18).a
| Visualization elements | Count, n (%) | References |
| Bar chart including histogram | 10 (56) | [ |
| Table | 9 (50) | [ |
| Line chart | 9 (50) | [ |
| Scatter plot | 1 (6) | [ |
| Meter | 1 (6) | [ |
| Radar including radial or spider-web | 1 (6) | [ |
| Pie chart including donuts or rings | 1 (6) | [ |
aIncluded studies may have more than 1 visualization element.
Dashboard designed for team or individual use (N=18).
| Use | Count, n (%) | References |
| Team | 12 (67) | [ |
| Individual | 4 (22) | [ |
| Both | 2 (11) | [ |
Dashboard studies designed for fast or slow use (N=18).
| Use | Count, n (%) | References |
| Daily | 8 (44) | [ |
| Rapid or at-a-glance | 1 (6) | [ |
| Weekly | 1 (6) | [ |
| Monthly | 3 (17) | [ |
| Quarterly | 1 (6) | [ |
| No details | 4 (22) | [ |
Dashboard studies designed to be interactive or static (N=18).
| Interface design | Count, n (%) | References |
| Interactive | 15 (83) | [ |
| Static | 2 (11) | [ |
| No details | 1 (6) | [ |
Evaluation methods used by included studies (N=18).a
| Method | Count, n (%) | References |
| Questionnaires or surveys | 9 (50) | [ |
| Interviews | 2 (11) | [ |
| eHealth data analysis | 10 (56) | [ |
| System usage log analysis | 4 (22) | [ |
| Expert method | 1 (6) | [ |
| Usability user study | 3 (17) | [ |
aIncluded studies may have more than 1 evaluation method.
Reported results from standardized questionnaires, surveys, and interviews (N=18).
| Evaluation method | Reported outcomes | References |
| Standardized questionnaire | Mean SUSa score of at least 73.0 across 5 studies (range 73.0-87.5). PSSUQb score of 1.7 (SD 0.5). All tasks rated a median SEQc score of 1 (very easy) or 2 (easy). | [ |
| Survey | Respondents had favorable responses to the dashboards (range 72-79). Respondents stated the data were actionable (range 48-69). Respondents felt the data improved their practice (range 64-98). | [ |
| Interview | Interviewees were interested in and enthusiastic about the individual patient dashboard. Interviewees were generally excited to have the opportunity to see the cohort dashboard but commented on its complexity. Interviewees were generally positive about the clinical performance summary, patient lists, suggested actions, and detailed patient-level information views. Interviewees identified improvements to the clinical performance summary view (eg, the inclusion of CIs with differing guidance was confusing). | [ |
aSUS: system usability scale.
bPSSUQ: Post-Study System Usability Questionnaire.
cSEQ: single ease question.
Reported outcomes from data analysis of eHealth data and system use logs.
| Evaluation method | Reported outcomes | References |
| eHealth data analysis | 2 out of 9 studies evaluating eHealth data reported positive changes to CI data; 2 out of 9 studies reported no change to CI data. | [ |
| System use log data analysis | >50% of participants viewed the dashboard in 2 studies (range 28-50). A median of 55 views from 30 users was observed in 1 study. | [ |