Devin M Mann, Sara Kuppin Chokshi, Andre Kushniruk.
Abstract
BACKGROUND: Technology is increasingly embedded into the full spectrum of health care. This movement has benefited from the application of software development practices such as usability testing and agile development processes. These practices are frequently applied in both commercial or operational and academic settings. However, the relative importance placed on rapid iteration, validity, reproducibility, generalizability, and efficiency differs between the 2 settings and the needs and objectives of academic versus pragmatic usability evaluations.
Keywords: medical informatics; software design; user-computer interface
Year: 2018 PMID: 30487119 PMCID: PMC6291682 DOI: 10.2196/10721
Source DB: PubMed Journal: JMIR Hum Factors ISSN: 2292-9495
Comparison of features of academic versus pragmatic usability testing.
| Feature | Academic usability | Pragmatic usability |
| Objectives | Production of evidence regarding adaptation and development of tool types (eg, clinical decision support) and workflows for academic publication and dissemination. Priority: rigor and reproducibility | Rapid iterative design and testing cycles to provide user feedback to product owners and developers. Priority: speed and cost-effectiveness |
| Methodological approach | Direct observation, think-aloud, near-live, and live testing | Direct observation, think-aloud, near-live, and live testing using low-cost approaches |
| Setting | Variable (laboratory to live clinical setting). Priority: high-fidelity, representative testing environment and tasks | Variable (laboratory to live clinical setting). Priority: convenience over fidelity |
| Number of participants | 10-15 participants (representative of end users) per user group for usability testing (potentially more if conducting statistical analyses). Priority: representativeness of users | <10 participants (typically minimum=4). Priority: convenience and managing time constraints |
| Data capture | Note taking, audio recordings, video recordings, and screen capture; data captured and transcribed for detailed analyses | Observational note taking, notes on debriefing interviews, and real-time analysis of user-screen interaction |
| Termination criteria | Termination with data saturation for current iteration | Termination based on consensus, cost, and time constraints |
| Data analysis | Detailed qualitative analyses (including interrater reliability) of data captured: usability testing transcripts, screen captures, etc. Quantitative analyses (eg, error rates, System Usability Scale scores [see the scoring note after this table], measures of clicking, eye tracking, etc) | Concise, structured summaries of findings based on notes from usability sessions and debriefings, together with anecdotal and stakeholder feedback |
| Output | Detailed data tables and results reporting | Simple summary or table of problems and solutions |
| Dissemination | Publication of findings in peer-reviewed journals. Priority: generalizability of results and scientific value | Final summary report presented to developers and management. Priority: local (vs wider) distribution of findings for use to improve a specific system or interface |
| Time frame | Varies from weeks to months | Feedback delivered immediately or within days of testing |
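For reference, the System Usability Scale (SUS) cited in the data analysis row is conventionally scored with Brooke's standard rule; the formula below is general background, not a result reported in this record. With s_i denoting the 1-5 response to item i, odd-numbered (positively worded) items contribute s_i - 1, even-numbered (negatively worded) items contribute 5 - s_i, and the total is scaled to a 0-100 score:

$$\mathrm{SUS} = 2.5\left[\sum_{i\in\{1,3,5,7,9\}}(s_i - 1) + \sum_{i\in\{2,4,6,8,10\}}(5 - s_i)\right]$$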
Case study comparison of usability evaluation features.
| Feature and usability type | Case study 1 (Integrated Clinical Prediction Rule 2) | Case study 2 (Avoiding Diabetes Thru Action Plan Targeting) |
| Objectives: Academic | To generate evidence on the optimal adaptation of clinical decision-support tools | To generate evidence on the clinical impact of an electronic health record-enabled prediabetes counseling tool |
| Objectives: Pragmatic | Tool adaptation and identification of issues in tool build before widespread deployment | User feedback for recommendations to tool developers |
| Methods used: Academic and pragmatic | Direct observation, think-aloud, near-live, and live testing; semistructured group interview (postdeployment) | Direct observation, think-aloud, near-live, and live testing |
| Setting: Academic and pragmatic | Laboratory and clinical setting | Laboratory and clinical setting |
| Core team: Academic and pragmatic | 9 members (expertise: primary care, clinical decision support, informatics, electronic health records, usability, qualitative research, and graphic design) | 6 members (expertise: primary care, health psychology, diabetes education, nutrition, informatics, usability, and graphic design) |
| Number of participants: Academic and pragmatic | Think-aloud=12 clinicians; near-live=12 clinicians (same); live=3 clinicians and 6 encounters; postdeployment=75 clinicians and 14 sites (group interviews) | Think-aloud=7 clinicians; near-live=6 clinicians |
| Data capture: Academic and pragmatic | Note taking, audio recording of sessions, video recordings, and screen capture | Note taking, audio recording of sessions, and screen capture |
| Termination criteria: Academic and pragmatic | Termination with data saturation for current iteration | Termination with data saturation for current iteration |
| Data analysis: Academic | Qualitative thematic analysis by 2 independent coders | Qualitative thematic analysis by 2 independent coders |
| Data analysis: Pragmatic | Thematic analysis of observational field notes | Thematic analysis of observational field notes |
| Output: Academic | Detailed data tables and results reporting | Detailed data tables and results reporting |
| Output: Pragmatic | Summary reports from field notes | Summary reports from field notes |
| Dissemination: Academic | Publication of protocol and usability findings from think-aloud, near-live, and live testing in peer-reviewed journals | Publication of protocol and usability findings from think-aloud and near-live testing in peer-reviewed journals |
| Dissemination: Pragmatic | Research team; electronic health record development team | Research team; electronic health record development team |
| Time frame: Academic | Think-aloud or near-live usability: 16 months from the beginning of data capture to the publication of findings | Think-aloud or near-live usability: 11 months from the beginning of data capture to the publication of findings |
| Time frame: Pragmatic | Think-aloud or near-live usability: 2 months from the beginning of each phase of data capture to the completion of all summary reports | Think-aloud or near-live usability: 1 month from the beginning of each phase of data capture to the completion of all summary reports |