Reza Khajouei1,2, Misagh Zahiri Esfahani3, Yunes Jahani4. 1. Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran. 2. Department of Health Information Management and Technology, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran. 3. Regional Knowledge Hub and World Health Organization Collaborating Centre for HIV Surveillance, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran. 4. Department of Biostatistics and Epidemiology, School of Public Health, Kerman University of Medical Sciences, Kerman, Iran.
Abstract
OBJECTIVE: There are several user-based and expert-based usability evaluation methods that may perform differently according to the context in which they are used. The objective of this study was to compare 2 expert-based methods, heuristic evaluation (HE) and cognitive walkthrough (CW), for evaluating the usability of health care information systems.

MATERIALS AND METHODS: Five evaluators independently evaluated a medical office management system using HE and CW. We compared the 2 methods in terms of the number of identified usability problems, their severity, and the coverage of each method.

RESULTS: In total, 156 problems were identified using the 2 methods. HE identified a significantly higher number of problems related to the "satisfaction" attribute (P = .002). The number of problems identified using CW concerning the "learnability" attribute was significantly higher than the number identified using HE (P = .005). There was no significant difference between the numbers of problems identified by HE across the different usability attributes (P = .232). Results of CW showed a significant difference between the numbers of problems related to the different usability attributes (P < .0001). The average severity of problems identified using CW was significantly higher than that of HE (P < .0001).

CONCLUSION: This study showed that HE and CW do not differ significantly in the number of usability problems identified, but they do differ in the severity of identified problems and in the coverage of some usability attributes. The results suggest that CW would be the preferred method for evaluating systems intended for novice users, and HE for users who have experience with similar systems. However, more studies are needed to support this finding.