| Literature DB >> 32509971 |
David M Levine1,2, Zoe Co1, Lisa P Newmark3, Alissa R Groisser1, A Jay Holmgren4, Jennifer S Haas2,5, David W Bates1,2,3.
Abstract
Mobile health applications ("apps") have rapidly proliferated, yet their ability to improve outcomes for patients remains unclear. A validated tool that addresses apps' potentially important dimensions has not been available to patients and clinicians. The objective of this study was to develop and preliminarily assess a usable, valid, and open-source rating tool to objectively measure the risks and benefits of health apps. We accomplished this by using a Delphi process, through which we constructed an app rating tool called THESIS that could promote informed app selection. We used a systematic process to select chronic disease apps with ≥4 stars and <4 stars and then rated them with THESIS to examine the tool's interrater reliability and internal consistency. We rated 211 apps, finding they performed fair overall (3.02 out of 5 [95% CI, 2.96-3.09]), but especially poorly for privacy/security (2.21 out of 5 [95% CI, 2.11-2.32]), interoperability (1.75 out of 5 [95% CI, 1.59-1.91]), and availability in multiple languages (1.43 out of 5 [95% CI, 1.30-1.56]). Ratings using THESIS had fair interrater reliability (κ = 0.3-0.6) and excellent scale reliability (α = 0.85). Correlation with traditional star ratings was low (r = 0.24), suggesting THESIS captures issues beyond general user acceptance. Preliminary testing of THESIS suggests apps that serve patients with chronic disease could perform much better, particularly in privacy/security and interoperability. THESIS warrants further testing and may guide software developers and policymakers to further improve app performance, so apps can more consistently improve patient outcomes.
Keywords: Diagnosis; Health policy; Health services; Therapeutics
Year: 2020 PMID: 32509971 PMCID: PMC7242452 DOI: 10.1038/s41746-020-0268-9
Source DB: PubMed Journal: NPJ Digit Med ISSN: 2398-6352
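The abstract reports two standard reliability statistics: Cohen's kappa (κ) for agreement between raters and Cronbach's alpha (α) for internal consistency of the scale. As a minimal sketch of how such figures are computed (the function names and toy data here are illustrative, not the study's actual code or ratings):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for an (n_apps, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def cohens_kappa(r1: list, r2: list) -> float:
    """Cohen's kappa for two raters' categorical ratings of the same apps."""
    n = len(r1)
    labels = sorted(set(r1) | set(r2))
    # Observed agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement, from each rater's marginal label frequencies
    pe = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)
    return (po - pe) / (1 - pe)  # undefined when pe == 1
```

For example, two raters agreeing on 3 of 4 apps with balanced marginals yield κ = 0.5, in the "fair to moderate" range reported for THESIS.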
Mobile health app rating domains and criteria.
| Domain | Criteria |
|---|---|
| Transparency | Cost of app; Consent; Accuracy of app store description |
| Health content | Appropriate measurement; Appropriate interpretation of data; Quality of information; Potential for harm; Literacy level; Presentation of information |
| Technical content | Software performance/stability; Interoperability; Bandwidth; Application size |
| Security/Privacy | Protection against theft and viruses; Authentication; Data sharing; Maintenance; Signaling of breaches; Anonymization |
| Usability | Installation and setup; Functionality; Aesthetics; Customization/tailoring; Ease of use for users with low literacy and numeracy; Availability in multiple languages |
| Subjective | Recommend app; Overall star rating |
Refer to Supplementary Table 3 for detailed descriptions of each individual item.
Fig. 1 App selection (all categories combined).
We selected the first four apps in each disease category. Not all apps were rated due to resource constraints. Please refer to Supplementary Figs. 1–3 for individual category selection details.
Fig. 2 App ratings.
a Overall app ratings. b App ratings by category. The error bars represent 95% confidence intervals. See Supplementary Table 4 for detailed ratings.
Fig. 3 Path to building and evaluating THESIS.
The steps taken in the development and evaluation of THESIS. Apps were drawn from a systematic keyword search in the Apple and Google app stores (n = 3191).