| Literature DB >> 26863986 |
Adam C Powell, John Torous, Steven Chan, Geoffrey Stephen Raynor, Erik Shwarts, Meghan Shanahan, Adam B Landman.
Abstract
BACKGROUND: There are over 165,000 mHealth apps currently available to patients, but few have undergone an external quality review. Furthermore, no standardized review method exists, and little has been done to examine the consistency of the evaluation systems themselves.
Keywords: evaluation studies; health apps; mental health; mobile applications; ratings
Year: 2016 PMID: 26863986 PMCID: PMC4766362 DOI: 10.2196/mhealth.5176
Source DB: PubMed Journal: JMIR Mhealth Uhealth ISSN: 2291-5222 Impact factor: 4.773
mHealth app quality measures evaluated.
| Measure | Source | Range | Definitions |
| Ease of use | ADAA | 1-5 | 5=very easy; 1=very difficult |
| Effectiveness (Perceived) | ADAA | 1-5 | 5=highly likely; 1=highly unlikely |
| Personalization | ADAA | 1-5 | 5=complete ability; 1=no ability |
| Interactiveness/Feedback | ADAA | 1-5 | 5=very interactive, helpful feedback; 1=not interactive, no feedback |
| Basis of research | PsyberGuide (& ADAA^a) | 0-3 (and 1-5^a) | 3=data from at least one randomized controlled trial; 2=data from at least one non-randomized non-controlled trial; 1=data from an open study; 0=no data provided |
| Source of funding for research | PsyberGuide | 0-2 | 2=research supported exclusively by government agency or non-profit organizations; 1=research supported in full or part by for-profit organizations; 0=no data provided |
| Specificity of intervention | PsyberGuide | 0-3 | 3=the application is designed to improve a specific condition or symptom; 2=the application is designed to help with non-specific items such as “mood” or “brain fitness”; 1=the application is designed to track and monitor items such as symptom severity or medication; 0=no data provided |
| Number of consumer ratings | PsyberGuide | 1-3 | 3=ratings exist from >50 users; 2=ratings exist from 25-50 users; 1=fewer than 25 user ratings |
| Product advisory support | PsyberGuide | 0-1 | 1=yes; 0=no |
| Software support | PsyberGuide | 0-1 | 1=yes; 0=no |
| Password protection | Kharrazi et al (2012) | 0-1 | 1=yes; 0=no |
| Import/export capabilities | Kharrazi et al (2012) | 0-1 | 1=yes; 0=no |
| Uploaded by health care agency | Pandey et al (2012) | 0-1 | 1=yes; 0=no |
| Encryption | Powell et al (2014) | 0-1 | 1=yes; 0=no |
| Explicit privacy policy | Powell et al (2014) | 0-1 | 1=yes; 0=no |
| Effectiveness tested (claimed by app) | Powell et al (2014) | 0-1 | 1=yes; 0=no |
| Developer contactable | Lewis | 0-1 | 1=yes; 0=no |
| Advertising policy stated | Lewis | 0-1 | 1=yes; 0=no |
| Errors and performance issues | Martinez-Perez et al (2013) | 0-1 | 1=yes; 0=no |
| Continuous availability of data | Martinez-Perez et al (2013) | 0-1 | 1=yes; 0=no |
| Discloses potential risks | Ferrero-Álvarez-Rementería et al (2013) | 0-1 | 1=yes; 0=no |
| Offers technical support or help | Ferrero-Álvarez-Rementería et al (2013) | 0-1 | 1=yes; 0=no |
^a 1=no research evidence; 5=ample research evidence; ADAA scale not used.
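The table above amounts to a scoring rubric: each measure comes from a named source and has a bounded numeric range. As a minimal illustration only (not tooling from the study), the sketch below encodes a few of these measures as a small data structure and checks that a recorded score falls within the measure's defined range; the class and constant names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RatingMeasure:
    """One quality measure from the table above (hypothetical encoding)."""
    name: str
    source: str
    min_score: int
    max_score: int

    def validate(self, score: int) -> int:
        """Return the score if it lies within the measure's defined range."""
        if not self.min_score <= score <= self.max_score:
            raise ValueError(
                f"{self.name}: score {score} outside range "
                f"{self.min_score}-{self.max_score}"
            )
        return score

# A few measures transcribed from the table rows above.
MEASURES = [
    RatingMeasure("Ease of use", "ADAA", 1, 5),
    RatingMeasure("Basis of research", "PsyberGuide", 0, 3),
    RatingMeasure("Password protection", "Kharrazi et al (2012)", 0, 1),
]

if __name__ == "__main__":
    ease_of_use = MEASURES[0]
    print(ease_of_use.validate(4))  # within the 1-5 range -> prints 4
```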
Interrater reliability of depression and smoking apps by measure.
| Measure | Krippendorff’s alpha (Aggregate) | Krippendorff’s alpha (Depression) | Krippendorff’s alpha (Smoking) | Completeness, % |
| Interactiveness/Feedback | 0.69 | 0.69 | 0.67 | 93 |
| Password protection | 0.65 | 0.75 | 0.37 | 93 |
| Uploaded by health care agency | 0.63 | 0.60 | 1.00 | 93 |
| Number of consumer ratings | 0.59 | 0.74 | 0.42 | 93 |
| Explicit privacy policy | 0.55 | 0.73 | 0.38 | 93 |
| Encryption | 0.54 | 0.51 | 1.00 | 92 |
| Basis of research | 0.53 | 0.55 | 0.44 | 93 |
| Product advisory support | 0.52 | 0.55 | 0.44 | 93 |
| Offers technical support or help | 0.45 | 0.50 | 0.35 | 93 |
| Software support | 0.44 | 0.42 | 0.44 | 93 |
| Import/export capabilities | 0.42 | 0.47 | 0.04 | 93 |
| Developer contactable | 0.42 | 0.38 | 0.36 | 93 |
| Personalization | 0.42 | 0.38 | 0.49 | 93 |
| Specificity of intervention | 0.36 | 0.33 | -0.14 | 91 |
| Source of funding for research | 0.36 | 0.22 | 0.59 | 92 |
| Discloses potential risks | 0.31 | 0.23 | 0.00 | 93 |
| Effectiveness (Perceived) | 0.30 | 0.43 | 0.12 | 93 |
| Continuous availability of data | 0.27 | 0.22 | 0.09 | 93 |
| Effectiveness tested (claimed by app) | 0.21 | 0.11 | 0.34 | 93 |
| Ease of use | 0.18 | 0.09 | 0.23 | 93 |
| Advertising policy stated | 0.16 | -0.04 | 0.20 | 93 |
| Errors and performance issues | 0.15 | 0.28 | 0.03 | 93 |
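The alpha values above summarize agreement between raters on each measure. As a rough illustration only (not the authors' analysis code), the sketch below computes nominal-level Krippendorff's alpha from a raters-by-apps matrix in which missing ratings are allowed; the function and variable names are hypothetical, and in practice an established implementation (for example the `krippendorff` package on PyPI) would normally be preferred.

```python
def krippendorff_alpha_nominal(ratings):
    """Krippendorff's alpha for nominal data.

    `ratings` is a list of rows, one per rater; each row has one entry per
    unit (here, per app), with None marking a missing rating.
    """
    n_units = len(ratings[0])
    # Keep, per unit, the values assigned by raters who actually rated it.
    units = []
    for u in range(n_units):
        vals = [row[u] for row in ratings if row[u] is not None]
        if len(vals) >= 2:  # units with <2 ratings carry no pairing information
            units.append(vals)

    # Coincidence counts: how often value pairs (c, k) co-occur within a unit.
    coincidences = {}
    totals = {}
    for vals in units:
        m = len(vals)
        for i, c in enumerate(vals):
            for j, k in enumerate(vals):
                if i != j:
                    coincidences[(c, k)] = coincidences.get((c, k), 0) + 1 / (m - 1)
    for (c, _k), weight in coincidences.items():
        totals[c] = totals.get(c, 0) + weight
    n = sum(totals.values())

    # Observed vs expected disagreement (nominal delta: 0 if c == k, else 1).
    d_observed = sum(w for (c, k), w in coincidences.items() if c != k)
    d_expected = sum(
        totals[c] * totals[k] for c in totals for k in totals if c != k
    ) / (n - 1)
    if d_expected == 0:
        return 1.0  # no observed variation; alpha is strictly undefined here
    return 1 - d_observed / d_expected

# Toy example: two raters scoring five apps on a yes/no measure (None = not rated).
rater_1 = [1, 0, 1, 1, None]
rater_2 = [1, 0, 0, 1, 1]
print(round(krippendorff_alpha_nominal([rater_1, rater_2]), 2))
```

Apps rated by fewer than two raters contribute nothing to the coincidence counts, so they are dropped before the alpha calculation in this sketch.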
Key lessons learned for clinicians, patients, and app reviewers.
| For clinicians | For patients | For app reviewers |
| Interpret mHealth app reviews cautiously, especially if measures have not been validated | Interpret mHealth app reviews cautiously | Use previously validated measures with high interrater reliability, if available |
| Consider reviewing apps personally before recommending apps to patients | Consult with your health care provider or another trusted source | Train reviewers on the measures using standardized specifications |
| Consider discussing apps with colleagues | | Involve patients or reviewers with the condition of interest in the reviews |
| | | Use clinical judgment as a tool for evaluating apps |
| | | Record the name and version of the app being reviewed, as well as the date of the review |