| Literature DB >> 35252957 |
Pier-Luc de Chantal, Alexandre Chagnon, Michael Cardinal, Julie Faieta, Alexandre Guertin.
Abstract
Searching the commercial Google Play Store and App Store is one of the most common strategies for discovering mobile applications for digital health, both among consumers and healthcare professionals. However, several studies have suggested a possible mismatch between this strategy and the objective of finding apps in physical and mental health that are both clinically relevant and reliable from a privacy standpoint. This study provides direct evidence of a gap between the five-star user rating system and expert ratings from a curated library of over 1,200 apps that cover both physical and mental health. An objective metric is derived to assess the strength of the user-expert gap for each app, which in turn makes it possible to identify missed opportunities (low user ratings, high expert ratings) and overrated apps (high user ratings, low expert ratings). Implications for practice and care delivery are discussed.
Keywords: app stores; delivery of health care; digital health; expert ratings; mental health; mobile applications; physical health; user ratings
Year: 2022 PMID: 35252957 PMCID: PMC8891373 DOI: 10.3389/fdgth.2022.765993
Source DB: PubMed Journal: Front Digit Health ISSN: 2673-253X
Descriptive statistics for user and expert ratings (n = 1,233).
| Variable | M | SD | Min | Max | Skewness | Kurtosis |
|---|---|---|---|---|---|---|
| User rating | 4.23 | 0.70 | 1.00 | 5.00 | −1.70 | 3.57 |
| Expert rating | 0.00 | 2.59 | −11.18 | 9.39 | 0.27 | 0.58 |
| Data rights | 3.08 | 2.84 | −9 | 10 | −1.30 | 3.07 |
| Usability | 3.59 | 2.42 | −6 | 9 | 0.13 | 0.13 |
| Clinical | 2.21 | 2.49 | −5 | 7 | 0.32 | −0.67 |
| Evidence | 0.34 | 0.72 | 0 | 4 | 2.32 | 5.23 |
The expert rating is the sum of the standardized subscales. M, Mean. SD, Standard deviation.
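The table note above says the expert rating is the sum of the standardized subscales. A minimal sketch of that aggregation, assuming plain z-scoring of each subscale across apps (the authors' exact scoring procedure is not given in this record, and the sample values below are toy data, not study data):

```python
from statistics import mean, pstdev

def expert_ratings(subscale_scores):
    """Sum of z-scored subscales, one total per app.

    subscale_scores: dict mapping subscale name -> list of raw scores,
    one per app. Standardizing each subscale to mean 0, SD 1 and summing
    mirrors the note "the expert rating is the sum of the standardized
    subscales"; using the population SD here is an assumption.
    """
    names = list(subscale_scores)
    n_apps = len(subscale_scores[names[0]])
    totals = [0.0] * n_apps
    for name in names:
        scores = subscale_scores[name]
        m, sd = mean(scores), pstdev(scores)
        for i, s in enumerate(scores):
            totals[i] += (s - m) / sd
    return totals

# Toy scores for three hypothetical apps (illustrative only).
demo = {
    "data_rights": [3, 8, -2],
    "usability":   [4, 6, 1],
    "clinical":    [2, 5, 0],
    "evidence":    [0, 2, 1],
}
ratings = expert_ratings(demo)
```

Because each z-scored subscale sums to zero across apps, the totals also sum to zero, which is consistent with the expert-rating mean of 0.00 in the table.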
Comparison of user and expert ratings by health domain (physical health, mental health), platform (Android, iOS) and quartile in the number of reviews (1st, 2nd, 3rd, 4th).
| Variable | Physical health | Mental health | Android | iOS | 1st quartile | 2nd quartile | 3rd quartile | 4th quartile |
|---|---|---|---|---|---|---|---|---|
| User-expert gap | −0.01 (1.34) | 0.02 (1.42) | 0.18 (1.28) | −0.18 (1.45) | −0.25 (1.68) | 0.13 (1.34) | 0.18 (1.31) | −0.06 (1.02) |
| User rating | 4.21 (0.71) | 4.24 (0.69) | 4.01 (0.60) | 4.36 (0.76) | 4.18 (0.89) | 4.03 (0.70) | 4.20 (0.64) | 4.50 (0.39) |
| Expert rating | −0.09 (2.44) | 0.10 (2.76) | −0.05 (2.59) | 0.05 (2.58) | −0.82 (2.66) | −0.38 (2.49) | 0.35 (2.48) | 0.85 (2.40) |
| Data rights | 3.08 (2.63) | 3.08 (3.08) | 3.08 (2.80) | 3.08 (2.87) | 2.07 (3.45) | 2.85 (2.83) | 3.40 (2.52) | 3.98 (2.02) |
| Usability | 3.62 (2.46) | 3.55 (2.38) | 3.48 (2.41) | 3.70 (2.44) | 2.90 (2.32) | 3.02 (2.39) | 3.93 (2.35) | 4.52 (2.27) |
| Clinical | 2.11 (2.39) | 2.35 (2.60) | 2.19 (2.49) | 2.23 (2.49) | 2.14 (2.47) | 2.31 (2.42) | 2.40 (2.65) | 2.01 (2.40) |
| Evidence | 0.30 (0.70) | 0.39 (0.75) | 0.35 (0.72) | 0.34 (0.73) | 0.23 (0.61) | 0.27 (0.65) | 0.36 (0.73) | 0.51 (0.85) |
The expert rating is the sum of the standardized subscales. Each cell represents Mean (SD). 1st quartile: 1–21 reviews; 2nd: 22–174 reviews; 3rd: 175–3,691 reviews; 4th: 3,705–2,409,239 reviews.
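The user-expert gap row above can be sketched as the difference between the standardized user rating and the standardized expert rating, with positive gaps flagging apps users rate more favourably than experts (overrated) and negative gaps the reverse (missed opportunities). This operationalization is an assumption inferred from the abstract, not stated explicitly in this record, and the values below are toy data:

```python
from statistics import mean, pstdev

def user_expert_gap(user, expert):
    """z(user) - z(expert), one gap per app (assumed operationalization)."""
    def z(xs):
        m, sd = mean(xs), pstdev(xs)
        return [(x - m) / sd for x in xs]
    return [u - e for u, e in zip(z(user), z(expert))]

# Five hypothetical apps (illustrative only, not study data).
gaps = user_expert_gap([4.8, 4.5, 3.0, 4.9, 2.5],
                       [1.2, 0.5, 2.0, -3.0, -0.7])
```

A difference of two z-scores has mean 0 and an SD of at most √2 ≈ 1.41, which is consistent with the near-zero means and SDs around 1.3-1.4 reported in the gap row.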
Regression analysis summary for expert rating subscales and number of reviews predicting user rating.
| Predictor | b | 95% CI | β | t | p | r | p |
|---|---|---|---|---|---|---|---|
| (Intercept) | 4.11 | (4.03, 4.18) | | 106.32 | <0.001 | | |
| Data rights | 0.02 | (0.01, 0.03) | 0.08 | 2.62 | 0.009 | 0.09 | 0.003 |
| Usability | 0.03 | (0.01, 0.05) | 0.10 | 3.40 | <0.001 | 0.10 | <0.001 |
| Clinical | −0.01 | (−0.03, 0.01) | −0.04 | −1.34 | 0.18 | −0.002 | 0.95 |
| Evidence | −0.08 | (−0.12, −0.01) | −0.07 | −2.45 | 0.02 | −0.06 | 0.06 |
| Nb of reviews | 0.00 | (0.00, 0.00) | 0.08 | 2.77 | 0.006 | 0.08 | 0.002 |
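A regression like the one summarized above can be sketched as ordinary least squares of the user rating on the four expert subscales plus the number of reviews. The generating coefficients and distributions below are hypothetical toy values (loosely shaped after the descriptive statistics), not the study's data or model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # toy sample; the study used n = 1,233

# Hypothetical predictors roughly mimicking the table's scales.
X = np.column_stack([
    rng.normal(3.1, 2.8, n),   # data rights
    rng.normal(3.6, 2.4, n),   # usability
    rng.normal(2.2, 2.5, n),   # clinical
    rng.normal(0.3, 0.7, n),   # evidence
    rng.lognormal(5, 2, n),    # number of reviews (heavily right-skewed)
])
y = 4.1 + 0.02 * X[:, 0] + 0.03 * X[:, 1] + rng.normal(0, 0.6, n)

# OLS via least squares with an explicit intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

`coef[0]` is the intercept and `coef[1:]` the unstandardized slopes, corresponding to the b column of the table.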
Figure 1. Distribution of standardized user and expert ratings. This figure shows user-expert gaps for each level of 5-star ratings and identifies missed opportunities and overrated apps.