Elisa Puigdomènech1,2,3, Noemi Robles2,3,4, Mariona Balfegó5, Guillem Cuatrecasas5, Alberto Zamora6,7, Francesc Saigí-Rubió8,9, Guillem Paluzié6, Montserrat Moharra10,11, Carme Carrion2,3,4,12.
Abstract
BACKGROUND: Digital health interventions and mobile technologies can help to reduce the rates of obesity and overweight conditions. Although weight management apps are widely used, they usually lack professional content and evaluation, so the quality of these apps cannot be guaranteed. The EVALAPPS project aims to design and validate a tool to assess the safety and effectiveness of health-related apps whose main goal is to manage and prevent obesity and overweight conditions.
Keywords: codesign; mHealth; obesity; overweight; participatory research; pilot testing
Year: 2022 PMID: 35564781 PMCID: PMC9103883 DOI: 10.3390/ijerph19095387
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
Main themes and ideas related to the content of the application.
|
| Login-free access: |
|
Avoids requesting personal data (username, e-mail address), which can be a barrier to registration. However, it can be difficult to control multiple evaluations from the same user if he or she uses different devices.
|
| Login access: |
|
Allows each user to be identified and useful information to be collected for the exploitation of the evaluation results. Authorship and objectives must be stated very clearly, and it must be ensured that login is not associated with any advertising. It is highly recommended that the app remember the password automatically if the user so requests.
|
| Gathering the evaluator's profile (background, expertise, …) can be used to: |
|
To assign the dimensions to be evaluated.
|
If the evaluation is conducted through certain dimensions according to the evaluator's profile, certain information about the evaluator must be collected. An introductory question can be included regarding the purpose of the evaluation, for example: "What is the purpose of the evaluation?" (the evaluator can check more than one option): as a health professional / as a user / as an ICT professional / as a design professional / other use. If the evaluator decides which criteria and dimensions he or she will evaluate regardless of his or her background, this information will not be necessary.
|
To weight the user-provided responses, depending on the profile.
|
Evaluator information can be used to weight responses based on the evaluator's profile, for example by giving more weight to the responses of healthcare professionals in clinical dimensions.
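The profile-based weighting described above can be sketched as a weighted mean. This is an illustrative sketch only: the profile names and weight values below are hypothetical assumptions, not part of the EVALAPPS design.

```python
# Illustrative sketch of profile-based response weighting. The profile
# names and weight values are hypothetical assumptions, not taken from
# the EVALAPPS specification.
PROFILE_WEIGHTS = {
    "health_professional": 3.0,  # counts more in clinical dimensions
    "user": 1.0,
    "ict_professional": 1.0,
    "design_professional": 1.0,
}

def weighted_dimension_score(responses):
    """Weighted mean of (profile, score) pairs for one dimension."""
    total = sum(PROFILE_WEIGHTS[profile] * score for profile, score in responses)
    weight = sum(PROFILE_WEIGHTS[profile] for profile, _ in responses)
    return total / weight

# Two clinicians rating 10 outweigh one lay user rating 4:
responses = [("health_professional", 10), ("health_professional", 10), ("user", 4)]
print(round(weighted_dimension_score(responses), 2))  # 9.14
```

With equal weights this reduces to a plain mean, so the weighting scheme can be tuned per dimension without changing the aggregation logic.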
|
To use the results of the evaluation for research purposes.
|
The analysis of results can be done according to variables such as sex, age group, and the evaluator's profile (user, health professional, …).
|
This exploitation will be indispensable during the pilot and should be considered for the commercial version. For the pilot, it was noted that some of this information can be obtained outside the application by linking each user's data to their medical history.
|
Beyond the pilot, the need to ask users of the application for sociodemographic information should be assessed. This decision should weigh the advantages and disadvantages of collecting information about users: on the one hand, it makes information available for research; on the other hand, it can be a barrier to the use of the application.
Finally, the question arises of when the sociodemographic data should be collected. While some workshop participants felt these questions should be asked at the end of the evaluation, others argued for asking them first, to minimize the risk of losing this information if the person leaves the application before completing the evaluation and never answers the profile questions.
| Selection of the app to be evaluated |
|
Researchers will select the weight-control apps to be evaluated and assign them to each participant. Technical aspects, such as the mobile operating system version of the person performing the evaluation (iOS or Android), should be considered.
|
The selection of apps to be evaluated after the pilot is finished can be displayed in a carousel format, differentiating between recommended apps (e.g., apps with more than n downloads) and other apps. Moreover, the evaluation tool could identify the applications that the user has installed on their mobile phone and incorporate them into the list of applications to evaluate. |
|
The debate also arises as to whether any application should be eligible for evaluation, or whether the applications to be evaluated should be pre-determined from within the tool itself. Leaving the possibility open carries the risk of incorporating applications that have nothing to do with health (e.g., Pokémon GO). On the other hand, since the objective of the tool is to empower the user, it was argued that users should be able to submit any application in order to check whether an app they are using fails to meet the basic requirements set by the evaluation.
| Content of the evaluation |
|
Brief information on the objective and content of the evaluation should be provided to the evaluator. The evaluation is proposed as a list of the dimensions to be evaluated, with progress tracking of the responses given, so that if the evaluation is not completed, the tool can be re-entered later and the pending items evaluated.
|
It is proposed to undertake the assessment through a combination of yes/no questions and Likert-scale questions. To avoid monotony, different strategies are proposed:
| -change the way scales are presented, combining different types of icons (stars, faces, etc.); |
| -use different color scales for each dimension; |
| -use icons related to the content of the dimension to be evaluated; |
| -when a dimension's evaluation is complete, an intermediate screen appears with a chart summarizing the scores given in that dimension throughout the evaluation process.
|
The number of criteria to be evaluated is very high, so different options were raised to make answering easier:
| -Present questions in random order, to prevent the questions presented at the end from being answered automatically and without attention;
| -Start with the questions that are easiest to answer, increasing complexity as the assessment progresses;
| -Start with the questions most relevant to the assessment, to ensure that these obtain the most answers, and then ask the rest.
|
In relation to navigation, different options are proposed depending on the number of criteria to be evaluated:
| -Distribution of dimensions by tabs;
| -A list of criteria with scroll navigation;
| -It was also suggested to show information on each criterion to be evaluated (using a pop-up screen) next to the corresponding question.
|
Re-evaluation of the app is also a possibility after a certain period. To do this, the tool would generate an automatic message between 15 days and 1 month after the user has evaluated the application, inviting him or her to re-evaluate it.
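As a minimal sketch, the 15-day to 1-month reminder window described above could be implemented as a simple date offset. The `reminder_date` helper and its 21-day default delay are assumptions for illustration, not part of the EVALAPPS tool.

```python
# Minimal sketch of the re-evaluation reminder described above: the tool
# sends an automatic message between 15 days and 1 month after the
# evaluation. The 21-day default delay is an arbitrary assumption.
from datetime import date, timedelta

def reminder_date(evaluated_on: date, delay_days: int = 21) -> date:
    """Date on which the re-evaluation message should be sent."""
    if not 15 <= delay_days <= 30:
        raise ValueError("delay must be between 15 days and 1 month")
    return evaluated_on + timedelta(days=delay_days)

print(reminder_date(date(2022, 5, 1)))  # 2022-05-22
```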
| Report |
|
Once the evaluation is carried out, it is proposed to provide a report with different levels of information: |
| -Overall score; |
| -Score disaggregated by dimensions; |
| -Comparison between user and median score obtained during the evaluation process (e.g., using spider charts). |
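The three report levels listed above could be assembled as follows. This is a hedged sketch: the `build_report` function and its field names are hypothetical, and the example scores are invented (the dimension names follow Table 2).

```python
# Hypothetical sketch of the three-level report proposed above: overall
# score, per-dimension scores, and per-dimension medians across all
# evaluations (e.g., to feed a spider chart). Names and figures invented.
from statistics import median

def build_report(user_scores, all_evaluations):
    """user_scores: {dimension: score}; all_evaluations: list of such dicts."""
    return {
        "overall": sum(user_scores.values()),
        "by_dimension": dict(user_scores),
        "median_by_dimension": {
            dim: median(ev[dim] for ev in all_evaluations)
            for dim in user_scores
        },
    }

user = {"Usability": 20, "Reliability": 5}
others = [{"Usability": 10, "Reliability": 4},
          {"Usability": 18, "Reliability": 6},
          user]
report = build_report(user, others)
print(report["overall"])                           # 25
print(report["median_by_dimension"]["Usability"])  # 18
```

Pairing each user score with the cohort median gives exactly the user-versus-median comparison the spider chart would display.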
|
The final evaluation report could be conceived as an incentive for the user (something to take away from the process). Participants also indicated the need to offer some other type of incentive once the evaluation is completed, for example a report profiling the most recommended applications, or a detailed report on the evaluated application.
|
| Gamification |
|
The commercial version of the application could incentivize its use through gamification techniques.
|
The main incentive of such an application could be the provision of information about the evaluations carried out by other people.
Figure 1. Screenshots of the Evalapps app.
Sociodemographic and technological characteristics of the EVALAPPS pilot testing.
| | n | (%) |
|---|---|---|
| Gender | ||
| Female | 23 | (74.2) |
| Male | 8 | (25.8) |
| Age group | ||
| 18–25 | 9 | (29.0) |
| 26–35 | 5 | (16.0) |
| 36–45 | 9 | (29.0) |
| 46–55 | 2 | (6.5) |
| 56–65 | 4 | (13.0) |
| >65 | 2 | (6.5) |
| Operating system | ||
| Android | 14 | (45.2) |
| iOS | 17 | (54.8) |
| App evaluated | ||
| MyFitnessPal | 10 | (41.7) |
| Yazio | 6 | (25.0) |
| MyPlate | 8 | (33.3) |
| Language used when using the EVALAPPS tool | ||
| Catalan | 3 | (12.5) |
| Spanish | 20 | (83.3) |
| English | 1 | (4.2) |
Dimension and total score for each evaluated app.
| | Yazio | | MyPlate | | MyFitnessPal | |
|---|---|---|---|---|---|---|
| Dimension | Mean (SD) | Min–Max | Mean (SD) | Min–Max | Mean (SD) | Min–Max |
| App Purpose | 8.5 (3.4) | 4–12 | 7.3 (3.6) | 3–12 | 9.1 (2.7) | 2–12 |
| Development | 1 (1.1) | 0–2 | 0.7 (0.5) | 0–1 | 1 (0.8) | 0–2 |
| Reliability | 4.6 (4.6) | 0–11 | 2.9 (2.1) | 0–6 | 4.6 (3.3) | 0–11 |
| Usability | 17.8 (10.9) | 0–27 | 13.8 (9.7) | 0–27 | 15 (10.1) | 0–27 |
| Health indicators | 6.3 (3.9) | 0–11 | 4.8 (4.3) | 0–10 | 4.2 (4.2) | 0–10 |
| Clinical effectiveness | 0.8 (1.16) | 0–3 | 0.1 (0.3) | 0–1 | 1 (1.6) | 0–4 |
| Security/Privacy | 5.5 (5.4) | 0–12 | 1.5 (2.9) | 0–8 | 2.9 (3.6) | 0–9 |
| Total | 44.6 (23.9) | 4–65 | 31.3 (17.6) | 3–55 | 37.8 (17.6) | 10–62 |
SD: Standard Deviation; Min: Minimum; Max: Maximum.