
What Works Best to Engage Participants in Mobile App Interventions and e-Health: A Scoping Review.

Ingrid Oakley-Girvan1, Reem Yunis1, Michelle Longmire1, Jessey Schwartz Ouillon1.   

Abstract

Background: Despite the growing popularity of mobile app interventions, specific engagement components of mobile apps have not been well studied.
Methods: The objectives of this scoping review are to determine which components of mobile health intervention apps encouraged or hindered engagement, and examine how studies measured engagement.
Results: A PubMed search on March 5, 2020 yielded 239 articles that featured the terms engagement, mobile app/mobile health, and adult. After applying exclusion criteria, 54 studies were included in the final analysis.
Discussion: Common app components associated with increased engagement included personalized content/feedback, data visualization, reminders/push notifications, educational information/material, logging/self-monitoring functions, and goal-setting features. In contrast, social media integration, social forums, poor app navigation, and technical difficulties appeared to contribute to lower engagement rates or decreased usage. Notably, the review revealed great variability in how engagement with mobile health apps is measured, owing to a lack of established processes.
Conclusion: There is a critical need for controlled studies to provide guidelines and standards to help facilitate engagement and its measurement in research and clinical trial work using mobile health intervention apps.

Keywords:  e-Health; m-Health; smart phones; telehealth; telemedicine

Year:  2021        PMID: 34637651      PMCID: PMC9231655          DOI: 10.1089/tmj.2021.0176

Source DB:  PubMed          Journal:  Telemed J E Health        ISSN: 1530-5627            Impact factor:   5.033


Introduction

Enhancing participant engagement is considered a key priority for wellness and health care, especially as health care undergoes a shift toward the integration of digital technologies (e.g., mobile apps, health care monitors, and online portals with their consumer interfaces).[1,2] Technological systems play a critical role in enhancing participant engagement.[1,2] Among urban and low-income mothers, the use of smart-device technology for communication was a particularly important contributor to higher retention in longitudinal studies.[3] Providing digital health tools has not only led to an increase in study participation adherence rates,[4] but it has also contributed to measurable improvements in health care outcomes across several conditions. For instance, greater patient activation in their health care improved patient adherence to treatment prescriptions.[5] Participants' use of web portals to augment treatment of diabetes demonstrated improved glycemic control across multiple studies.[6-8] Other studies have seen improvements in participants with HIV,[9] with coronary artery disease,[10] and with depression,[11-13] highlighting how impactful the implementation of these tools can be across different clinical populations. Schoeppe et al.[14] emphasized common strategies that successful mobile interventions often use, such as goal setting, self-monitoring, and performance feedback in their app design. To our knowledge, however, there has not been a scoping review of the specific components of mobile intervention apps that increase engagement. Common across all digital health tools is a focus on increased patient engagement and “empowerment,” which results from several qualities inherent in these tools.
Most of these technological systems improve patients' communication with and access to health care providers,[1,2,15] and provide patients with more comprehensive information about their health on demand.[2,15] While these qualities are common across successful tools and play a large part in improving patient self-management and decreasing stress,[2] improved engagement is not guaranteed. Furthermore, measuring engagement is a challenge that has likely contributed to our lack of knowledge about which app components effectively increase this important metric. There are now several measures that quantify the engagement patients feel toward the digital tools and apps being developed,[2,15] but these are not widely used, and engagement measurements are not standardized across studies. Examples of such measures are the Patient Activation Measure (PAM[16]), the Mobile App Rating Scale (MARS[17]), and the Patient Health Engagement scale (PHE-s[18]). These measures create a quantifiable, standardized method by which researchers can measure user engagement during program development, and are important considerations when creating new digital tools for patients and clinical research participants. To support the shift toward mobile interventions and realize the benefits of mobile apps, this review addresses the following questions: What are the components or elements of mobile interventions that successfully increase participant engagement, and which may hinder engagement? How do studies measure engagement? Answering these questions can inform how future work might standardize engagement efforts across apps and app features.

Methods

A PubMed search with the following criteria was conducted on March 5, 2020: (engagement[Title/Abstract]) AND (mobile app) OR (mobile health) AND (adult). To be included in the analysis, articles must have recruited participants who used a mobile intervention, and articles must have examined the usage of specific aspect(s) or component(s) of the mobile app, whether through measurable app metrics, through participant feedback, or through author conjecture. The participant population must have also consisted of patients or of individuals seeking treatment for a condition; articles examining health care providers, administrators, or employees as the participant population were not included in the analysis. Articles were excluded if they did not have participants use a mobile app intervention (i.e., design/protocol/methods-only articles), did not provide insight into which particular feature(s) were engaging for participants, were not written in English, and/or were duplicates. This review article followed PRISMA guidelines for scoping review articles. There is no review protocol for this article. See Figure 1 for the PRISMA-based flow chart.
Fig. 1.

PRISMA flow chart of articles included in this review. From: Moher et al.[80]

Before the original PubMed search, the process for data collection and analysis was agreed upon by the study team so that each article was screened for the same information and data were collected the same way. Inclusion and exclusion criteria were standardized as described above, and the reason for exclusion was recorded for each article excluded from analysis. Data were extracted from each published article's manuscript. Information on an article's country of origin was collected from the countries listed for each author. We also classified whether an article's outcome measure(s) for engagement were test based or opinion based: test-based outcome measures examined app usage in a measurable way (i.e., outcomes that were quantifiable and based on measurements rather than on participant feedback), whereas opinion-based outcome measures had no quantifiable outcome (i.e., outcomes based on subjectivity, participants' opinions, or qualitative measures). Additional information collected on each article included whether the study was randomized (as a binary yes/no), the study's participant sample size, and the length of app usage in the study. Length of app usage was converted into weeks to standardize reporting: usage time reported in months was converted to weeks by dividing the number of months by 12 and multiplying by 52 (the number of weeks in a year), and usage time reported in days was converted to weeks by dividing the number of days by 7.
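The unit conversions described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the original study; the helper names are our own.

```python
# Illustrative sketch of the duration standardization described above:
# usage durations reported in months or days are converted to weeks.

def months_to_weeks(months: float) -> float:
    """Convert months to weeks: (months / 12 months-per-year) * 52 weeks-per-year."""
    return months / 12 * 52

def days_to_weeks(days: float) -> float:
    """Convert days to weeks."""
    return days / 7

# e.g., a 5-month intervention standardizes to ~21.67 weeks,
# and a 30-day intervention to ~4.29 weeks.
print(round(months_to_weeks(5), 2))  # 21.67
print(round(days_to_weeks(30), 2))   # 4.29
```

This explains the fractional week values (e.g., 21.67, 17.33, 4.28) that appear in Table 1: they result from converting month- or day-denominated study durations.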
To examine the features deemed “engaging” for participants and the methods by which each article examined engagement, we collected details on engagement measurements, retention rates, clinical changes, and the specific components identified as “engaging” or “not engaging.” We categorized each article's method for measuring engagement into one of the following: did not measure, based on participant self-report (no measurements taken by the study team), app usage logs, log-in frequency, survey/lesson completion rate, number/length of app sessions, use of the MARS[17], use of the System Usability Scale (SUS[19]), or “Other.” Each study's retention rate was reported (as a percent) based on the article's reported retention/attrition rate. If it was presented in another format or missing, we calculated the overall retention rate from the enrollment numbers: the number of participants enrolled to use the mobile app served as the denominator, and the number of participants still enrolled at the last time point in the study served as the numerator. We also recorded whether the study reported or found any clinical changes, classified into one of the following categories: not reported, no differences, N/A, majority of participants self-reported the app as helpful/useful, trend, or yes (if differences were found). Lastly, we listed the important app components that were most used by participants in the study and/or were associated with an increase in app engagement. Reporting on studies' potential biases was inconsistent, as not all articles assessed were randomized trials; however, as described above, the outcome measures used, whether the studies were randomized, and the methods of measuring engagement were all collected to inform the quality of each article's results.
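The fallback retention calculation described above (final enrollment over initial enrollment, as a percent) can be expressed as a short function. This is a hypothetical sketch of the arithmetic, not the authors' code.

```python
# Hypothetical sketch of the retention-rate fallback described above:
# participants still enrolled at the study's last time point, divided by
# the number enrolled to use the mobile app, expressed as a percent.

def retention_rate(enrolled: int, completed: int) -> float:
    """Overall retention as a percentage of enrolled participants."""
    if enrolled <= 0:
        raise ValueError("enrolled must be positive")
    return completed / enrolled * 100

# e.g., 40 of 50 enrolled participants remaining at the final time point
print(retention_rate(50, 40))  # 80.0
```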

Results

In total, 236 articles resulted from the search criteria. After removing articles based on exclusion criteria (Fig. 1), 54 articles were assessed for this review. Table 1 provides details regarding each study and its characteristics.
Table 1.

Summary List of Articles Included in the Review and Their Characteristics

AUTHORS | DISEASE INDICATION OR HEALTH REALM | MEASURABLE OR OB ENGAGEMENT METRICS? | RANDOMIZED? | SAMPLE SIZE | LENGTH OF USAGE (IN WEEKS) | STUDY RETENTION, % | CLINICAL CHANGES? | MEASURED ENGAGEMENT?
Puddephatt et al.[20] | Alcohol/substance abuse | OB | No | 29 | 4 | N/A | Majority reported app was helpful/useful | No
Westergaard et al.[21] | Alcohol/substance abuse | OB | No | 19 | 39 | 78.9 | Not reported | Based on self-report
Bergman et al.[22] | Alcohol/substance abuse | OB | No | 123 | Variable | N/A | Majority reported app was helpful/useful | Log-in frequency
Davis et al.[23] | Asthma | OB | No | 20 | One session | N/A | Not reported | Adapted usability scale[24]
Cohen et al.[25] | Breast cancer prevention | M | No | 86 | 91 | 79.07 | Not reported | Number/length of app sessions
Michaelides et al.[26] | Diabetes management | M | No | 43 | 24 | 64.46 | Yes | Survey/lesson completion rate
Park et al.[27] | Diabetes management | OB | No | 28 | 21.67 | 84.8 | Majority reported app was helpful/useful | No
Conway et al.[28] | Diabetes management | Both | No | 234 | One session | 59 | Not reported | No
Koot et al.[29] | Diabetes management | Both | No | 100 | 24 | 80 | Yes | App usage logs
Kato-Lin et al.[30] | Diet | OB | Yes | 375 | 17.33 | 64 | Trend | No
Kerr et al.[31] | Diet | M | Yes | 247 | 26 | 89 | No differences | No
Graetz et al.[32] | Electronic health records access | M | No | 18,529 | 52 | N/A | Not reported | Log-in frequency
Lee et al.[33] | General health monitoring | M | No | 1,439 | 78 | 54 | Not reported | App usage logs
Harzand et al.[34] | Heart disease | Both | No | 21 | 12 | 72.22 | Yes | Messages sent
Dillingham et al.[35] | HIV | M | No | 77 | 52 | 40 | Yes | App usage logs
Cho et al.[36] | HIV | OB | Yes | 38 | 13 | N/A | Not reported | No
Toro-Ramos et al.[37] | Hypertension | M | No | 50 | 24 | 80 | Yes | Weight/blood pressure loggings
Fuller-Tyszkiewicz et al.[38] | Mental health | OB | No | 15 | 12 | N/A | Not reported | Based on self-report
Pratap et al.[13] | Mental health | M | Yes | 348 | 12 | 14 | Yes | Survey/lesson completion rate
Lehto et al.[39] | Mental health | OB | No | 11 | 4 | N/A | Not reported | No
Bauer et al.[40] | Mental health | Both | No | 17 | 8 | 35 | Majority reported app was helpful/useful | Survey/lesson completion rate
Cheung et al.[41] | Mental health | M | No | 1,514 | 16 | 21 | Trend | Number/length of app sessions
Mohr et al.[42] | Mental health | M | No | 99 | 8 | 90.1 | Yes | Number/length of app sessions
Forchuk et al.[43] | Mental health | OB | No | 394 | Variable | N/A | Not reported | No
Mackintosh et al.[44] | Mental health | Both | Yes | 58 | 6 | 48.27 | No differences | No
Glover et al.[45] | Mental health | OB | No | 100 | 12 | 19 | Majority reported app was helpful/useful | App usage logs
Bidargaddi et al.[46] | Mental health | M | Yes | 1,255 | 12.71 | Not reported | Not reported | Whether user has “charted” in the app within 24 h of a push notification
McCauley et al.[47] | Neurological diseases | Both | No | 28 | 12 | N/A | Not reported | App usage logs
Greiner et al.[48] | Neurological diseases | OB | No | 42 | 6 | 92.8 | Not reported | No
Selter et al.[49] | Pain management | Both | No | 93 | 13 | 38 | Not reported | Interactions with daily self-reports
Druce et al.[50] | Pain management | Both | No | 270 | 4.27 | 91 | Not reported | App usage logs
Reade et al.[51] | Rheumatoid arthritis | Both | No | 20 | 8.57 | 68 | Not reported | Number/length of app sessions
Amorim et al.[52] | Physical activity | Both | Yes | 68 | 26 | 81 | Trend | Survey/lesson completion rate
Reyes et al.[53] | Physical activity | OB | No | N/A | One session | N/A | N/A | MARS
Tong et al.[54] | Physical activity | OB | No | 55 | 26 | 81.82 | Not reported | Based on self-report
Wang et al.[55] | Physical activity | Both | Yes | 67 | 6 | 91.04 | Yes | Length of time participants wore wearable devices
Tong et al.[56] | Physical activity | Both | No | 55 | 26 | 82 | Yes | App usage logs, SUS
Baretta et al.[57] | Weight loss | OB | No | 20 | 2 | 85 | Not reported | Based on self-report
Bush et al.[58] | Pregnancy | M | No | 85 | 26 | N/A | Yes | App usage logs
Soh et al.[59] | Rehabilitation | Both | Yes | 42 | Not given | 90 | No differences | SUS
Pavliscsak et al.[60] | Rehabilitation | OB | Yes | 95 | Variable | N/A | Not reported | App usage logs
Choi and Paik[61] | Rehabilitation | Both | Yes | 24 | 2 | Not reported | Yes | No
Hoeppner et al.[62] | Smoking cessation | Both | No | 30 | 3 | 97 | Majority reported app was helpful/useful | App usage logs
Nash et al.[63] | Smoking cessation | M | No | 141,429 | Variable | N/A | Not reported | Log-in frequency
Marler et al.[64] | Smoking cessation | Both | No | 319 | Variable | 39.5 | Yes | App usage logs
Kim et al.[65] | Weight loss | M | Yes | 60 | 4 | 50 | Yes | Based on self-report
Alnasser et al.[66] | Weight loss | Both | No | 240 | 17 | 16.667 | Yes | Updates per week
Kim et al.[67] | Weight loss | M | No | 301 | 26 | N/A | Yes | App usage logs
Serrano et al.[68] | Weight loss | M | No | 12,427,196 | Variable | N/A | Not reported | App usage logs
Svetkey et al.[69] | Weight loss | M | Yes | 365 | 104 | 86 | No differences | Log-in frequency
Patel et al.[70] | Weight loss | M | Yes | 105 | 12 | 76 | No differences | App usage logs
Dolan et al.[71] | Weight loss | Both | No | 10 | 4.28 | 90 | Not reported | Survey/lesson completion rate
Partridge et al.[72] | Weight loss | Both | No | 200 | 26 | 81 | Not reported | Based on self-report
Morrison et al.[73] | Weight loss | Both | No | 13 | 4 | 100 | Yes | App usage logs

M, measurable usage metrics; MARS, Mobile App Rating Scale; OB, opinion-based usage metrics; SUS, System Usability Scale.

More than half of the articles (56%, n = 29) were from the USA; 7.8% (n = 4) were from the United Kingdom, 15.7% (n = 8) from Australia, 3.9% (n = 2) from Canada, 7.8% (n = 4) from South Korea, 2% (n = 1) from Scotland, 2% (n = 1) from Singapore, and 3.9% (n = 3) involved more than one country (Australia/USA, U.K./Italy, and U.K./Saudi Arabia). A wide variety of health realms were targeted by the mobile interventions (Table 1), including: asthma, rheumatoid arthritis, breast cancer prevention, heart disease, pain management, neurological diseases (one multiple sclerosis and one dementia), diabetes management, diet, electronic health records access, general health monitoring, HIV, hypertension, mental health, physical activity, pregnancy, smoking cessation, rehabilitation, alcohol/substance abuse, and weight loss (involving both diet and physical activity). There was a near-even split between studies that examined use of the app through quantifiable or “test-based” measures (33.3%, n = 18), qualitative or “opinion-based” measures (31.5%, n = 17), and both test-based and opinion-based outcomes (35.1%, n = 19). However, only 25.9% (14/54) of articles were randomized studies. The median participant sample size was 77, with a wide range from 10 participants up to 12,427,196.

Comparison Of Suggested Design Elements/App Components

Table 2 outlines the app components that were associated with more participant engagement. Across health conditions, three major areas emerged that were associated with greater engagement: (1) Diaries (logging) and feedback: meal, blood pressure, medication, and weight logging; visualization of the participant's health data over time; and personalized feedback based on questionnaires or from health care providers; (2) Coaching and education: goal-setting tools, health coach/provider messaging, personalized content, and educational modules or lessons; and (3) Reminders: reminder texts and/or app notifications at a limited frequency. App components that hindered participation included: reminders that were too frequent, social forums or integration with social media, daily (or redundant) surveys, and technical problems (such as problems with Wi-Fi connection, app navigation, or logging in).
Table 2.

Important Elements of Applications That May Increase and Decrease Usage by Disease Indication or Health Realm

Alcohol/substance abuse (n = 3)
Increased engagement: Personalized content[20]; Real-time feedback[20]; Text-message prompting use of app[20]; Reminders to take medications and attend appointments[21]; Daily meditation prompts[22]; Live online video meetings[22]; Discussion boards[22]; Diversity of resources, under the concept that this “might help engage individuals at various recovery stages (e.g., less than 1 year and greater than 1 year)”[22]
Decreased engagement: None reported

Rheumatoid arthritis (n = 1)
Increased engagement: Data visualization, particularly a 10-segment motif interface instead of a list of questions[51]; Daily alerts[51]
Decreased engagement: Mobile app drained smartphone battery[51]; Smartphone memory problems due to accelerometer's large files[51]

Asthma (n = 1)
Increased engagement: Reminders[23]; Asthma resources and educational information[23]; Ability to connect with others[23]; Goal-setting tools and assistants[23]
Decreased engagement: None reported

Breast cancer prevention (n = 1)
Increased engagement: Upcoming procedure list[25]; Upcoming procedure detail[25]; Navigation[25]
Decreased engagement: None reported

Diabetes management (n = 4)
Increased engagement: Meal logging[26,29]; Weight monitoring/“weekly weigh-ins”[26,29]; Nutrition information of food eaten[27]; Blood glucose level tracking[27–29]; Insulin logging[28]; Patient education[28]; Health coach messaging[29]
Decreased engagement: Communicating with other patients[27]; Social media integration[28]

Electronic health records access (n = 1)
Increased engagement: Mobile personal health record access[32]
Decreased engagement: None reported

General health monitoring (n = 1)
Increased engagement: Self-monitoring function (tracking/recording health information)[33]; Access to electronic medical record information from chart[33]; Outpatient support service to make reservations[33]
Decreased engagement: Medication function[33]

Heart disease (n = 1)
Increased engagement: Reminders[34]; Goal-setting[34]; Electronic health diary[34]; Secure app messaging with a coach[34]
Decreased engagement: None reported

HIV (n = 2)
Increased engagement: Customizable push-notification medication reminders[36]; Discreteness of electronic pill bottle[36]; Blood pressure logging[34]; Weight logging[34]; Education modules[34]; Health-related messages to coach[34]
Decreased engagement: Flashing lights and beeping of electronic pill bottle[36]

Hypertension (n = 1)
Increased engagement: Weigh-ins[37]; Meal logging[37]; Educational articles[37]; Targeted text messages[37]
Decreased engagement: None reported

Mental health (n = 10)
Increased engagement: Personalization of app content[38–40]; Push notifications at 12:30 pm any day, or at 7:30 pm on weekends (vs. other times of the day)[46]; Tailored health message notifications (vs. standard push notifications), associated with a small increase in likelihood to engage with the app within 24 h[46]; Graphical representation of mood states over time[38]; Prompts to use app[38]; Internet-based problem-solving therapy[13]; Daily health tips[13,45]; Daily surveys[40,45]; Alerts in response to daily surveys[40]; Direct visualization of their own data[40]; Using a hub recommender app[41]; Expectation to swap apps in and out of use rotation[42]; Low-intensity coaching[42]; Skills training through brief app sessions[42]; Appointment reminders[43]; Tracking functions[43]; Anger frequency/intensity/cues logging[44]; Behavioral strategies suggestions[44]; Individually tailored anger management plan prompts[44]
Decreased engagement: Videogame-inspired cognitive intervention[13]; “Some participants appreciated the badges and reinforcements they received when they completed their check-in surveys, whereas others felt patronized by the motivational language”[40]

Neurological diseases (n = 2)
Increased engagement: Photo multimedia content for individuals with dementia[47]; Personal media for individuals with dementia[47]; “Reminiscing screens” for individuals with dementia[47]; Sharing information with doctor[48]
Decreased engagement: Video content for individuals with dementia[47]

Pain management (n = 2)
Increased engagement: Daily exercise notifications[49]; “uMotif” interface design[50]; Passive data collection[50]; Personalizing the time reminders are sent[50]; End-of-study report detailing sleep, average pain, fatigue, and wellbeing scores[50]; Study support (for app problems, etc.)[50]
Decreased engagement: Event marker button was difficult for those with dexterity and hand function issues[50]

Physical activity (n = 5)
Increased engagement: Using Fitbit[52,55,56]; Health coaching[52]; Gamification and score sharing through social media[53]; Customization features (changing color components of the app, score sharing options, smartphone vibration, etc.)[53]; Self-monitoring of behavior[54]; Goal setting[54]; Feedback on behavior[54]; Social comparison[54]; Similarity and familiarity between users[54]; Participation from other users in the network[54]; Automation and personalization[54]
Decreased engagement: Weekly surveys[52]; 3+ text message reminders per day were too frequent[55]; Social forum and private messages[56]

Pregnancy (n = 1)
Increased engagement: Health milestones[58]; Personalized “What's Happening this Week” screen[58]
Decreased engagement: None reported

Rehabilitation (n = 3)
Increased engagement: Exercise tracking[59]; Peer patients' information[59]; Receiving feedback from questionnaires[60]; Immediate feedback from the patient's movement[61]
Decreased engagement: Wi-Fi connection issues[59]

Smoking cessation (n = 3)
Increased engagement: “Happiness exercises”[62]; “Interactive Tobacco Tracker”[63]; Cost savings calculator[63]; Quitting plan behaviors[63]; Taking daily breath samples[64]; Cigarette logging[64]; Wearable usage[65]
Decreased engagement: Social support app functionality[62]; No human intervention contact[69]

Diet and weight loss (n = 12)
Increased engagement: Feedback about calorie intake and consumption[57,72,73]; Calorie counter[66]; Step counter[66]; Contributing posts in a group[67]; Information/reading articles[67,73]; Customized recipes[68]; Diet, fluid, and/or protein tracking[70]; Push notification reminders (i.e., of goals, to drink and walk frequently)[71,73]; Nutritional information[71]; Image-based dietician support[30]; Personalized text messaging[31,72]; Personal data entry (i.e., fluid intake logging)[71–73]; Goal-setting[73]; Food lists[73]
Decreased engagement: Redundant surveys[71]; Peer support[30]; Mobile-based visual diary[30]; Difficulty logging in[72]; A hard-to-navigate app[72]

Clinical Outcomes Of Assessed Apps

Only 55% (30 out of 54) of the articles included in the analysis examined differences in clinical outcomes attributable to the mobile intervention; of these, 53% (n = 16) found significant differences, 10% (n = 3) showed a positive trend due to the intervention, 20% (n = 6) had a majority of participants report the app as helpful and/or useful, and 17% (n = 5) found no impact.

Measuring Engagement

About a fifth of articles (20%; n = 11) did not use any measurement method and drew conclusions about engaging features based on conjecture or opinions from the study team (rather than from collected study data), whereas 11% (n = 6) of articles used participant self-report of usage to measure engagement with the app. Of the remaining 37 articles that did measure engagement in a quantifiable way, 41% (n = 15) looked at app usage logs, 14% (n = 5) relied on survey/lesson completion rates, 11% (n = 4) relied on log-in frequency, 11% (n = 4) examined the number and/or length of app sessions, and only 8% (n = 3) used an established scale to measure engagement (n = 1 used the MARS and n = 2 used the SUS). Nineteen percent (n = 7) of these 37 articles used another way to measure engagement, which included: participant updates per week, the length of time participants wore wearable devices, interactions with daily self-reports, weight/blood pressure loggings, whether or not participants “charted” in the app within 24 h of a push notification, and an “adapted usability scale.”[24]

Length Of App Usage

The review analysis showed that one article did not report the length of app usage, three interventions were completed in one-time sessions, and six had no set intervention period (participants used the app for a variable amount of time and were neither told when to start nor when to stop using it). Across the remaining 44 articles, usage duration ranged from 2 to 104 weeks, with an average of 21 weeks (SD = 22.7 weeks, median = 12.8 weeks, mode = 26 weeks). Many articles also reported a drop-off in usage after the first week of the study,[29,41,62,73] and/or an initially high level of participation that slowly diminished over the course of the study.[29,32,40-42,56,62,73]
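Summary statistics like those above (mean, SD, median, mode of usage durations) can be computed with Python's statistics module. The durations below are illustrative placeholders, not the review's actual per-article data.

```python
import statistics

# Illustrative usage durations in weeks (NOT the review's actual
# per-article data; real values appear in Table 1).
durations = [2, 4, 8, 12, 12.8, 24, 26, 26, 26, 52, 104]

mean = statistics.mean(durations)
sd = statistics.stdev(durations)      # sample standard deviation
median = statistics.median(durations)
mode = statistics.mode(durations)     # most frequent value

print(f"mean={mean:.1f} sd={sd:.1f} median={median} mode={mode}")
```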

Study Retention Rates

Retention rates were not applicable or not reported in 31% (n = 17) of the evaluated studies. In the remaining studies, the retention rate ranged from 14% to 100%, with an average of 68% (SD 25%) and a median of 79%.

Discussion

The use of mobile apps in health care is gaining ground; however, research geared toward understanding patient/app interactions (engagement) is still nascent. This is evident from the small number (236) of articles matching the broad keywords used in the initial literature search, which the PRISMA process then reduced to only 54 articles (∼23%) eligible for engagement analysis. There was little commonality in how studies measured participant engagement, and a wide range in the length of app usage in these studies. Most studies, depending on the app's utility objectives, used longitudinal or cohort-testing approaches. Several studies explored ways to improve engagement within their own app prototype; of note, Bidargaddi et al.[46] implemented a “micro-randomized” clinical trial of push notifications to determine their effects on engagement with a mobile health app. This was one of very few studies that used an experimental approach to directly address design choices and their effects on engagement. They found that sending a tailored health message at 12:30 pm on any day, or at 7:30 pm on weekends, made participants almost 9% more likely to use the app. Mobile app developers use measurement scales such as the SUS and MARS to gain insight into usability, functionality, and user satisfaction. It was surprising to find that only three articles used the MARS[17] or SUS[19] to evaluate engagement outcome measures, and none used the PAM[16] or PHE-s.[18] For the rest of the studies, excluding those that did not directly measure engagement (n = 11) and those that relied on participants' reports for engagement results (n = 6), the most common quantitative approach to measuring engagement was app usage logs (n = 15), from which study teams were able to quantify minutes of app usage, the frequency of log-ins, and the frequency with which app components were used.
A host of app components affected engagement level, but the most engaging mobile interventions provided participants with the ability to view and/or interact with their health data. This aligns with other findings that the return of information to participants is an especially important aspect of creating engaging and impactful digital tools,[74,75] as doing so increases participant self-efficacy by involving individuals in making their own well-informed health care decisions.[76] Moreover, providing results and information tailored to an individual's gender, needs, characteristics, and interests has been shown to both produce more online activity and increase retention on follow-up surveys.[77] The findings from this review support these statements. The first week appears to mark an important time point for continued use of an app, as many studies reported the largest drop-off in usage after the first week of the study, regardless of study duration (which ranged from 3 to 26 weeks in those studies). Through app usage logs, researchers were able to track app engagement and assess drop-off in participation. This pattern was not reported in studies that relied only on subjective measures of engagement, demonstrating that measurable methods of tracking app engagement provide a fuller picture of app usage. The analysis suggests that app engagement plays an important role in study retention. Once past the first week, retention was relatively high, averaging around 68% among participants who stayed in the study. Disease indication, study duration, and participant incentives affected retention rates, which ranged widely from 46%[78] to 86%.[79] The retention rates of most of the examined mobile interventions fall well within the expected range for longitudinal studies, with only 12 of the examined studies (22.2%) reporting retention rates below 60%.
Of the 11 studies (20.4%) that reported retention rates of 85% or greater, the apps featured the following components that were popular among participants: reminders/push notifications (especially personally tailored messages), the ability to communicate with doctors and/or care teams through messaging features, self-reporting or “logging” of symptoms, and easy access to health information. Incorporating digital tools and mobile apps into the management of patients' health is a relatively new approach in health care, and research generating evidence for the utility and efficacy of these tools in health care intervention and management is still nascent. We recognize that this analysis bears several limitations due to (1) the small number of published articles addressing engagement that could be included in the analysis, (2) many studies' lack of processes around the measurement of engagement, and (3) the resulting great variability in results. This review may not accurately capture the app components truly necessary to increase engagement, owing to the articles' lack of robust methods for measuring engagement and the limited number of studies that can be properly compared by underlying health condition.

Conclusion

In conclusion, despite the growing popularity of mobile app interventions, the specific engaging components have not been well defined. This article provides insight into existing methods and tools that may encourage participant engagement with future mobile app interventions in various disease indications. The information provided in this study is intended to help clinical teams, researchers, and clinical trialists design better mobile applications that will provide participants and patients with greater satisfaction and ultimately better outcomes. Future work is needed to develop common guidelines to support specific components or activities that lead to increased engagement with mobile health applications. There is also a tremendous need for cross-disciplinary agreement on standards for engagement measurement to provide greater generalizability. These two elements could help minimize app design and user testing periods and lead to greater success with mobile health interventions in the future.