
Acceptability and Effectiveness of NHS-Recommended e-Therapies for Depression, Anxiety, and Stress: Meta-Analysis.

Melanie Simmonds-Buckley1, Matthew Russell Bennion1,2, Stephen Kellett1,3, Abigail Millings1,4, Gillian E Hardy1, Roger K Moore2.   

Abstract

BACKGROUND: There is a disconnect between the ability to swiftly develop e-therapies for the treatment of depression, anxiety, and stress, and the scrupulous evaluation of their clinical utility. This creates a risk that the e-therapies routinely provided within publicly funded psychological health care have evaded appropriate rigorous evaluation in their development.
OBJECTIVE: This study aims to conduct a meta-analytic review of the gold standard evidence of the acceptability and clinical effectiveness of e-therapies recommended for use in the National Health Service (NHS) in the United Kingdom.
METHODS: Systematic searches identified appropriate randomized controlled trials (RCTs). Depression, anxiety, and stress outcomes at the end of treatment and follow-up were synthesized using a random-effects meta-analysis. The grading of recommendations assessment, development, and evaluation approach was used to assess the quality of each meta-analytic comparison. Moderators of treatment effect were examined using subgroup and meta-regression analysis. Dropout rates for e-therapies (as a proxy for acceptability) were compared against controls.
RESULTS: A total of 24 studies evaluating 7 of 48 NHS-recommended e-therapies were qualitatively and quantitatively synthesized. Depression, anxiety, and stress outcomes for e-therapies were superior to controls (depression: standardized mean difference [SMD] 0.38, 95% CI 0.24 to 0.52, N=7075; anxiety and stress: SMD 0.43, 95% CI 0.24 to 0.63, N=4863), and these small effects were maintained at follow-up. Average dropout rates for e-therapies (31%, SD 17.35) were significantly higher than those of controls (17%, SD 13.31). Limited moderators of the treatment effect were found.
CONCLUSIONS: Many NHS-recommended e-therapies have not been through an RCT-style evaluation. The e-therapies that have been appropriately evaluated generate small but significant, durable, beneficial treatment effects.
TRIAL REGISTRATION: International Prospective Register of Systematic Reviews (PROSPERO) registration CRD42019130184; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=130184.
©Melanie Simmonds-Buckley, Matthew Russell Bennion, Stephen Kellett, Abigail Millings, Gillian E Hardy, Roger K Moore. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 28.10.2020.

Keywords:  National Health Service; anxiety; depression; e-therapy; meta-analysis; mobile phone; treatment effectiveness

Year:  2020        PMID: 33112238      PMCID: PMC7657731          DOI: 10.2196/17049

Source DB:  PubMed          Journal:  J Med Internet Res        ISSN: 1438-8871            Impact factor:   5.428


Introduction

The potential contribution of digital technology in enabling access to evidence-based psychological care for mental health problems is high on national and international research, policy, commissioning, and service management agendas [1]. In modern life, as digital tools (eg, mobile phones, tablets, laptops, and wearable devices) have become ubiquitous, psychological interventions delivered by such devices (ie, e-therapies) offer greater convenience and enable constant access to treatment compared with traditional face-to-face therapy with health professionals [2]. The increasing demand for primary care psychological services globally has provided the context within which e-therapies have been integrated into the offer of a suite of low-intensity (LI) psychological interventions [3], often delivered within stepped-care systems [4,5]. Although technological innovation in methods of treatment delivery usefully expands availability, it also creates the risk of commercial promotion and availability of ineffective or possibly harmful psychological interventions [6]. Therefore, commissioners, clinicians, and patients need access to reliable and contemporary guidance regarding the empirical status and clinical utility of e-therapies. The potential organizational, therapeutic, and health economic benefits of e-therapies initially triggered a global wave of investment and interest [7]. In the United Kingdom, for example, the National Health Service (NHS) Commissioning Board launched the NHS Health Apps Library in March 2013 and the NHS Mental Health Apps Library in March 2015. However, the libraries were removed in 2015 after questions were raised concerning e-therapy data security governance [8] and clinical effectiveness [9]. NHS England launched 2 new digital platforms in April 2017, a new beta of the NHS Digital Apps Library and a mobile health space, in an effort to close the gap between e-therapy development and thorough evaluation.
Before the removal of the initial NHS App Libraries, a list of 48 NHS-recommended e-therapies was compiled for the National Institute for Health and Care Excellence (NICE) assessment of digitally enabled psychological therapies for use in Improving Access to Psychological Therapies (IAPT) services [10]. A recent quality assessment of the development process of NHS-recommended e-therapies strongly advocated that developers routinely adopt clinical trial methods to test the acceptability and efficacy of e-therapies before wider dissemination [11]. NICE has also recently published an evidence standards framework for e-therapies providing guidance concerning efficacy and effectiveness standards [12]. This review aims to quantitatively synthesize the evidence base of e-therapies recommended for use in the NHS for depression, anxiety, and stress in adults to better inform the commissioning and use of e-therapies in clinical services. It was relevant to restrict this review to adults as the NHS-recommended e-therapies are intended for adults. Previously, an individual participant meta-analysis of the e-therapy clinical trial evidence base for depression showed that e-therapy was significantly more effective than controls [13], and there is clinical trial evidence for the efficacy of e-therapy as a treatment for anxiety [14]. This study had 3 aims. First, we sought to quantify the effect of NHS-recommended e-therapies (ie, the 48 e-therapies identified by Bennion et al [10]), as no previous specific meta-analysis of the efficacy of NHS-recommended e-therapies has been attempted. As randomized controlled trials (RCTs) are viewed as the gold standard evaluation [15], we sought to only use RCT studies to increase the quality of the meta-analysis. Second, because e-therapies have been criticized for generating high dropout rates [16], we sought to compare dropout rates with those of controls to appraise acceptability.
Finally, we sought to investigate the impact of potential moderating factors (eg, gender, age, severity, treatment approach, treatment duration, setting, focus problem, and risk of bias) on e-therapy outcomes via subgroup and meta-regression analyses.

Methods

The review was registered on the International Prospective Register of Systematic Reviews (PROSPERO; CRD42019130184). The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed throughout [17].

Study Selection

A 3-stage search strategy was developed to identify RCTs evaluating all of the e-therapies recommended by the NHS for the treatment of depression, anxiety, and stress. First, each of the 48 NHS-recommended e-therapies identified by Bennion et al [10] was used to determine those e-therapies to be included in the search strategy. The name of each e-therapy and its platform type (website or app) were combined to develop a series of search terms (eg, “Beating the Blues” AND “Website”) [18]. Electronic searches were conducted using PsycINFO, Web of Science, and PubMed databases to identify relevant e-therapy outcome studies published up until April 2019 (date of final search was April 11, 2019; see Multimedia Appendix 1 for an example search strategy). Second, reference lists of identified studies and previous e-therapy reviews were also searched. Third, as many e-therapies are not developed under their commercial name, a survey was disseminated to the 48 app developers of the identified NHS-recommended e-therapies to identify additional gray literature not captured by the terms used in the database searches [11]. This process was to supplement the identification of all studies associated with any one e-therapy, even when the commercial name was not used in the reporting. A total of 36 out of 48 (75%) app developers responded to the survey, and the full process was reported by Bennion et al [11]. Titles and abstracts were screened initially (MB), with the full texts of identified studies then screened against inclusion and exclusion eligibility criteria (MB). Queries regarding study eligibility were resolved through discussion among reviewers (MB, SK, and AM).
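The search string construction described above (pairing each e-therapy name with its platform type) can be sketched as follows; the therapy names shown are an illustrative subset of the review's own list, not the full 48.

```python
from itertools import product

# Illustrative sketch: combine each e-therapy name with each platform type
# to form the Boolean search queries used in the database searches.
therapies = ["Beating the Blues", "MoodGYM", "FearFighter"]
platforms = ["Website", "App"]

queries = [f'"{name}" AND "{platform}"'
           for name, platform in product(therapies, platforms)]
```

Each resulting query (eg, "Beating the Blues" AND "Website") would then be run against PsycINFO, Web of Science, and PubMed.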

Eligibility Criteria

Studies were included if the web-based or smartphone app intervention used was one of the 48 NHS-recommended e-therapies [10] for depression, anxiety, and stress; therefore, all studies of other types of e-therapies and for other clinical conditions were excluded. Studies were eligible for inclusion if, and only if, they used an RCT design to examine the efficacy of e-therapy with an adult population (ie, aged >18 years). To be included, the developer of the e-therapy had to be locatable via a Google search when entering the app name as the search term, and the app had to reference the targeted condition (ie, depression, anxiety, or stress) in its marketing literature or be based on a therapeutic tool known to benefit the targeted condition. Posttreatment outcomes were required to have been assessed using a validated measure of anxiety and/or depression symptoms. Comparators included any control condition, comprising a wait list or no treatment, placebo or attention-control activity, or treatment as usual (TAU). Only English language articles were included.

Outcomes

The 2 main outcomes of interest were participant-reported outcomes of (1) depression and/or (2) anxiety and stress taken at posttreatment and at follow-up (where available, to assess the durability of e-therapy effectiveness). Where multiple measures of one outcome were used (ie, 2 measures of depression), the most frequently used measure across the included studies was prioritized. Therefore, each study only contributed one effect size per outcome. Dropout (as a proxy for acceptability) was classified as the percentage of e-therapy and comparator condition noncompleters, as determined by the definition applied in the original study.

Data Extraction

A data extraction tool was designed specifically for the purposes of the review. MB extracted data from the original studies and then reviewers (SK and AM) independently verified the findings. Data were coded according to the following criteria: (1) study information—sample size, trial design, context, comparator type, study length, analytic approach (intention to treat [ITT] or completers), and trial quality; (2) participant characteristics—mean age, percentage of males, population sample, presenting problem, and diagnostic information or relevant inclusion criteria; (3) outcome characteristics—outcome measure and, if applicable, length of follow-up; and (4) intervention features—e-therapy program, regularity of instructed use, duration, intervention component details of the comparator condition, and self-help typology. The self-help typology for each e-therapy was coded based on the framework by Newman et al [19]: minimal contact therapy, predominantly self-help, predominantly therapist-administered treatment, or self-administered therapy. This was selected to provide an assessment of the level and extent of therapist support within the e-therapies. Outcome data on depression, anxiety, and stress symptoms and dropout rates were extracted at treatment completion and follow-up (ie, at 6 months or the closest assessment point available).
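The four coding criteria above can be pictured as a structured record per study; the sketch below is hypothetical (field names are illustrative, not the authors' actual extraction tool).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical extraction record mirroring the review's four coding criteria.
@dataclass
class ExtractionRecord:
    # (1) study information
    sample_size: int
    trial_design: str
    comparator_type: str
    analytic_approach: str            # "ITT" or "completers"
    # (2) participant characteristics
    mean_age: float
    percent_male: float
    presenting_problem: str
    # (3) outcome characteristics
    outcome_measure: str
    follow_up_months: Optional[float] = None
    # (4) intervention features
    e_therapy_program: str = ""
    self_help_typology: str = ""      # Newman et al [19] category
```

Coding into a fixed schema like this is what makes the later moderator analyses (eg, by analytic approach or typology) straightforward.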

Study and Evidence Quality

The Cochrane risk of bias tool [20] was used to assess the methodological quality of the original studies using the Cochrane Review Manager (RevMan) program [21]. All included studies were assessed on 7 elements: (1) randomization, (2) allocation concealment, (3) blinding of participants and personnel, (4) blinding of outcome assessment, (5) data attrition, (6) selective outcome reporting, and (7) other threats to validity. Elements were rated as having low risk, unclear, or high risk of bias. One rater assessed all the included studies, with all studies double rated by 2 other raters (rater 1 assessed 63% [15/24] and rater 2 assessed 37% [9/24]). Cohen kappa coefficient (k) was used to assess the interrater agreement on risk of bias overall scores between the primary rater and 2 second raters [22], and these were interpreted using the Landis and Koch [23] categories: <0 as indicating no agreement, 0 to 0.20 as slight, 0.21 to 0.40 as fair, 0.41 to 0.60 as moderate, 0.61 to 0.80 as substantial, and 0.81 to 1 as almost perfect agreement. There was substantial agreement between the primary rater and rater 1 (k=.63) and moderate agreement between the primary rater and rater 2 (k=.54). Any differences in rating were discussed by the raters to reach a consensus on the overall risk of bias rating for each included study. The grading of recommendations assessment, development, and evaluation (GRADE) approach was used to rate the quality of the evidence included in each meta-analysis conducted [24]. The quality of evidence was assessed on 5 domains: (1) risk of bias in the individual included studies, (2) publication bias, (3) inconsistency, (4) imprecision, and (5) indirectness of treatment estimate effects. The meta-analysis was graded by 2 reviewers (SK and MS) and a consensus agreed (rated as high, moderate, low, or very low quality).
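The interrater agreement statistic described above can be sketched as follows; this is an illustrative implementation of unweighted Cohen kappa with the Landis and Koch bands, not the authors' code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # chance agreement from the raters' marginal label frequencies
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

def landis_koch(kappa):
    """Map a kappa value to the Landis and Koch agreement category."""
    if kappa < 0:
        return "no agreement"
    bands = [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
             (0.80, "substantial"), (1.00, "almost perfect")]
    for upper, label in bands:
        if kappa <= upper:
            return label
```

Under this mapping, the reported values of k=.63 and k=.54 fall in the "substantial" and "moderate" bands, respectively.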

Effect Sizes

Standardized mean differences (SMDs) were used to assess differences in outcome between e-therapy and the comparator conditions at posttreatment and follow-up. SMDs were computed by calculating Cohen d (mean outcome score of the comparator condition subtracted from the mean outcome score of the e-therapy and dividing by the pooled standard deviation). Where available, effect sizes were computed using ITT outcome data. To account for potential biases in studies with small sample sizes, SMDs were converted to Hedges g using the J adjustment [25]. Effect sizes were calculated so that a beneficial effect of e-therapy was represented by a positive SMD and vice versa. Interpretations of effect size magnitude were classified as 0.20 to 0.49=small, 0.50 to 0.79 = medium, and >0.80=large [26]. When studies had multiple treatment arms delivering e-therapies that could be considered comparable (ie, the same e-therapy with different component combinations, such as reminders and telephone support), the data were collapsed into a single group using Cochrane guidelines [20]. When studies had multiple treatment arms that could not be collapsed (ie, three-arm trial comparing 2 different types of recommended e-therapy to a control), the treatment arms were included independently. The sample size of the shared comparator condition was split evenly across independent treatment arm comparisons to avoid participant data being included twice.
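The effect size computation above can be sketched directly: Cohen d from group means and a pooled SD, then the J correction to Hedges g [25]. This is a minimal illustration, assuming outcome scales where lower scores indicate fewer symptoms (so control minus e-therapy yields a positive, beneficial SMD).

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """SMD with pooled SD; positive values favor e-therapy (assumes lower
    scores = fewer symptoms, so control minus treatment is beneficial)."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_c - mean_t) / pooled_sd

def hedges_g(d, n_t, n_c):
    """Apply the J small-sample correction to Cohen's d."""
    df = n_t + n_c - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d
```

For the sample sizes typical of these trials, J is close to 1, so g is only slightly smaller than d.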

Data Synthesis

Meta-Essentials workbooks were used to synthesize e-therapy treatment effects in a random-effects meta-analysis to account for the extent of expected study heterogeneity [27]. Individual study effect sizes were weighted using the inverse of the variance to produce overall pooled treatment effect estimates and 95% CIs. The threshold for statistical significance was set at an α value of .05. The I2 statistic was employed as an indicator of the percentage of between-study heterogeneity, whereas the Q statistic provided a test of the statistical significance of the presence of study variation. Thresholds of heterogeneity were interpreted as <40% may not be important, 30% to 60% representing moderate heterogeneity, 50% to 90% representing substantial heterogeneity, and 75% to 100% representing considerable heterogeneity [28]. As recommended by Cochrane, the magnitude and direction of effect sizes were used to interpret the implications of I2 percentages. The overall pooled effect sizes of e-therapy were translated into numbers needed to treat (NNTs) [29]. NNT is an approximation of how many patients would need treatment with e-therapy to generate an additional outcome of benefit when compared with another intervention (ie, the comparator condition). A Mann-Whitney U test was used to assess for differences in dropout rates between e-therapy and controls.
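The inverse-variance random-effects pooling described above can be sketched with a standard DerSimonian-Laird estimator of the between-study variance; the paper used Meta-Essentials [27], whose exact estimator may differ in detail.

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling with Q and I^2."""
    w = [1 / v for v in variances]                        # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_star = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 heterogeneity (%)
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    return pooled, ci, q, i2
```

Feeding in the per-study Hedges g values and their variances yields the pooled SMD, its 95% CI, and the Q and I2 statistics reported in the results.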

Moderator and Sensitivity Analyses

Preplanned random-effects moderator analyses were performed using the Meta-Essentials workbooks to evaluate between-study variation in treatment effects in posttreatment comparisons with a minimum of 10 studies [20]. Moderators were selected based on methodological, clinical, and intervention features that were likely to vary between studies. Meta-regressions were applied to 5 continuous variables: mean age, mean number of sessions completed, percentage of males, baseline symptom severity (standardized Z scores), and risk of bias (number of items meeting criteria for low risk of bias: 0-7). Subgroup analyses were applied to 6 categorical variables: 4 of them were specified a priori (control type, e-therapy type, self-help typology, and recruitment setting) and 2 were conducted post hoc (focus problem and analysis method). Owing to multiple testing, the α threshold for significance of the meta-regression beta-coefficients and the between-subgroup differences was lowered to P<.01. A series of sensitivity analyses were performed to assess the impact of outliers on the pooled effect sizes (with extreme outliers removed) and to further explore treatment effect durability (comparisons of follow-up effects separately at short-term [1-2 months], medium-term [6 months], and long-term [>8 months] follow-up).

Publication Bias

Several methods were employed to assess for the presence of publication bias in the posttreatment comparisons that had a sufficient number of studies (k>10). Visual inspection of the asymmetry of a funnel plot (SE plotted against effect sizes) gave an indication of the extent of potential publication bias, whereas the accompanying Trim and Fill imputation [30] accounted for any reporting bias to provide an adjusted treatment estimate. Finally, additional statistical testing of asymmetrical study distribution was undertaken using Egger regression [31].
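A common formulation of the Egger regression test mentioned above regresses each study's standardized effect (effect/SE) on its precision (1/SE); an intercept far from zero suggests funnel plot asymmetry. The sketch below is illustrative; the paper's exact implementation may differ.

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: effect/SE regressed on 1/SE."""
    y = [e / s for e, s in zip(effects, ses)]   # standard normal deviates
    x = [1 / s for s in ses]                    # precisions
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return mean_y - slope * mean_x              # intercept; ~0 when symmetric
```

With identical effects across studies of varying precision, the points fall on a line through the origin and the intercept is zero; small-study effects pull the intercept away from zero.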

Results

The electronic searches returned a total of 944 records. This was combined with the 152 records collected by surveying app developers and 7 records from a manual reference list and review searches, giving a combined total of 1103 records (Figure 1). Duplicates were removed, leaving a total of 910 records to be screened. After excluding records that did not meet the inclusion criteria based on abstracts, 159 full-text articles were retrieved and assessed. Overall, 26 trials were considered eligible, and 2 were excluded because they contained duplicate data from another trial. Thus, a total of 24 studies that tested the efficacy of 7 of the 48 NHS-recommended e-therapies (Beating the Blues, FearFighter, MoodGYM, IESO, Headspace, SilverCloud, and WorkGuru) in an RCT design were included in the meta-analysis. Details of the included studies can be found in Multimedia Appendix 2 [32-55].
Figure 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart of study selection. NHS: National Health Service; RCT: randomized controlled trial.

The risk of bias ratings are presented in Table 1. Across the 24 included studies, the number of quality items meeting low risk of bias criteria ranged from 1 to 7 (maximum of 7). The overall study quality was moderate to good, with 13 studies meeting low risk of bias criteria on at least five items. A lack of, or unclear, blinding of participants and personnel or of outcome assessment, and incomplete outcome data, were the most common sources of risk of bias. For the most poorly rated item across studies, only 3 trials demonstrated suitable blinding of participants and personnel.
Table 1

Risk of bias assessment of the included studies.

Study | Risk of bias items (1a 2b 3c 4d 5e 6f 7g)
Proudfoot et al (2003) [32] | + + ? + + +
Grime (2004) [33] | + + ? ? ? ? +
Proudfoot et al (2004) [34] | + + ? + + +
Marks et al (2004) [35] | + + + ? + ?
Schneider et al (2005) [36] | + + + + + +
Mackinnon et al (2008) [37] | ? ? ? + + +
Kessler et al (2009) [38] | + + + + + + +
Ellis et al (2011) [39] | ? ? ? ? ? +
Farrer et al (2011) [40] | + + ? ? + ? +
Høifødt et al (2013) [41] | + + ? + + +
Lintvedt et al (2013) [42] | + + ? + + +
Powell et al (2013) [43] | + + ? + + +
Sethi (2013) [44] | + + + + +
Howells et al (2016) [45] | + + + ? ? +
Phillips et al (2014) [46] | + + + + + + +
Twomey et al (2014) [47] | + + + ?
Gilbody et al (2015) [48] | + + + + +
Richards et al (2015) [49] | + + ? ? + + +
Richards et al (2016) [50] | + + ? ? + + +
Carolan et al (2017) [51] | + + ? + + +
Flett et al (2018) [52] | + ? ? + + +
Forand et al (2018) [53] | + ? + + +
Bostock et al (2019) [54] | + ? ? + ? +
Löbner et al (2019) [55] | + + ? + + ?

aRandom sequence generation (selection bias).

bAllocation concealment (selection bias).

cBlinding of participants and personnel (performance bias).

dBlinding of outcome assessment (performance bias).

eIncomplete outcome data (attrition bias).

fSelective outcome reporting.

gOther potential threats to validity.

h+=low risk; −=high risk; ?=unclear risk.


Study Characteristics

Out of the 48 NHS e-therapies identified by Bennion et al [10], a total of 7 (15%) were based on RCT evidence of efficacy, which comprised 6 web-based e-therapies and 1 smartphone-based e-therapy (Table 2). MoodGYM was the e-therapy with the greatest degree of evaluation (k=11 studies), with 2 of the e-therapies having a single RCT evaluation (ie, IESO and WorkGuru). All 6 web-based e-therapies had both clinical and academic personnel contributing expertise during technological development, whereas the smartphone-based e-therapy had no clinical or academic involvement in its technological development phase [11]. A summary of e-therapy version numbers used in each study and whether a CONSORT-EHEALTH (Consolidated Standards of Reporting Trials of Electronic and Mobile Health Applications and Online Telehealth) checklist [56] was provided (for studies published post-2011, after the checklist was developed) is reported in Multimedia Appendix 3 [32-55]. Reporting of version numbers was generally inconsistent, making it difficult to establish whether the e-therapies had been updated between studies. Beating the Blues had been updated between studies, with version 1.0 used in the early studies (2003-2004) [32,34] and version 2.5 used in the most recent study (2018) [53]. Updates to MoodGYM could not be established because of inconsistent reporting of version numbers, but there was an indication that the studies between 2011 and 2018 used version III [41,42,55]. It appeared that Headspace was updated from version 1.0 or above in 2014 to a version equal to or above 2.0 in studies from 2019. Studies of FearFighter, SilverCloud, IESO, and WorkGuru either did not refer to version numbers or were only evaluated in 1 RCT, so updates could not be conclusively determined.
Table 2

Types of e-therapies used in included studies.

E-therapy | Number of trialsa | Delivery platform | Clinical involvement | Academic involvement | Psychological theory or clinical approach used | Evidence of updates between studies
Beating the Blues | 5 | Web-based | Yb | Y | CBTc | Yes
Fear Fighter | 2 | Web-based | Y | Y | CBT | Could not be determined
Headspace | 3 | Phone-based | Nd | N | Mindfulness | Yes
IESO | 1 | Web-based | Y | Y | CBT | N/Ae
MoodGYM | 11 | Web-based | Y | Y | CBT | Could not be determined
SilverCloud Health | 2 | Web-based | Y | Y | CBT | N/A
WorkGuru | 1 | Web-based | Y | Y | CBT, mindfulness, and PPf | N/A

aA total of 2 e-therapies were evaluated in one trial; therefore, the total number of trials exceeded the overall number of included studies.

bY: yes.

cCBT: cognitive behavioral therapy.

dN: no.

eN/A: not applicable, as e-therapy content was not assessed in multiple studies.

fPP: positive psychology.

All but one of the e-therapies were based on cognitive behavioral therapy (CBT) [11]. E-therapy treatments lasted between 10 and 70 days (mean 44.52, SD 16.11), comprising between 3 and 45 sessions (mean 8.37, SD 7.98) lasting 10 to 60 min each (mean 48.21, SD 15.26). The majority of e-therapies were administered weekly (k=19), whereas 3 of the trials required daily e-therapy usage (2 trials did not report the instructed frequency of usage). Self-help typology was characterized as self-administered therapy (k=7 studies), predominantly self-help (k=11 studies), minimal contact therapy (k=5 studies), and predominantly therapist-delivered treatment (k=1 study). The control conditions employed in the studies were waitlist or no treatment (k=13), TAU (k=5), and placebo or attention-control tasks (k=9; note: k=3 studies had multiple control conditions). TAU comprised usual general practitioner (GP) care, allowing access to any treatment prescribed or referred to by a GP. Placebo or attention-control conditions included depression information websites (eg, Bluepages; k=2), online peer support forums (eg, MoodGarden; k=1), tracking or structured weekly phone calls (k=2), neutral tasks or note-taking organization apps (eg, Catch notes software or Evernote; k=2), or online self-relaxation (without exposure, ie, a sham treatment; eg, managing anxiety or de-STRESS; k=2). In k=12 trials, clinical participants were recruited from primary care (k=7), psychiatric outpatients (k=2), a university counseling center (k=1), public sector employees (k=1), and a telephone counseling service (k=1). In the remaining 12 trials, community participants were recruited from university students (k=3), occupational health attendees (k=3), the internet (k=2), the electoral roll (k=1), a youth center (k=1), charity users (k=1), and treatment-seeking adults (k=1). Mean ages across the samples ranged from 20 to 45 years (mean 35.71, SD 7.76).
E-therapies were delivered for symptoms of depression (k=10), anxiety or panic and phobia (k=3), stress (k=2), or a combination of anxiety and depression symptoms (k=6). Three of the trials did not require participants to have any symptoms or indicators of poor mental health. The Beck Depression Inventory (I or II) was the most commonly used depression outcome measure (k=7), followed by the Centre for Epidemiologic Studies Depression Scale (CES-D; k=6). The most commonly employed anxiety outcome measures were the Generalized Anxiety Disorder-7 (k=4) and the Depression Anxiety Stress Scales—anxiety subscale (k=4). Follow-up assessments were conducted in 18 trials (k=2 had insufficient data to be included in the follow-up analysis). The duration of follow-up ranged between 1 and 20 months (mean 5 months). Dropout rates ranged from 0% to 64%. The average e-therapy dropout rate was 31% (SD 17.35), and the average dropout rate for controls was 17% (SD 13.31). Therefore, significantly more participants dropped out during e-therapies compared with controls (U=181.000; Z=−3.026; P=.002).

Meta-Analysis of E-Therapy Versus Controls

Meta-analytic comparisons were performed to aggregate the effect of e-therapy vs controls on (1) depression and (2) anxiety and stress symptoms at posttreatment and follow-up. GRADE assessments are reported for each comparison, indicating the quality of evidence. All comparisons were based on RCT evidence, so they started as high-quality evidence. Across the meta-analyses, limited issues were found in terms of study limitations or publication bias, but some limitations were found for heterogeneity, treatment comparisons, and imprecision. As a result, the level of evidence was downgraded for all comparisons, with the majority demonstrating moderate quality. Comparisons were downgraded one level specifically because of significant and considerable I2 statistics indicating marked heterogeneity in the original studies, variability in the primary outcome measure, differing control groups, and varied effects based on the lower and upper bounds of confidence intervals. One comparison was downgraded 2 levels to low-quality evidence because of additional limitations created by the small number of studies restricting subsequent moderator analyses and variability in follow-up time.

Effect of E-Therapy on Depression Outcomes

Posttreatment and Follow-Up Comparisons

Overall, 26 treatment arm comparisons (extracted from 22 studies) totaling 7075 participants evaluated posttreatment e-therapy depression outcomes in comparison with a control condition (e-therapy, n=3545; control, n=3530). The pooled SMD presented in Figure 2 signified a small, significant treatment effect in favor of greater depression reductions following e-therapy (SMD 0.38; 95% CI 0.24 to 0.52; Z=5.78; P<.001; GRADE=moderate). The NNT was 4.72, indicating that for every 5 patients who received e-therapy, there was one additional beneficial depression outcome compared with if they had received a control condition. Between-study variation was significant, indicating substantial heterogeneity between studies (I2=73%; 95% CI 60% to 82%; Q=92.30; P<.001). Furthermore, 16 follow-up treatment arm comparisons (extracted from 13 studies) provided follow-up data on depression outcomes for e-therapies versus control conditions for 5709 participants (e-therapy, n=2850; control, n=2859). There was a small significant pooled SMD in favor of e-therapy on depression outcomes at follow-up compared with controls (Figure 2; SMD 0.25; 95% CI 0.08 to 0.41; Z=3.23; P=.001; NNT=7.12; GRADE=moderate). The between-study variation was significant, indicating moderate-to-substantial heterogeneity (I2=69%; 95% CI 48% to 81%; Q=48.11; P<.001).
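The SMD-to-NNT conversion used for these figures follows [29]; the widely used Kraemer and Kupfer formula shown below closely reproduces the reported values (SMD 0.38 gives NNT of roughly 4.72; SMD 0.25 gives roughly 7.1), suggesting it is the conversion applied, though this is an inference rather than a statement from the paper.

```python
from math import erf, sqrt

def smd_to_nnt(smd):
    """Kraemer-Kupfer conversion: NNT = 1 / (2 * Phi(smd / sqrt(2)) - 1),
    where Phi is the standard normal CDF."""
    z = smd / sqrt(2)
    phi = 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF at z
    return 1 / (2 * phi - 1)
```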
Figure 2

Forest plot of post-treatment and follow-up depression outcome effect sizes (ES) for e-therapy versus controls.

Moderator and Sensitivity Analyses

The significant heterogeneity between studies at posttreatment and follow-up was investigated using meta-regression (Table 3) and subgroup moderator analyses (Table 4). Meta-regression analyses found that variations in e-therapy treatment effects were not explained by gender, age, number of sessions, or study quality at posttreatment or follow-up. Although initial depression severity was not significantly associated with effect size at posttreatment, higher levels of depression severity were associated with larger beneficial effects of e-therapy at follow-up. Subgroup analyses showed that variation in posttreatment effect size was associated with the type of control condition (although the effect fell short of significance after accounting for multiple testing). A moderate effect was observed in favor of e-therapy vs wait list controls, whereas the effects for e-therapy compared with placebo conditions and TAU were small. At follow-up, e-therapy effect sizes did not significantly differ according to the control type, with e-therapy showing a small significant beneficial effect compared with placebo and TAU controls and a small nonsignificant effect compared with wait list. Posttreatment and follow-up effects were not significantly affected by the e-therapy type, self-help typology, recruitment setting, focus problem, or analysis method. Substantial significant heterogeneity was evident in approximately half of the subgroups.
Table 3

Meta-regression analyses of effect e-therapy vs controls on depression and anxiety outcomes (posttreatment and follow-up).

| Time point and outcome, variable | k^a | B coefficient | 95% CI | SE | P value^b | R2 (%)^c |
| --- | --- | --- | --- | --- | --- | --- |
| Posttreatment: depression | | | | | | |
| Initial severity | 26 | 0.07 | −0.06 to 0.21 | 0.06 | .26 | 4.15 |
| Percentage of males | 26 | −0.01 | −0.02 to 0.00 | 0.01 | .09 | 8.30 |
| Mean age (years) | 26 | 0.00 | −0.02 to 0.01 | 0.01 | .58 | 0.95 |
| Mean number of sessions completed | 17 | 0.02 | 0.00 to 0.05 | 0.01 | .08 | 10.23 |
| Risk of bias | 26 | −0.01 | −0.11 to 0.08 | 0.05 | .77 | 0.28 |
| Follow-up: depression^d | | | | | | |
| Initial severity | 16 | 0.25 | 0.12 to 0.39 | 0.06 | <.001 | 53.17 |
| Percentage of males | 16 | −0.01 | −0.03 to 0.01 | 0.01 | .13 | 11.64 |
| Mean age (years) | 16 | 0.01 | −0.01 to 0.04 | 0.01 | .38 | 3.88 |
| Mean number of sessions completed | 11 | 0.01 | −0.06 to 0.08 | 0.03 | .78 | 0.44 |
| Risk of bias | 16 | 0.02 | −0.11 to 0.14 | 0.06 | .78 | 0.40 |
| Posttreatment: anxiety | | | | | | |
| Initial severity | 17 | 0.12 | −0.07 to 0.31 | 0.09 | .17 | 8.84 |
| Percentage of males | 17 | −0.01 | −0.03 to 0.01 | 0.01 | .24 | 5.85 |
| Mean age (years) | 17 | −0.01 | −0.03 to 0.01 | 0.01 | .43 | 3.03 |
| Mean number of sessions completed | 11 | 0.02 | 0.00 to 0.05 | 0.01 | .07 | 23.93 |
| Risk of bias | 17 | −0.01 | −0.14 to 0.12 | 0.06 | .85 | 0.18 |

^a k: number of comparisons.

^b Alpha threshold Bonferroni adjusted to P<.01 for multiple testing.

^c R2: percentage of variance explained by the moderator.

^d Insufficient number of comparisons and limited between-study heterogeneity to warrant moderator analyses of anxiety outcomes at follow-up.

Table 4

Subgroup analysis of effect e-therapy versus controls on depression outcomes (posttreatment and follow-up).

| Time point and variable, subgroup | k^a | SMD^b (Hedges g)^c | 95% CI | I2 (%)^d | P value (between subgroups)^e,j | R2 (%)^f | NNT^g |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Posttreatment | | | | | | | |
| Control type: Wait list | 12 | 0.54^h | 0.34 to 0.75 | 79^h | .02 | 8.00 | 3.36 |
| Control type: TAU^i | 7 | 0.32^h | 0.06 to 0.58 | 79^h | | | 5.58 |
| Control type: Placebo | 7 | 0.20^h | 0.06 to 0.34 | 2 | | | 8.89 |
| E-therapy type: MoodGYM | 14 | 0.29^h | 0.15 to 0.43 | 57^h | .30 | 3.94 | 6.15 |
| E-therapy type: Beating the Blues | 5 | 0.55^h | 0.00 to 1.10 | 89^h | | | 3.30 |
| E-therapy type: Headspace | 3 | 0.36^h | 0.22 to 0.49 | 0 | | | 4.97 |
| E-therapy type: Other | 4 | 0.50^h | 0.32 to 0.68 | 2 | | | 3.61 |
| Self-help typology: Self-administered | 8 | 0.30^h | 0.15 to 0.45 | 65^h | .08 | 5.87 | 5.95 |
| Self-help typology: Predominantly self-help | 14 | 0.39^h | 0.16 to 0.62 | 76^h | | | 4.60 |
| Self-help typology: Minimal contact | 3 | 0.53^h | 0.39 to 0.67 | 0 | | | 3.42 |
| Self-help typology: Predominantly therapist delivered | 1^k | 0.61 | | | | | 2.95 |
| Setting: Clinical | 12 | 0.39^h | 0.22 to 0.57 | 68^h | .91 | 0.01 | 4.60 |
| Setting: Community | 14 | 0.38^h | 0.18 to 0.58 | 76^h | | | 4.72 |
| Focus problem: Depression | 12 | 0.39^h | 0.13 to 0.64 | 84^h | .74 | 0.79 | 4.60 |
| Focus problem: Anxiety or stress | 3 | 0.38^h | 0.25 to 0.52 | 0 | | | 4.72 |
| Focus problem: Both | 7 | 0.47^h | 0.29 to 0.65 | 0 | | | 3.84 |
| Analysis method: ITT^l | 9 | 0.39^h | 0.24 to 0.54 | 76^h | .50 | 0.49 | 4.60 |
| Analysis method: Completers | 3 | 0.33^h | 0.21 to 0.44 | 0 | | | 5.42 |
| Follow-up | | | | | | | |
| Control type: Wait list | 4 | 0.29 | −0.15 to 0.73 | 71^h | .75 | 1.19 | 6.15 |
| Control type: TAU | 7 | 0.29^h | 0.03 to 0.54 | 79^h | | | 6.15 |
| Control type: Placebo | 5 | 0.18^h | 0.00 to 0.36 | 0 | | | 9.87 |
| E-therapy type: MoodGYM | 9 | 0.21 | −0.01 to 0.43 | 73^h | .79 | 0.96 | 8.47 |
| E-therapy type: Beating the Blues | 4 | 0.31 | −0.03 to 0.64 | 73^h | | | 5.76 |
| E-therapy type: Other | 3 | 0.32^h | 0.05 to 0.59 | 51 | | | 5.58 |
| Self-help typology: Self-administered | 4 | 0.16 | −0.10 to 0.41 | 80^h | .46 | 1.29 | 11.10 |
| Self-help typology: Predominantly self-help | 10 | 0.29^h | 0.07 to 0.51 | 65^h | | | 6.15 |
| Self-help typology: Minimal contact | 1^k | 0.04 | | | | | 44.32 |
| Self-help typology: Predominantly therapist delivered | 1^k | 0.56 | | | | | 3.25 |
| Setting: Clinical | 10 | 0.33^h | 0.09 to 0.57 | 77^h | .13 | 4.68 | 5.42 |
| Setting: Community | 6 | 0.14^h | 0.07 to 0.21 | 0 | | | 12.68 |
| Focus problem: Depression | 10 | 0.22 | −0.01 to 0.46 | 77^h | .07 | 7.42 | 8.08 |
| Focus problem: Anxiety or stress | 1^k | 0.15 | | | | | 11.83 |
| Focus problem: Both | 3 | 0.49^h | 0.32 to 0.66 | 0 | | | 3.69 |
| Analysis method: ITT | 3 | 0.27^h | 0.09 to 0.45 | 71^h | | | 6.60 |
| Analysis method: Completers | 1^k | 0.17 | | | | | 10.45 |

ak: number of comparisons.

bSMD: standardized mean difference.

cPositive effect size indicates in favor of e-therapy.

dSignificance of associated Q statistic.

eAlpha threshold Bonferroni adjusted to P<.01 for multiple testing.

fR2: percentage of variance explained by the moderator.

gNNT: number needed to treat.

hSignificant at P<.05.

iTAU: treatment as usual.

jOne between-groups P value and R2 value are provided for each subgroup comparison, reported on the row of the first subgroup category.

kWhere there is only one comparison within a subgroup, 95% confidence intervals and I2 values are not reported.

lITT: intention to treat.

Sensitivity analyses explored the impact of extreme outliers and length of follow-up on the pooled depression effect sizes. Although removing outlier effects slightly reduced the effect of e-therapy on depression from 0.38 to 0.34 at posttreatment and from 0.25 to 0.22 at follow-up, outcomes still indicated small, significant benefits of e-therapy compared with controls. E-therapy demonstrated a small beneficial effect compared with controls at short-term and medium-term follow-up, which diminished at long-term follow-up. The full sensitivity analysis results are reported in Multimedia Appendix 4.

Assessment of Publication Bias

Visual inspection of the posttreatment funnel plot (Figure 3) suggested some asymmetry in the distribution of studies, indicating that the smaller included studies were more likely to report larger effects for e-therapy interventions. The trim-and-fill procedure imputed 4 smaller studies with effects more in favor of controls, producing a slightly reduced adjusted effect size still in favor of e-therapy (SMD 0.31; 95% CI 0.15 to 0.46). Statistical testing using Egger's regression did not detect significant asymmetry in the study distribution for posttreatment outcomes (B=−0.15; t25=1.49; P=.15), and assessment of the study distribution for follow-up depression outcomes also did not detect a significant influence of publication bias (B=0.31; t15=1.34; P=.20). Taken together, these multiple assessments suggest a minimal-to-small influence of publication bias on the overall e-therapy treatment effect for depression outcomes.
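Egger's test regresses the standardized effect (effect divided by its standard error) on precision (the reciprocal of the standard error); a nonzero intercept signals small-study asymmetry. The following is a minimal sketch of that idea on synthetic inputs, not the routine used in the paper, and all names are illustrative:

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress
    effect/SE on 1/SE by ordinary least squares.  Returns the intercept
    (the asymmetry estimate) and its t statistic.  Sketch only."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    y = effects / ses
    X = np.column_stack([np.ones_like(ses), 1.0 / ses])
    beta, _res, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)          # coefficient covariance
    return beta[0], beta[0] / np.sqrt(cov[0, 0])

# Synthetic studies whose effects inflate with SE (small-study bias):
ses = [0.1, 0.2, 0.3, 0.4, 0.5]
effects = [0.4 + 0.05 * s for s in ses]
intercept, t_stat = egger_test(effects, ses)
print(round(intercept, 3))
```

The intercept (here 0.05 by construction) would then be compared against a t distribution, as in the B and t values reported above.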
Figure 3

Funnel plot for distribution of studies reporting e-therapy versus controls post-treatment depression outcomes.


Effect of E-Therapy on Anxiety and Stress Outcomes

Overall, 17 treatment arm comparisons (extracted from 16 studies) totaling 4863 participants evaluated posttreatment e-therapy anxiety and stress outcomes in comparison with a control condition (e-therapy, n=2443; control, n=2420). The pooled SMD presented in Figure 4 indicated a small-to-moderate, significant treatment effect in favor of e-therapy, reflecting greater reductions in anxiety (SMD=0.43; 95% CI 0.24 to 0.63; Z=4.63; P<.001; GRADE=moderate). The NNT was 4.18, indicating that for approximately every 4 patients who received e-therapy, there was one additional beneficial anxiety and stress outcome compared with a control condition. The between-study variation was significant, indicating substantial heterogeneity (I2=73%; 95% CI 56% to 83%; Q=59.13; P<.001).

Furthermore, 10 studies provided follow-up data on anxiety and stress outcomes for e-therapies versus control conditions for 3983 participants (e-therapy, n=2000; control, n=1983). At follow-up, there was a small, significant pooled SMD in favor of e-therapy compared with controls (Figure 4; SMD=0.23; 95% CI 0.17 to 0.29; Z=8.30; P<.001; NNT=7.74; GRADE=low). The between-study variation was minimal and not significant (I2=0%; 95% CI 0% to 46%; Q=6.31; P=.71).
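The I2 values reported throughout follow directly from Higgins' formula given the Q statistic and the number of comparisons, as a short check illustrates:

```python
def i_squared(q: float, k: int) -> float:
    """Higgins' I2 statistic: the percentage of total variation across k
    comparisons attributable to heterogeneity rather than chance,
    I2 = max(0, (Q - df) / Q) * 100, with df = k - 1."""
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0

# Consistent with the values reported in this section:
print(round(i_squared(59.13, 17)))  # posttreatment anxiety/stress -> 73
print(round(i_squared(6.31, 10)))   # follow-up: Q below df -> 0
```

When Q falls below its degrees of freedom, as at follow-up here, I2 is truncated at 0%.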
Figure 4

Forest plot of post-treatment and follow-up stress/anxiety outcome effect sizes (ES) for e-therapy versus controls.

The significant heterogeneity between studies at posttreatment was investigated with meta-regression (Table 3) and subgroup moderator analyses (Table 5). Minimal heterogeneity and an insufficient number of studies (k<10) meant that moderator analysis of follow-up effects was not warranted. Meta-regression analyses found that variations in e-therapy posttreatment anxiety and stress effects were not explained by initial severity, gender, age, number of sessions, or study quality. Subgroup analyses showed that posttreatment effect sizes for anxiety and stress symptoms did not differ significantly across control conditions, although e-therapy versus wait list produced a moderate, significant effect compared with the small effects observed against TAU and placebo controls (the placebo comparison was not significant). Posttreatment effects were not significantly affected by e-therapy type, recruitment setting, focus problem, or analysis method. Self-help typology suggested larger effects for therapies with greater therapist involvement (P=.02); however, this effect did not remain significant after Bonferroni correction. Substantial significant heterogeneity was evident in about a quarter of the subgroups.

Sensitivity analyses explored the impact of extreme outliers and length of follow-up on the pooled anxiety and stress effect sizes. Although removing outlier effects slightly reduced the e-therapy treatment effect on anxiety from 0.43 to 0.37 at posttreatment and from 0.23 to 0.22 at follow-up, the outcomes still indicated small, significant benefits of e-therapy compared with controls. E-therapy demonstrated a small beneficial effect compared with controls at both short-term and medium-term follow-up (insufficient studies of long-term follow-up were available). The full sensitivity analysis results are reported in Multimedia Appendix 4.
Table 5

Subgroup analysis of effect e-therapy versus controls on anxiety and stress outcomes (posttreatment).

| Time point^a and variable, subgroup | k^b | SMD^c (Hedges g)^d | 95% CI | I2 (%)^e | P value (between subgroups)^f,k | R2 (%)^g | NNT^h |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Posttreatment | | | | | | | |
| Control type: Wait list | 9 | 0.55^i | 0.24 to 0.86 | 84^i | .41 | 2.99 | 3.04 |
| Control type: TAU^j | 3 | 0.40^i | 0.35 to 0.45 | 0 | | | 4.49 |
| Control type: Placebo | 5 | 0.26 | −0.02 to 0.55 | 28 | | | 6.86 |
| E-therapy type: MoodGYM | 7 | 0.44^i | 0.01 to 0.86 | 80^i | .86 | 0.50 | 4.09 |
| E-therapy type: Beating the Blues | 3 | 0.40^i | 0.35 to 0.45 | 0 | | | 4.49 |
| E-therapy type: Other | 7 | 0.46^i | 0.24 to 0.68 | 61^i | | | 3.92 |
| Self-help typology: Self-administered | 4 | 0.23^i | 0.09 to 0.36 | 8 | .02 | 13.38 | 7.74 |
| Self-help typology: Predominantly self-help | 8 | 0.47^i | 0.11 to 0.83 | 74^i | | | 3.84 |
| Self-help typology: Minimal contact | 5 | 0.60^i | 0.36 to 0.83 | 45 | | | 3.04 |
| Self-help typology: Predominantly therapist delivered | 0^l | | | | | | |
| Setting: Clinical | 8 | 0.44^i | 0.33 to 0.54 | 0 | .99 | 0.00 | 4.09 |
| Setting: Community | 9 | 0.44^i | 0.10 to 0.78 | 84^i | | | 4.09 |
| Focus problem: Depression | 3 | 0.49^i | 0.05 to 0.93 | 88^i | .85 | 0.82 | 3.69 |
| Focus problem: Anxiety or stress | 5 | 0.44^i | 0.27 to 0.62 | 0 | | | 4.09 |
| Focus problem: Anxiety or depression | 7 | 0.58^i | 0.14 to 1.02 | 70^i | | | 3.14 |
| Analysis method: ITT^m | 7 | 0.47^i | 0.27 to 0.68 | 75^i | .06 | 5.76 | 3.84 |
| Analysis method: Completers | 2 | 0.18 | −0.05 to 0.42 | 0 | | | 9.87 |

aInsufficient number of comparisons and limited between-study heterogeneity to warrant moderator analyses of anxiety outcomes at follow-up.

bk: number of comparisons.

cSMD: standardized mean difference.

dPositive effect size indicates in favor of e-therapy.

eSignificance of associated Q statistic.

fAlpha threshold Bonferroni adjusted to P<.01 for multiple testing.

gR2: percentage of variance explained by the moderator.

hNNT: number needed to treat.

iSignificant at P<.05.

jTAU: treatment as usual.

kOne between-groups P value and R2 value are provided for each subgroup comparison, reported on the row of the first subgroup category.

lWhere there are no comparisons within a subgroup, SMD, 95% confidence intervals and I2 values are not reported.

mITT: intention to treat.

Visual inspection of the funnel plot in Figure 5 suggested some asymmetry in the distribution of studies reporting posttreatment anxiety and stress outcomes. However, the trim-and-fill procedure imputed no missing studies in favor of controls or minimal between-group differences, producing an adjusted effect size identical to the initial pooled SMD. Egger's regression did not detect significant asymmetry in the study distribution for posttreatment anxiety and stress outcomes (B=−0.35; t16=1.82; P=.09). Taken together, the multiple assessments of publication bias imply a minimal-to-small influence of reporting bias on the overall e-therapy treatment effect for anxiety and stress outcomes. There were insufficient studies (k<10) to enable accurate assessment of publication bias for follow-up anxiety and stress outcomes.
Figure 5

Funnel plot for distribution of studies reporting e-therapy versus controls post-treatment anxiety/stress outcomes.


Discussion

Principal Findings

This study has been the first attempt to assess the breadth and quality of the evidence base for NHS-recommended e-therapies and to quantify the efficacy of this health technology through a meta-analysis of the clinical trial evidence base. Only 15% (7/48) of the NHS-recommended e-therapies had eligible RCT studies underpinning their clinical evaluation. Of the 7 e-therapies with RCT evidence, 2 contributed only a single RCT to the meta-analysis, and reporting of version numbers was poor and variable across studies. These findings are at odds with the philosophy of evidence-based practice, whereby clinical guidelines are underpinned by gold standard evidence of efficacy.

Overall, however, the available good quality evidence shows that the e-therapies tested do help adult participants better manage depression, anxiety, and stress compared with controls, and this appears to be a durable effect in the short to medium term. The magnitude of the e-therapy treatment effects found here mirrors the effect sizes seen in the overall LI intervention evidence base (g=0.2-0.5) [5]. The NNT analysis suggests that for every 5 patients treated with an e-therapy, one has a good outcome. The acceptability and efficacy of the e-therapies without RCT evidence (ie, 85%, 41/48, of those actually recommended for use in the NHS) remain open to question. It would be premature to clinically champion any single e-therapy as the most effective at this point in time. MoodGYM has been exposed to the most evaluation and scrutiny, but it was unclear whether differing versions were being tested.

The acceptability of e-therapies can be called into question because of the higher dropout rates compared with controls reported here. LI psychological interventions, and e-therapies in particular, have previously been criticized for high dropout rates as an index of poor patient acceptability, attributed to the low therapist contact and time approach [13,16,57,58].
Dropout rates may also have been influenced by multiple (unmeasured) factors such as the poor face validity of the CBT theoretical approach [59], low readiness to change, poor attitudes to the delivery of eHealth [60], and the usability or characteristics of the web or app design itself [61,62]. Ongoing issues with poor acceptability will remain an obstacle in the commissioning and delivery of e-therapies as frontline LI psychological interventions. Clearly, the clinical utility of any e-therapy needs to be considered in a matrix of cost, safety, acceptability, feasibility, and efficacy evidence [63].

Comparison of study characteristics highlighted noteworthy commonalities and differences across and between e-therapies. First, 5 of the 7 e-therapies evaluated were based on CBT (one other was based on CBT alongside other approaches). This mirrors the LI intervention field as a whole, which tends to be based on variants of CBT [64]. Recent innovations in e-therapies have included acceptance and commitment therapy [65], interpersonal psychotherapy [66], mindfulness [67], and psychodynamic psychotherapy [68]. Second, 6 of the 7 e-therapies were web based, so the clinical utility of smartphone-based app delivery of NHS-recommended e-therapies has not been appropriately empirically evaluated.

Variations in e-therapy treatment effects were explored with moderator analyses, as a previous individual participant meta-analysis of e-therapies for depression found few significant moderators [13]. Significantly larger e-therapy effects were apparent when compared with wait list controls (for posttreatment depression outcomes), for patients with greater baseline severity (for follow-up depression outcomes), and when there was a greater amount of therapist input (for end of treatment anxiety and stress outcomes).
However, the effects of control type and amount of therapist input did not remain significant after accounting for multiple testing, so these conclusions should be treated with caution. Larger wait list comparison effects are commonly observed in psychotherapy trials and, when taken in isolation, can lead to overestimated treatment effects [69]. E-therapy effects shrank as the activeness of comparators increased. In this review, baseline severity was a significant moderator only at follow-up; higher baseline depression severity has previously been shown to predict better outcomes for internet-based CBT [70]. The trend for e-therapies with a greater amount of therapist input generating better outcomes has been widely reported [71-73]. It is worth noting that although some e-therapy typologies in this meta-analysis involved therapist contact, that contact time was still relatively brief because of the LI approach: 75% of studies (18/24, covering 4 different apps) involved less than 30 min of real-time person-to-person support. The efficacy of LI interventions appears to be better enabled when supported by even brief interpersonal contact [72,73].

Limitations

This review has several limitations, which also highlight how the e-therapy evidence base could be further developed. First, although the included studies were restricted to high-quality RCT evidence, the GRADE approach highlighted issues with inconsistency across results and treatment comparisons, and some imprecision, resulting in meta-analytic comparisons of moderate-to-low quality. Second, there are limitations concerning the generalizability of the findings. This review was limited to the treatment of depression, anxiety, and stress with e-therapies and so cannot comment on applicability to other clinical presentations. Services in the United Kingdom use the NICE guidelines to organize the delivery of treatments for anxiety and depression via stepped-care principles; the results of this meta-analysis are therefore less applicable to different approaches to mental health delivery, for example, stratified care [74]. The inclusion of only those e-therapies recommended by the NHS also excluded e-therapies that are very similar in technical format and content.

Third, there were some methodological weaknesses that may have introduced bias, and the conclusions should be treated with caution. The lack of formal screening and selection of articles by a second reviewer is a major limitation that may have biased which studies were selected for inclusion and therefore influenced the results. Similarly, the quality ratings of the studies were made by raters who were not independent of the meta-analysis, and levels of agreement were not optimal [75]. In addition, restrictions in the search strategy may have missed eligible studies or excluded studies evaluating an NHS e-therapy for other clinical presentations or outcomes [76].
Given that eHealth is a rapidly expanding area in which reviews become outdated relatively quickly, the time elapsed since the final searches (April 2019) means there will undoubtedly be additional relevant e-therapy trials now available. Since the final searches, trials of 3 NHS e-therapies (all with existing trial evidence) have been published: an RCT of SilverCloud used in IAPT [77], and evaluations of MoodGYM [78] and Headspace [79,80] in student samples. Finally, synthesis and analysis were restricted by the data available from the included studies. The number of trials conducted was small, which restricted the power and range of possible moderator analyses. The original studies had common methodological flaws: limited diagnostic assessment of participants, inconsistent reporting of e-therapy version numbers, overuse of self-report measures rather than independent assessment, lack of reporting of adverse event rates [63], lack of measures of e-therapy adherence, and lack of true long-term follow-up. The frequent use of passive controls risked inflating treatment effect sizes in the meta-analyses [81], and there were insufficient active comparators to establish the efficacy of e-therapies versus other therapies. There was no standard definition of dropout or treatment completion across the studies; therefore, we were forced to adopt the definition used by each study. It is acknowledged that dropout is a limited proxy for acceptability [82] and that wider indices of acceptability also include understanding barriers to e-therapy engagement.

Research and Service Implications

Finding studies relating to a specific e-therapy by searching for its name in academic databases proved difficult because, before commercialization, many e-therapy platforms were known by their initial project name rather than their eventual product name. A solution would be to ensure that e-therapy developers and researchers register their software on a public database with a unique identifier to be referenced in any subsequent publications. Trials of e-therapies should also be reported according to the CONSORT-EHEALTH checklist [56], and the e-therapy version should be indicated using semantic versioning (ie, reporting the major, minor, and patch version, eg, version 2.1.1) to clarify whether the program being evaluated has been updated.

Several e-therapies included in this review were developed to be available without clinical support or guidance (eg, MoodGYM and Headspace). Given that e-therapies outperform controls (with moderate effects compared with wait list), they may offer particular promise as wait list interventions, and studies of this use case are needed. Although unguided e-therapy may benefit patients waiting for face-to-face psychological interventions, the trend observed in this review and findings from previous studies imply that some clinician involvement is important for ensuring good outcomes when an e-therapy is the sole intervention [72,73]. How e-therapies can be effectively blended with face-to-face psychological therapies is currently poorly understood and demands more research. Given the recent availability of differing theoretical approaches, patient choice of e-therapy can now be offered and researched. Treatment completion rates need to be consistently reported, and trials should adopt the ITT approach to reduce bias in treatment effects.
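Part of the appeal of semantic version strings of the suggested major.minor.patch form is that they are trivially machine-comparable, so trial reports and reviews can establish whether two studies evaluated the same build of an e-therapy; a minimal illustration (the helper name is ours, not from any standard library):

```python
def parse_semver(version: str) -> tuple:
    """Parse a 'major.minor.patch' version string (eg, '2.1.1') into a
    tuple of integers for element-wise comparison."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

# Tuples compare element-wise, so version ordering is straightforward:
print(parse_semver("2.1.1"))                     # (2, 1, 1)
print(parse_semver("2.1.1") > parse_semver("2.0.9"))  # True
```

Comparing parsed tuples rather than raw strings avoids errors such as "2.10.0" sorting before "2.9.0" lexicographically.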
Consistent reporting of safety issues (eg, via untoward incident rates) is needed for e-therapies, and more health economic evaluations need to be embedded in clinical trials. A dropout meta-analysis of this evidence base (with independent study quality ratings of all studies using the latest version of the Cochrane risk of bias tool) is now indicated to better index e-therapy acceptability [83]. Little is known about why patients drop out of e-therapies, and qualitative investigations would be useful here. Treatment adherence (ie, how much time participants spend on an e-therapy and how many modules they complete) needs to be more consistently reported. The role of moderating factors of treatment outcome in e-therapies needs to be better researched, particularly variables such as blended versus pure e-therapy approaches, time spent on the app, and theoretical approach. E-therapies potentially still play an important role in clinical services, regardless of the organizational system used to coordinate delivery of care [84], particularly when the approach has been well evaluated.

Conclusions

In this meta-analysis of gold standard clinical trials, e-therapies were found to be efficacious LI psychological interventions, producing small beneficial effects for adults with depression, anxiety, and stress compared with controls. However, only a relatively small proportion of NHS-recommended e-therapies had been subjected to such gold standard evaluation. Although these conclusions should be considered in light of the methodological limitations, the targeted nature of this review of NHS-recommended e-therapies still has relevance to the global field of e-therapies, particularly by highlighting the need to consistently integrate high-quality, controlled evaluation into the technological development of e-therapies to ensure safe and evidence-based e-therapy practice in routine clinical services. Technological development and scrupulous evaluation of e-therapies need to be conducted in parallel and considered in equipoise.