Literature DB >> 33598832

The Effectiveness of mHealth and eHealth Tools in Improving Provider Knowledge, Confidence, and Behaviors Related to Cancer Detection, Treatment, and Survivorship Care: a Systematic Review.

Cindy Soloe, Olivia Burrus, Sujha Subramanian

Abstract

Mobile health (mHealth) and eHealth interventions have demonstrated potential to improve cancer care delivery and disease management by increasing access to health information and health management skills. However, there is a need to better understand the overall impact of these interventions in improving cancer care and to identify best practices to support intervention adoption. Overall, this review intended to systematically catalogue the recent body of cancer-based mHealth and eHealth education and training interventions and assess the effectiveness of these interventions in increasing health care professionals' knowledge, confidence, and behaviors related to the delivery of care along the cancer continuum. Our initial search yielded 135 articles, and our full review included 23 articles. We abstracted descriptive data for each of the 23 studies, including an overview of interventions (i.e., intended intervention recipients, location of delivery, topic of focus), study methods (i.e., design, sampling approach, sample size), and outcome measures. Almost all the studies reported knowledge gain as an outcome of the education interventions, whereas only half assessed provider confidence or behavior change. We conclude that there is some evidence that mHealth and eHealth interventions lead to improvements in cancer care delivery, but this is not a consistent finding across the studies reviewed. Our findings also identify gaps that should be addressed in future research, offer guidance on the utility of mHealth and eHealth interventions, and provide a roadmap for addressing these gaps.
© 2021. American Association for Cancer Education.

Keywords:  Cancer; Provider training; eHealth; mHealth

Year:  2021        PMID: 33598832      PMCID: PMC7889413          DOI: 10.1007/s13187-021-01961-z

Source DB:  PubMed          Journal:  J Cancer Educ        ISSN: 0885-8195            Impact factor:   1.771


Introduction

Cancer is the second leading cause of death globally. An estimated 9.6 million deaths worldwide in 2018, approximately 1 in 6 deaths overall, were attributed to cancer [1]. The economic burden of cancer is substantial in all countries because of high health care spending and lost productivity caused by morbidity and premature death. As cancer treatment costs increase, prevention and early detection efforts become more cost-effective and potentially cost-saving [2]. Additionally, early detection, high-quality treatment, and survivorship care can lead to improved health outcomes.

Mobile health (mHealth) has demonstrated substantial potential to improve health care delivery and disease management by increasing access to health information and health management skills [3]. Over the past several years, mHealth has increasingly been adopted to provide efficient and effective health care [4]. eHealth, which encompasses mHealth but includes a broader set of information and communication technologies, is also essential for enhancing health care education and training [5]. eHealth comprises multiple intervention types, including telehealth, telemedicine, mHealth, electronic medical or health records (EMR/EHR), big data, wearables, and even artificial intelligence [1]. The emergence of the COVID-19 global pandemic has expedited the wide-scale adoption of virtual and digital care [6]. As such, understanding how to support mHealth and eHealth effectiveness is more critical than ever.

Although mHealth could improve patient care across the cancer continuum [7], there is a need to better understand the overall impact of these interventions in improving cancer care and to identify best practices to support intervention adoption. Most prior reviews of eHealth interventions have focused broadly on noncommunicable diseases or on professional education in general. The systematic review by Campbell et al. [8] was cancer specific but evaluated only the effectiveness of online cancer education for nurses and allied professionals. Others reviewed the effectiveness of mHealth approaches for noncommunicable disease (NCD) care but focused only on low- and middle-income countries [9, 10] or broadly explored components of eHealth (including mHealth) effectiveness in NCD management [11-13]. No systematic review currently addresses cancer-specific eHealth and mHealth education and training interventions, and this gap limits our understanding of the effectiveness of these interventions for improving cancer care delivery. Cancer has many disease-specific aspects, including screening, early detection, and diagnostic processes, along with a multitude of technologies to support these processes. This review intended to systematically catalogue the recent body of cancer-focused mHealth and eHealth education and training interventions and to assess the effectiveness of these interventions in increasing health care professionals' knowledge, confidence, and behaviors related to the delivery of care along the cancer continuum. Findings from this review can offer guidance on the utility of mHealth and eHealth interventions and provide a roadmap for addressing gaps in the literature.

Methods

Using the Community Guide Systematic Review Methods as a framework [14], we implemented the following steps in our review process: (1) identify what interventions the review will cover; (2) define a conceptual approach for evaluating the interventions; (3) identify and apply criteria for including or excluding studies; (4) search for, retrieve, and screen abstracts; (5) review the full text of selected studies and abstract relevant study characteristics (as determined in the conceptual approach); (6) assess the quality of each study; (7) summarize the evidence and identify gaps; and (8) develop recommendations and findings.

In spring 2020, we engaged RTI International's Library and Information Services unit to systematically search the literature on the use of mHealth learning interventions to improve health care professionals' delivery of cancer care. The search drew from three databases (PubMed, Embase, and Web of Science). Key search terms included mHealth, cancer, noncommunicable disease, chronic disease, training, and health care provider. Although our initial search strategy included noncommunicable disease and chronic disease terms, we eliminated these terms after the search yielded a higher number of results than anticipated. The final search focused on cancer mHealth trainings; the full search strategy is in Appendix A. Because the field of mHealth is rapidly evolving, we restricted our search to studies published from 2010 to 2020. Only articles available in English were included, and no geographic parameters were applied.

Two reviewers independently assessed titles and abstracts, then reviewed full-text articles, extracted relevant data, and assessed study quality. Articles were included if they presented evaluations of mHealth or eHealth approaches to train health care workers who provide cancer care. Articles were excluded if they did not focus on cancer, described intervention development only (i.e., no evaluation), focused on pediatric care, included an evaluation limited to satisfaction assessment, or could not be retrieved in full (i.e., only an abstract was available). Review articles were also excluded. For each study included, a single reviewer abstracted relevant study characteristics (i.e., mode of delivery, study design, sampling approach) and data for outcomes of interest into a structured form. A second reviewer checked all data for completeness and accuracy. A single reviewer assessed each study's methodological quality using applicable National Institutes of Health (NIH) Quality Assessment Tools [15] and a standardized approach to categorize manuscripts. The review team then discussed the quality scores to ensure consistency. We summarized the abstracted data and quality ratings into evidence tables, including an overview of reviewed manuscript characteristics (i.e., topic, study design, sampling details, primary outcomes; Table 1); knowledge outcome measurement and findings (Table 2); and confidence, behavior, and intention outcome measurement and findings (Table 3). Within the outcome tables, we report within-group and between-group measurement design studies separately.
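The inclusion and exclusion criteria above amount to a simple conjunctive filter applied to each screened record. The following is a minimal illustrative sketch of that logic; the field names and sample records are our own invention, not data from the review:

```python
# Hypothetical sketch of the abstract-screening rules described above.
# Field names and example records are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Record:
    cancer_focus: bool        # study addresses cancer care
    has_evaluation: bool      # outcomes evaluated (not development-only)
    pediatric_only: bool      # focused on pediatric care
    satisfaction_only: bool   # evaluation limited to satisfaction assessment
    full_text_available: bool # full article (not abstract-only) retrievable
    is_review: bool           # review articles were excluded

def include(r: Record) -> bool:
    """Apply the stated inclusion/exclusion criteria to one record."""
    return (r.cancer_focus and r.has_evaluation and r.full_text_available
            and not r.pediatric_only and not r.satisfaction_only
            and not r.is_review)

candidates = [
    Record(True, True, False, False, True, False),   # eligible
    Record(True, False, False, False, True, False),  # development only -> excluded
    Record(False, True, False, False, True, False),  # not cancer -> excluded
]
included = [r for r in candidates if include(r)]
print(len(included))  # 1
```

In practice each criterion was judged by two independent reviewers rather than computed, but the decision rule each reviewer applied has this all-criteria-must-hold form.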
Table 1

Intervention descriptions, methods, outcomes, quality rating

| Author, year | Audience, location | Topic | Mode of delivery (a) | Study design | Sampling approach | Sample size | Primary outcomes of interest (X = measured) | Quality rating |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Asgary et al. (2016) | Nurses, Ghana | Cervical cancer detection | Blended online and in person; SMS based (WhatsApp) | Post-only | Non-probability sampling | 15 | X | H |
| Beattie et al. (2014) | Nurses and social workers, Australia | Supportive cancer care needs and screening | Online only: asynchronous | Multiple time series | Non-probability sampling | 18 | X X | H |
| Blazer et al. (2012) | MDs, genetic counselors, advanced practice nurses, USA | Genetic cancer risk assessment | Blended online and in person | Quasi-experimental; comparison intervention: the course as originally designed, with all sessions delivered face to face | Probability sampling | 96 | X X X | M |
| Buriak and Potter (2014) | General health care providers, USA | Breast, prostate, colorectal, and non-Hodgkin lymphoma survivorship | Online only: asynchronous | Pre-post | Probability sampling | 1521 | X X | M |
| Choma and McKeever (2015) | Nurses, USA | Cervical cancer detection in adolescents | Online only: asynchronous | Pre-post | Non-probability sampling | 48 | X X | L |
| Cueva et al. (2018) | Community health aides/practitioners, USA | Cancer prevention and detection | Online only: asynchronous | Post-only | Non-probability sampling | 79 | X X X | M |
| Egevad et al. (2019) | Pathologists, Southeast Asia and South America | Prostate cancer detection | Blended online and in person | Pre-post | Non-probability sampling | 224 | X | M |
| Eide et al. (2013) | PCPs and GPs, USA | Skin cancer detection | Online only: asynchronous | Pre-post | Non-probability sampling | 54 | X X | H |
| Gulati et al. (2015) | PCPs and GPs, UK | Skin cancer detection | Online only: asynchronous | Confidence: pre-post | Non-probability sampling | 1002 | X X | M |
| | | | | Knowledge: post-only | Probability sampling | 967 | X | |
| Ikehara et al. (2019) | Endoscopists, Japan | Gastric cancer detection | Online only: asynchronous | Pre-post (b) | Probability sampling | 365 | X | M |
| Jiwa et al. (2014) | PCPs and GPs, Australia | Breast cancer treatment | Online only: asynchronous | Post-only | Non-probability sampling | 50 | X | L |
| Karvinen et al. (2017) | Oncology nurses, Canada | Cancer survivorship | Online only: asynchronous | RCT; control intervention: a list of reputable, publicly available websites concerning physical activity and cancer | Probability sampling | 54 | X X X | H |
| Krishnamachari et al. (2018) | PCPs, OB/GYNs, other non-PCP specialists, USA | Breast and ovarian cancer detection and treatment | Online only: asynchronous | RCT (c); control intervention: web training offered after the knowledge survey (pre-test) as opposed to before (post-test) | Non-probability sampling | 136 | X | L |
| Leung et al. (2019) | Nurses, Canada | Cancer pain management | Online only: synchronous and asynchronous | Pre-post | Non-probability sampling | 246 | X | L |
| Markova et al. (2013) | PCPs and GPs, USA | Skin cancer detection | Online only: asynchronous | RCT; control intervention: online educational program on assessment and counseling of diet, physical activity, and weight status | Non-probability sampling | 57 | X X X | M |
| Moreira et al. (2019) | Radiographers, Portugal | Breast cancer detection | Online only: synchronous and asynchronous | Pre-post | Non-probability sampling | 64 | X X | M |
| Murgu et al. (2018) | Pulmonologists, respirologists, medical oncologists, pathologists, thoracic surgeons, and allied health professionals, USA and Europe | Lung cancer detection and treatment | Blended online and in person | Pre-post with additional long-term survey | Non-probability sampling | 187 | X X | H |
| Palmer et al. (2011) | PCPs and GPs, USA | Breast cancer detection | Online only: asynchronous | Pre-post | Non-probability sampling | 103 | X | L |
| Quinn et al. (2019) | Oncology nurses, USA | Cancer treatment and survivorship | Online only: asynchronous | Pre-post and multiple time series | Non-probability sampling | 233 | X X | H |
| Roxo-Goncalves et al. (2017) | PCPs and dentists, Brazil | Oral cancer detection | Online only: asynchronous | Post-only; comparison groups | Sampling methods not described | 30 | X | L |
| Tulsky et al. (2011) | Oncologists, USA | Cancer treatment | Online only: asynchronous; computer based | RCT; control intervention: communication lecture only (no tailored CD-ROM) | Non-probability sampling | 48 | X | H |
| Viguier et al. (2015) | Rheumatologists, France | Skin cancer detection | Online only: asynchronous | RCT; control intervention: no training | Probability sampling | 141 | X X | H |
| Wee et al. (2016) | Physicians, PAs, and NPs | Oral cancer detection | Online only: asynchronous | Post-only | Non-probability sampling | 15 | X X | L |

GP, general practitioner; H, high; L, low; M, medium; NP, nurse practitioner; PA, physician assistant; PCP, primary care practitioner

(a) All interventions are web-based unless otherwise noted

(b) Secondary analysis of an RCT; however, the authors used the dataset as a pre-post design in this instance

(c) No significant differences in demographics between the intervention and control groups

Table 2

Knowledge outcome measurement and findings

| Author, year | Measurement | Findings | Statistical significance |
| --- | --- | --- | --- |
| Change in mean knowledge scores (within groups: pre-post) | | | |
| Buriak and Potter (2014) | Change in mean knowledge scores | 1.4 points (out of 4) | Y |
| Choma and McKeever (2015) | Change in mean knowledge scores | −1.19 (7.12 to 6.02) | Y |
| Egevad et al. (2019) | Change in mean knowledge scores | 11.5% (60.7% to 72.2%) | Y |
| | Change by country resource level: low resource | 15.4% (47.4% to 62.8%) | Y |
| | Lower-middle resource | 11.5% (61.0% to 72.5%) | Y |
| | Middle-upper resource | 10.6% (65.8% to 76.4%) | Y |
| Eide et al. (2013) | Change in mean score for appropriate diagnosis and management (pre-post; 6 months post) | 13% (36.1% to 46.3%); 5.2% (36.1% to 41.3%) | Y |
| | Change by total previous skin cancer training courses: 0 | 17.4% (33.3% to 50.7%) | Y |
| | 1 | 11.6% (35.1% to 46.7%) | Y |
| | 2 | 8% (36.7% to 44.7%) | Y |
| | 3 | 9.3% (44.0% to 53.3%) | Y |
| Murgu et al. (2018) | Change in mean knowledge scores | 13% (52% to 65%) | Not reported |
| Palmer et al. (2011) | Change in mean knowledge scores | 24% (70% to 94%) | Y |
| Quinn et al. (2019) | Change in mean knowledge scores | 4% (75% to 79%) | Y |
| Change in mean knowledge scores (between groups: pre-post and intervention vs. control) | | | |
| Blazer et al. (2012) | Change in mean knowledge scores | Intervention: 22% (67% to 89%); control: 16% (65% to 81%) | Y |
| Karvinen et al. (2017) | Change in mean knowledge scores | Intervention: 0.3 (5.96 to 6.26); control: −0.01 (6.27 to 6.23) | N |
| Krishnamachari et al. (2018) | Difference in mean knowledge scores across 9 items (mean [range]) | 83.09% [64.71% to 92.75%] vs. 72.23% [32.84% to 91.04%] | N (n = 8); Y (n = 1) |
| Markova et al. (2013) | Difference in mean knowledge scores: 1 month post | 58% vs. 49% | Y |
| | 12 months post | 69% vs. 59% | N |
| Moreira et al. (2019) | Median improvement in knowledge scores | 4 percentile points | Y |
| Viguier et al. (2015) | Difference in mean scores on simulated diagnostic accuracy | 13.4 vs. 11.2 | Y |
| | Difference in mean knowledge scores | 21.7 vs. 20.8 | N |
| Other knowledge outcome measurements | | | |
| Asgary et al. (2016) | Total agreement rate between all VIA diagnoses made by all nurses and the expert reviewer | 95% | Y |
| | Mean (SD) rate of agreement between each nurse and the expert reviewer | 86.6% (12.8%) | Not reported |
| | Agreement rates for positive and negative cases | 61.5% (positive cases); 98.0% (negative cases) | Not reported |
| Beattie et al. (2014) | Change in perceived knowledge (pre-post; 3 months) | 1.08 (1.97 to 3.05); 0.75 (1.97 to 2.72) | Not reported |
| Ikehara et al. (2019) | Change in diagnostic accuracy [area under the receiver operating characteristic curve] (pre-post) | 0.11 (0.73 to 0.84) | Y |
| Jiwa et al. (2014) | Change in the proportion of simulated cases diagnosed correctly (phase 1 to phase 2) | 9.7% (85% to 94.7%) | Y |

VIA, visual inspection with acetic acid; SD, standard deviation

Table 3

Confidence, behavior, and intention outcome measurement and findings

| Author, year | Measurement | Findings | Statistical significance |
| --- | --- | --- | --- |
| Change in confidence scores (within groups) | | | |
| Beattie et al. (2014) | Change in mean confidence rating (pre-post; 3 months) | 0.93 (2.39 to 3.32); 0.87 (2.39 to 3.28) | Not reported |
| Blazer et al. (2012) | Change in professional efficacy rating (pre-post; intervention vs. control) | Intervention: 1.0 (3.3 to 4.3); control: 1.1 (3.1 to 4.2) | N |
| Gulati et al. (2015) | Confidence in recognizing skin lesions (pre-post) | Lower in 2013 than 2011 (a) | Y |
| | Confidence in knowledge of malignant skin lesion referral pathways (pre-post) | Higher in 2013 than 2011 (a) | Y |
| Leung et al. (2019) | Change in confidence in knowledge and skills (pre-post) (b) | 18.2% (57.5% to 75.7%) | Y (c) |
| Murgu et al. (2018) | Change in confidence (pre-post; high or very high confidence rating); average percent increase across 6 measures | 20% | Not reported |
| Change in confidence scores (between groups) | | | |
| Blazer et al. (2012) | Difference in professional efficacy rating (intervention vs. control) | 1.1 (3.1 to 4.2) vs. 1.0 (3.3 to 4.3) | N |
| Karvinen et al. (2017) | Difference in mean self-efficacy rating (intervention vs. comparison) | 0.93 (8.2 vs. 7.27) | Y |
| Markova et al. (2013) | Difference in confidence in ability to perform a skin cancer total body skin examination (intervention vs. control) | 0.6 (3.6 vs. 3.0) | Y |
| | Difference in confidence to counsel patients about reducing sun exposure | 0.5 (4.4 vs. 3.9) | Y |
| Viguier et al. (2015) | Difference in the mean level of self-confidence rating (intervention vs. control) | 0.1 (5.6 vs. 5.7) | N |
| Change in behavior and intention scores (within groups) | | | |
| Beattie et al. (2014) | Self-reported change in use of tool (pre to 3-month post) | 20.2% (57.6% to 77.8%) | Not reported |
| Blazer et al. (2012) | Observed difference in mean increase in case-based skills (intervention vs. comparison) | 3 (12 vs. 9) | N |
| Gulati et al. (2015) | Observed percent change in number of GP referrals for suspected skin cancer (intervention vs. comparison) | 1.3% (9.7% vs. 11.0%) | N |
| | Observed percent change in number of melanoma diagnoses | 4.1% (13.0% vs. 8.9%) | N |
| | Observed percent change in number of non-melanoma skin cancer diagnoses | 3.6% (14.1% vs. 17.7%) | N |
| Moreira et al. (2019) | Self-reported change in patient care skills; median change across 24 self-reported measures | 1 (4 to 5) | Y |
| Tulsky et al. (2011) | Observed mean number of empathetic statements post-intervention (intervention vs. comparison) | 0.4 (0.8 vs. 0.4) | Y |
| | Observed continuer response to empathetic opportunity | 0.2 (0.4 vs. 0.2) | Y |
| Change in behavior and intention scores (between groups) | | | |
| Karvinen et al. (2017) | Self-reported difference in physical activity counseling practice post-intervention (intervention vs. control) | 5.6% (62.8% vs. 57.2%) | N |
| Markova et al. (2013) | Self-reported difference in intention to discuss cancer prevention/control with patients (intervention vs. control; 3-item average) | 0.5 | Y |
| | Self-reported difference in skin cancer behaviors with patients (intervention vs. control; 4-item average) | 0.8 | Y |
| | Observed difference in patient chart documentation of biopsy at first follow-up post-intervention (intervention vs. control) | 1% vs. 0% | Y |

(a) Specific data not reported

(b) 21-item survey with a five-point Likert scale (1 = strongly disagree, 5 = strongly agree); self-reported confidence in pain management knowledge and skills

(c) Mixed model combining five imputations showed a significant improvement in overall confidence while adjusting for participants' sociodemographic background, years of experience, primary job function/clinical role, and professional training level

Our initial search yielded 135 articles. After reviewing the abstracts, 81 were eliminated because they did not meet our final inclusion criteria (Fig. 1). We requested 54 full articles; 31 of these were eliminated based on our exclusion criteria. One article was excluded because it described an intervention already included in our review (i.e., a duplicate article). Therefore, our full review included 23 articles.
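The screening flow above can be tallied as simple arithmetic. One note on the counts: for the totals to balance at 23 included studies, we read the single duplicate article as being counted among the 31 full-text exclusions (30 on criteria plus 1 duplicate); this reading is our assumption, sketched below:

```python
# Reconciling the screening flow described above (Fig. 1).
# Assumption (ours): the one duplicate article is counted within the
# 31 full-text exclusions, so the totals balance at 23 included studies.
identified = 135                 # initial search yield
excluded_at_abstract = 81        # did not meet inclusion criteria
full_text_requested = identified - excluded_at_abstract
full_text_excluded = 31          # 30 on exclusion criteria + 1 duplicate
included_studies = full_text_requested - full_text_excluded

print(full_text_requested, included_studies)  # 54 23
```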
Fig. 1

Abstract and article review and inclusion flowchart


Results

Characteristics of Reviewed Manuscripts

We abstracted descriptive data for each of the 23 studies, including an overview of interventions (i.e., intended intervention recipients, location of delivery, topic of focus), study methods (i.e., design, sampling approach, sample size), and outcome measures. These data are presented in Table 1.

Interventions Assessed

Ten studies described interventions directed specifically toward nurses, five toward primary care providers/general practitioners, seven toward multiple provider types (e.g., physicians, physician assistants, and nurse practitioners), and six toward specialists (e.g., pathologists, oncologists). Eleven studies were set in the USA, two in Australia, two in Canada, two in multiple locations, and one each in Brazil, France, Ghana, Japan, Portugal, and the UK. Sample sizes ranged from 15 to 1521, with most (two-thirds) falling between 48 and 365. Six interventions focused specifically on breast, cervical, or ovarian cancer; four on skin cancer; two on oral cancers; six on multiple cancers; two on unspecified cancers; and one each on prostate, lung, and gastric cancer. Most interventions (17 of 23) were delivered online only and asynchronously. Five blended in-person and online methods, and one was online only with a mix of synchronous and asynchronous content. Appendix B provides additional details on intervention approaches.

Study Methodology

Four of the 23 studies used a post-only design to measure outcomes, 10 used pre-post, five used randomized controlled trials (RCTs), and four used other methods (combined post-only and pre-post, multiple time series, comparison group, and quasi-experimental). Of the pre-post design studies, two had more than one data collection timepoint for the "post" measure, whereas all others assessed outcomes immediately after intervention exposure. Sixteen studies used non-probability sampling, and six used probability sampling; one did not describe its sampling methods.

Outcome Measures

Twenty-one studies measured provider knowledge outcomes, 11 measured provider confidence (i.e., self-efficacy), and 10 measured provider behavior/intention. Thirteen studies measured only one type of outcome (knowledge only: 11; confidence only: 1; behavior/intention only: 1). Most studies measured knowledge only (n = 11) or knowledge and one (n = 8) or two (n = 5) other outcomes. Knowledge was measured using many different study designs: post-only (n = 5), pre-post (n = 9), RCT (n = 4), and other (n = 2; i.e., multiple time series, quasi-experimental). Confidence was measured using post-only (n = 1), pre-post (n = 5), RCT (n = 3), and other (n = 2; i.e., multiple time series, quasi-experimental). Behavior/intention was measured using post-only (n = 2), pre-post (n = 4), RCT (n = 3), and other (n = 1; i.e., quasi-experimental).

Quality Assessment

Using applicable NIH Quality Assessment Tools, eight articles were categorized as high quality, eight as medium quality, and seven as low quality (see Appendix C).

Impact of Interventions

Table 2 presents provider knowledge outcome findings. Table 3 presents provider confidence and behavior/intention outcome findings. Both tables exclude findings from post-only design studies.

Knowledge Findings

Seventeen studies measured knowledge outcomes using a pre-post design [16-32]. Among studies that measured change in knowledge within a group (n = 7), six reported statistically significant gains, with changes in mean knowledge scores ranging from 4 to 24 percentage points. Among studies that measured change in mean knowledge scores between groups (intervention vs. control; n = 6), four reported at least some statistically significant difference in knowledge between intervention and control participants immediately after the intervention. However, the one study that measured change 12 months post-intervention found that this difference was not sustained. Of the four studies that reported other methods of measuring knowledge (increased perceived knowledge, high agreement rates between trainees and experts, increased diagnostic accuracy in a simulated patient encounter, median improvement in knowledge scores), two reported a statistically significant change in knowledge outcomes.
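The within-group changes reported in Table 2 are simple percentage-point differences between post- and pre-intervention means. A minimal sketch (our illustration, not the authors' analysis code), using two of the reported score pairs:

```python
# Percentage-point change in mean knowledge score, pre- to post-intervention,
# as tabulated in Table 2. Illustrative sketch only.
def pp_change(pre: float, post: float) -> float:
    """Return the post-minus-pre difference, rounded to one decimal."""
    return round(post - pre, 1)

# Palmer et al. (2011): 70% pre, 94% post -> 24-point gain
print(pp_change(70, 94))
# Quinn et al. (2019): 75% pre, 79% post -> 4-point gain
print(pp_change(75, 79))
```

Note that statistical significance of such a change depends on the score distribution and sample size, not on the difference alone, which is why Table 2 reports significance separately.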

Provider Confidence Findings

Eight studies measured the impact of their intervention on provider confidence (i.e., self-efficacy, or confidence in the ability to perform the behavior of focus in the intervention) [18, 23, 24, 26, 27, 29, 33, 34]. Five studies measured change in confidence score within group and three between groups (intervention vs. control). Two of the five studies reporting pre-post changes in mean confidence reported a statistically significant change in provider confidence following the intervention [33, 34]. Two of the three studies that calculated the difference in mean confidence scores between intervention and comparison or control groups found statistically significant differences between the groups [24, 27].

Provider Behavior/Intentions Findings

Seven studies measured change in provider behavior/intention [23, 24, 27-29, 33, 35]. Five of these measured change within group and two between groups (intervention vs. control; see Table 3). Two of the four studies measuring pre-post changes in mean behavior or intention scores reported statistically significant increases in behavior following the intervention [28, 35]. One of the three studies that calculated the mean difference in post-intervention behavior between the intervention and comparison or control groups noted significant differences between the groups [27]. Three of these seven studies relied solely on provider self-report of behavior/intention. One study used a combination of provider self-report and observational data.

Discussion

We conducted a systematic review to identify eHealth- and mHealth-based education interventions and to assess their effectiveness in improving cancer care. We identified a total of 23 studies that met the inclusion criteria. Almost all the studies reported knowledge gain as an outcome of the education interventions, whereas only half assessed provider confidence or behavior change. The majority of the studies with knowledge outcomes reported statistically significant improvement, but knowledge change exhibited wide variation, with a range from 4 to 24% among studies that reported percentage change based on assessments before and after the education intervention. Several studies also reported statistically significant changes in confidence levels and self-reported behavior, but multiple studies did not find the intervention to be effective. Similar to knowledge change, there was variation in the behavior change proportions, from 1 to 20%, among studies that reported percentage differences. Overall, we can conclude that there is some evidence that eHealth interventions lead to improvements in cancer care delivery, but this is not a consistent finding across the studies reviewed.

Almost 80% (18/23) of the interventions were delivered via online courses, and the remaining 20% were a blend of online and in-person education. The studies presented in this review do not offer clear insight as to whether multimodal approaches are more effective than stand-alone ones. There is evidence from other settings that the use of multimodal education methods can make teaching more effective [36, 37]. Future studies could compare combinations of approaches and methods to support evidence-based decisions on the selection of multimodal interventions. Furthermore, evidence on the role of mHealth interventions was limited: only one study used SMS messaging, and we did not identify any systematic assessments of education apps. Cancer education apps, like those created by the American Society of Clinical Oncology for self-evaluation, do exist, but formal evaluations of the effectiveness of these tools may not be available in the peer-reviewed literature. Importantly, our findings may also indicate a preference for using online tools to deliver education materials rather than relying on mHealth approaches. Text messaging and apps may be more appropriate for facilitating data collection and providing expert support during clinical care delivery [12, 38].

Almost all the studies identified for this review report on evaluations conducted in high-income settings. The paucity of research in low- and middle-income settings is an important finding from this review. Mortality from cancer remains high in limited-resource settings, and the burden from cancer is projected to grow in low- and middle-income countries [39, 40]. There is an urgent need to improve providers' knowledge of how to prevent, screen for, diagnose, and treat cancers; therefore, more research is needed in low-resource settings to build the evidence base on optimal education interventions [41, 42].

Our review identified several other gaps that should be addressed in future research. First, behavior change was addressed in only a small number of studies, and all but one used self-reported behavior measurement. The ultimate objective of all education interventions is to foster optimal use of guideline-recommended cancer care; as such, intervention evaluations should include observation-based behavior measurement. Second, studies generally reported changes immediately after the intervention was delivered, and longer-term assessments are needed to evaluate the sustainability of the impact of the education delivered. Third, no study included in this review conducted a cost-effectiveness assessment of the interventions to provide guidance for the adoption of the approaches studied. The importance of systematic economic evaluations has been highlighted in a prior review [8], and this is an important omission that should be addressed in future research. Fourth, the overall quality of the studies in this field should be improved: our review assigned a high-quality rating to only about one-third of the studies included in this manuscript. Fifth, our experience compiling the findings for this review reveals the importance of fostering consistent terminology, outcome measures, and metrics so that results from eHealth studies can be pooled consistently. We acknowledge that consistent reporting might not always be feasible because the education interventions and target audiences differ, but attempts at reaching agreement on standardized measures will nevertheless be extremely important for generating collaborative evidence to support the field.

Limitations of This Review

This review has several limitations that should be considered when interpreting the findings. A key drawback is that outcome measures are not always reported consistently in the manuscripts reviewed, which makes it difficult to synthesize the findings into concrete conclusions. Some of the variation in measure selection may reflect differences in the type of education tested, but a more uniform approach would nevertheless be useful. Our review was also limited to cancer, and there may be lessons to learn from the delivery of care for other chronic and noncommunicable conditions. Furthermore, we included only studies published in English, and the review team also excluded pediatric-focused physician education because those studies addressed specific issues that are unlikely to generalize to the population as a whole. eHealth and mHealth technology are progressing rapidly in terms of content displays, interactive graphics and other tools, and virtual reality training approaches; therefore, although we included all manuscripts published as of early 2020, the field will continue to evolve.

Conclusion

eHealth and mHealth interventions show promise, but the evidence is inconsistent. In general, our results indicate some evidence of a positive impact of mHealth interventions on provider knowledge but insufficient findings regarding their impact on provider behavior. Findings from the studies currently available in the literature vary widely regarding the use of eHealth to improve provider delivery of cancer care, which highlights the need for additional methodologically rigorous studies with longer-term follow-up. An essential recommendation from our analysis is the need for consistent terminology, measures, and metrics to synthesize results across studies efficiently, which will build the evidence base required to adopt optimal and cost-effective interventions. Generalizability of the findings is an additional concern, and future studies should ideally evaluate eHealth and mHealth education interventions in a wide variety of settings, including low- and middle-income countries.

1.  Effects on skills and practice from a web-based skin cancer course for primary care providers.

Authors:  Melody J Eide; Maryam M Asgari; Suzanne W Fletcher; Alan C Geller; Allan C Halpern; Waqas R Shaikh; Lingling Li; Gwen L Alexander; Andrea Altschuler; Stephen W Dusza; Ashfaq A Marghoob; Elizabeth A Quigley; Martin A Weinstock
Journal:  J Am Board Fam Med       Date:  2013 Nov-Dec       Impact factor: 2.657

2.  The effectiveness of mHealth for self-management in improving pain, psychological distress, fatigue, and sleep in cancer survivors: a systematic review.

Authors:  Elizabeth Hernandez Silva; Sheleigh Lawler; Danette Langbecker
Journal:  J Cancer Surviv       Date:  2019-01-11       Impact factor: 4.442

3.  Effectiveness of Online Cancer Education for Nurses and Allied Health Professionals; a Systematic Review Using Kirkpatrick Evaluation Framework.

Authors:  Karen Campbell; Vanessa Taylor; Sheila Douglas
Journal:  J Cancer Educ       Date:  2019-04       Impact factor: 2.037

4.  Impact of a Non-small Cell Lung Cancer Educational Program for Interdisciplinary Teams.

Authors:  Septimiu Murgu; Robb Rabito; Greg Lasko; Chad Jackson; Mari Mino-Kenudson; David S Ettinger; Suresh S Ramalingam; Eric S Edell
Journal:  Chest       Date:  2017-12-12       Impact factor: 9.410

5.  The value of mHealth for managing chronic conditions.

Authors:  Saligrama Agnihothri; Leon Cui; Mohammad Delasay; Balaraman Rajan
Journal:  Health Care Manag Sci       Date:  2018-10-31

6.  Evidence on feasibility and effective use of mHealth strategies by frontline health workers in developing countries: systematic review.

Authors:  Smisha Agarwal; Henry B Perry; Lesley-Anne Long; Alain B Labrique
Journal:  Trop Med Int Health       Date:  2015-05-14       Impact factor: 2.622

7.  Impact of mHealth chronic disease management on treatment adherence and patient outcomes: a systematic review.

Authors:  Saee Hamine; Emily Gerth-Guyette; Dunia Faulx; Beverly B Green; Amy Sarah Ginsburg
Journal:  J Med Internet Res       Date:  2015-02-24       Impact factor: 5.428

8.  Scoping review assessing the evidence used to support the adoption of mobile health (mHealth) technologies for the education and training of community health workers (CHWs) in low-income and middle-income countries.

Authors:  Niall Winters; Laurenz Langer; Anne Geniets
Journal:  BMJ Open       Date:  2018-07-30       Impact factor: 2.692

9.  The International Society of Urological Pathology Education web-a web-based system for training and testing of pathologists.

Authors:  Lars Egevad; Brett Delahunt; Hemamali Samaratunga; Katia Rm Leite; Gennady Efremov; Bungo Furusato; Ming Han; Laura Jufe; Toyonori Tsuzuki; Zhe Wang; Jonas Hörnblad; Mark Clements
Journal:  Virchows Arch       Date:  2019-02-21       Impact factor: 4.064

10.  To the Lighthouse: Embracing a Grand Challenge for Cancer Education in the Digital Age.

Authors:  David Wiljer
Journal:  J Cancer Educ       Date:  2020-06       Impact factor: 1.771

