Tobias N Bonten, Anneloek Rauwerdink, Jeremy C Wyatt, Marise J Kasteleyn, Leonard Witkamp, Heleen Riper, Lisette JEWC van Gemert-Pijnen, Kathrin Cresswell, Aziz Sheikh, Marlies P Schijven, Niels H Chavannes.
Abstract
BACKGROUND: Despite the increasing use of and high expectations for digital health solutions, scientific evidence on the effectiveness of electronic health (eHealth), as well as on other aspects such as usability and accuracy, lags behind. eHealth solutions are complex interventions that require a wide array of evaluation approaches capable of answering the many different questions arising during the consecutive phases of eHealth development and implementation. However, evaluators appear to struggle to choose suitable evaluation approaches for a specific study phase.
Keywords: concept mapping; digital health; eHealth; evaluation; health technology assessment; mHealth; methodology; scoping review; study design
Year: 2020 PMID: 32784173 PMCID: PMC7450369 DOI: 10.2196/17774
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram of the article selection process.
Table 1. Articles included in the systematic scoping review according to the evaluation approach adopted.
| Reference | Year | Country | Evaluation approach |
| Chiasson et al [ | 2007 | United Kingdom | Action research |
| Campbell and Yue [ | 2014 | United States | Adaptive design; propensity score |
| Law and Wason [ | 2014 | United Kingdom | Adaptive design |
| Mohr et al [ | 2015 | United States | Behavioral intervention technology (BIT) model in Trials of Intervention Principles; SMARTa |
| Van Gemert-Pijnen et al [ | 2011 | Netherlands | CeHResb Roadmap |
| Alpay et al [ | 2018 | Netherlands | CeHRes Roadmap; Fog model; Oinas-Kukkonen model |
| Shaw [ | 2002 | United Kingdom | CHEATSc: a generic ICTd evaluation framework |
| Kushniruk and Patel [ | 2004 | Canada | Cognitive task analysis; user-centered design |
| Jaspers [ | 2009 | Netherlands | Cognitive walkthrough; heuristic evaluation; think-aloud method |
| Khajouei et al [ | 2017 | Iran | Cognitive walkthrough; heuristic evaluation |
| Van Engen-Verheul et al [ | 2015 | Netherlands | Concept mapping |
| Mohr et al [ | 2013 | United States | CEEBITe framework |
| Nicholas et al [ | 2016 | Australia | CEEBIT framework; single-case experiment (N=1) |
| Bongiovanni-Delarozière and Le Goff-Pronost [ | 2017 | France | Economic evaluation; HASf methodological framework |
| Fatehi et al [ | 2017 | Australia | Five-stage model for comprehensive research on telehealth |
| Baker et al [ | 2014 | United States | Fractional-factorial (ANOVAg) design; SMART |
| Collins et al [ | 2007 | United States | Fractional-factorial (ANOVA) design; MOSTh; SMART |
| Chumbler et al [ | 2008 | United States | Interrupted time-series analysis; matched cohort study design |
| Grigsby et al [ | 2006 | United States | Interrupted time-series analysis; pretest-posttest design |
| Liu and Wyatt [ | 2001 | United Kingdom | Interrupted time-series analysis |
| Kontopantelis et al [ | 2015 | United Kingdom | Interrupted time-series analysis |
| Catwell and Sheikh [ | 2009 | United Kingdom | Life cycle–based approach |
| Han [ | 2011 | United States | Life cycle–based approach |
| Sieverink [ | 2017 | Netherlands | Logfile analysis |
| Kramer-Jackman and Popkess-Vawter [ | 2008 | United States | Method for technology-delivered health care measures |
| Wilson et al [ | 2018 | Canada | mHealthi agile and user-centered research and development lifecycle |
| Jacobs and Graham [ | 2016 | United States | mHealth development and evaluation framework; MOST |
| Dempsey et al [ | 2015 | United States | Microrandomized trial; single-case experiment (N=1) |
| Klasnja et al [ | 2015 | United States | Microrandomized trial; single-case experiment (N=1) |
| Law et al [ | 2016 | United Kingdom | Microrandomized trial |
| Walton et al [ | 2018 | United States | Microrandomized trial |
| Caffery et al [ | 2017 | Australia | Mixed methods |
| Lee and Smith [ | 2012 | United States | Mixed methods |
| Kidholm et al [ | 2017 | Denmark | MASTj |
| Kidholm et al [ | 2018 | Denmark | MAST |
| Kummervold et al [ | 2012 | Norway | Noninferiority trial |
| May [ | 2006 | United Kingdom | Normalization process theory and checklist |
| Borycki et al [ | 2016 | Canada | Participatory design; user-centered design |
| Clemensen et al [ | 2017 | Denmark | Participatory design |
| Glasgow [ | 2007 | United States | Practical clinical trial; RE-AIMk framework |
| Danaher and Seeley [ | 2007 | United States | Pragmatic randomized controlled trial; SMART; Stage model of behavioral therapies research |
| Sadegh et al [ | 2018 | Iran | Proposed framework for evaluating mHealth services |
| Harker and Kleijnen [ | 2012 | United Kingdom | Rapid review |
| Glasgow et al [ | 2014 | United States | RE-AIM framework |
| Almirall et al [ | 2014 | United States | SMART |
| Ammenwerth et al [ | 2012 | Austria | Simulation study |
| Jensen et al [ | 2015 | Denmark | Simulation study |
| Dallery et al [ | 2013 | United States | Single case experiment (N=1) |
| Cresswell and Sheikh [ | 2014 | United Kingdom | Sociotechnical evaluation |
| Kaufman et al [ | 2006 | United States | Stead et al [ |
| Brown and Lilford [ | 2006 | United Kingdom | Stepped wedge (cluster) randomized trial |
| Hussey and Hughes [ | 2007 | United States | Stepped wedge (cluster) randomized trial |
| Spiegelman [ | 2016 | United States | Stepped wedge (cluster) randomized trial |
| Langbecker et al [ | 2017 | Australia | Survey methods |
| Rönnby et al [ | 2018 | Sweden | Technology acceptance model |
| Bastien [ | 2010 | France | User-based evaluation |
| Nguyen et al [ | 2007 | Canada | Waitlist control group design |
aSMART: Sequential Multiple Assignment Randomized Trial.
bCeHRes: Centre for eHealth Research and Disease management.
cCHEATS: clinical, human and organizational, educational, administrative, technical, and social (a generic ICT evaluation framework).
dICT: information and communication technology.
eCEEBIT: continuous evaluation of evolving behavioral intervention technology.
fHAS: Haute Autorité de Santé (French National Authority for Health).
gANOVA: analysis of variance.
hMOST: multiphase optimization strategy.
imHealth: mobile health.
jMAST: Model of Assessment of Telemedicine Applications.
kRE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance.
Table 2. Characteristics of study participants for each phase of the concept mapping study.
| Characteristic | Brainstorm phase | Sorting phase | Rating phase |
| Participants (n) | 43a | 27 | 32b |
| Age (years), mean (SD) | 39.9 (12.1) | 39.0 (12.6) | 40.5 (13) |
| Female gender, n (%) | 21 (49) | 16 (53) | 16 (50) |
| Research experience (years), mean (SD) | 13.5 (10.8) | 12.6 (10.5) | 13.9 (11) |
| Working in university medical center, n (%) | 37 (73) | 26 (72) | 27 (71) |
| Experience with eHealthc, n (%) | | | |
| During clinic work, not EHRd | 4 (7) | 3 (9) | 3 (8) |
| During research | 32 (59) | 21 (60) | 23 (59) |
| During clinic work and research | 10 (19) | 7 (20) | 8 (21) |
| No | 1 (2) | 0 (0) | 1 (3) |
| Other | 7 (13) | 4 (11) | 4 (10) |
| Experience with quantitative research (grade 1-10), n (%) | | | |
| Grade 1-2 | 0 (0) | 0 (0) | 0 (0) |
| Grade 3-4 | 1 (2) | 1 (4) | 1 (3) |
| Grade 5-6 | 2 (5) | 1 (4) | 1 (3) |
| Grade 7-8 | 29 (71) | 17 (63) | 21 (68) |
| Grade 9-10 | 9 (22) | 8 (30) | 8 (26) |
| Experience with qualitative research (grade 1-10), n (%) | | | |
| Grade 1-2 | 0 (0) | 0 (0) | 0 (0) |
| Grade 3-4 | 1 (2) | 1 (4) | 1 (3) |
| Grade 5-6 | 15 (37) | 8 (30) | 11 (36) |
| Grade 7-8 | 19 (46) | 15 (56) | 15 (48) |
| Grade 9-10 | 6 (15) | 3 (11) | 4 (13) |
| Educational background, n (%) | | | |
| Biology | 2 (3) | 1 (2) | 1 (2) |
| Data science | 2 (3) | 1 (2) | 1 (2) |
| Economics | 1 (1) | 1 (2) | 1 (2) |
| Medicine | 24 (35) | 14 (30) | 18 (34) |
| (Health) Science | 9 (13) | 6 (13) | 7 (13) |
| Industrial design | 1 (1) | 1 (2) | 1 (2) |
| Informatics | 4 (6) | 3 (7) | 3 (6) |
| Communication and culture | 4 (6) | 3 (7) | 3 (6) |
| Psychology | 14 (21) | 11 (24) | 12 (23) |
| Other | 7 (10) | 5 (11) | 6 (11) |
a43 participants participated in the brainstorm phase, but 41 participants answered the characteristics questions.
bOne of the 32 participants did not finish the third rating question: “importance for proving effectiveness.”
ceHealth: electronic health.
dEHR: electronic health record.
Figure 2. Venn diagram showing the origin of the 75 unique evaluation approaches.
Table 3. Results of step 2: concept mapping study.
| Evaluation approacha | Use of approachb, % “yes” | Familiarityc, mean | Familiarityc, % of 3+4 (n/N) | Proving effectivenessd, mean | Proving effectivenessd, % of 3+4 (n/N) |
| Cluster average | 58 (SD 32.7) | 2.9 (SD 0.5) | — | 2.3 (SD 0.3) | — |
| 3. Feasibility studye | 94 | 3.6 | 88 (28/42) | 2.6 | 52 (16/31) |
| 4. Questionnairee | 100 | 3.4 | 84 (27/63) | 2.5 | 52 (16/31) |
| 8. Single-case experiment or n-of-1 study (N=1) | 28 | 2.5 | 43 (13/60) | 2.0 | 27 (8/30) |
| 12. Action research study | 41 | 2.6 | 50 (15/58) | 2.3 | 38 (11/29) |
| 44. A/B testing | 25 | 2.5 | 45 (13/58) | 2.2 | 36 (10/28) |
| Cluster average | 37 (SD 29.1) | 2.5 (SD 0.4) | — | 2.1 (SD 0.3) | — |
| 5. Focus group (interview) | 91 | 3.2 | 81 (26/62) | 2.3 | 32 (10/31) |
| 6. Interview | 94 | 3.1 | 75 (24/62) | 2.3 | 35 (11/31) |
| 23. Think-aloud method | 66 | 2.6 | 52 (15/59) | 1.7 | 14 (4/29) |
| 25. Cognitive walkthrough | 31 | 2.4 | 37 (11/59) | 1.8 | 17 (5/30) |
| 27. eHealthf Analysis and Steering Instrument | 12 | 2.4 | 55 (16/58) | 2.4 | 48 (14/29) |
| 28. Model for Assessment of Telemedicine applications (MAST) | 22 | 2.5 | 48 (14/59) | 2.4 | 37 (11/30) |
| 29. Rapid review | 31 | 2.0 | 23 (7/58) | 1.8 | 7 (2/29) |
| 30. eHealth Needs Assessment Questionnaire (ENAQ) | 6 | 2.4 | 45 (13/58) | 2.0 | 24 (7/29) |
| 31. Evaluative Questionnaire for eHealth Tools (EQET) | 3 | 2.4 | 52 (15/58) | 2.3 | 41 (12/29) |
| 32. Heuristic evaluation | 19 | 2.2 | 31 (9/57) | 2.1 | 24 (7/29) |
| 33. Critical incident technique | 9 | 2.0 | 24 (7/59) | 1.8 | 4 (1/28) |
| 36. Systematic reviewe | 94 | 3.1 | 67 (20/62) | 2.9 | 69 (20/29) |
| 39. User-centered design methodse | 53 | 3.2 | 73 (22/62) | 2.5 | 50 (14/28) |
| 43. Vignette study | 41 | 2.2 | 31 (9/58) | 1.6 | 7 (2/28) |
| 45. Living lab | 34 | 2.5 | 41 (12/58) | 2.3 | 54 (15/28) |
| 50. Method for technology-delivered health care measures | 9 | 2.3 | 39 (11/58) | 2.1 | 25 (7/28) |
| 54. Cognitive task analysis (CTA) | 16 | 2.1 | 23 (7/59) | 1.9 | 18 (5/28) |
| 60. Simulation study | 41 | 2.5 | 50 (15/60) | 2.2 | 34 (10/29) |
| 62. Sociotechnical evaluation | 22 | 2.3 | 37 (11/60) | 2.1 | 29 (8/28) |
| Cluster average | 11 (SD 4) | 2.3 (SD 0.2) | — | 2.2 (SD 0.2) | — |
| 21. Multiphase Optimization Strategy (MOST) | 6 | 2.3 | 45 (13/58) | 2.3 | 39 (11/28) |
| 26. Continuous evaluation of evolving behavioral intervention technologies (CEEBIT) framework | 6 | 2.4 | 48 (14/60) | 2.3 | 38 (11/29) |
| 40. RE-AIMg frameworke | 19 | 2.6 | 61 (17/59) | 2.4 | 52 (14/27) |
| 46. Normalization process model | 9 | 2.0 | 25 (7/57) | 1.9 | 18 (5/28) |
| 48. CeHResh Roadmap | 16 | 2.4 | 43 (12/58) | 2.3 | 41 (11/27) |
| 49. Stead et al [ | 12 | 2.2 | 38 (11/58) | 2.1 | 22 (6/27) |
| 51. CHEATSi: a generic information communication technology evaluation framework | 6 | 2.3 | 41 (12/58) | 2.1 | 26 (7/27) |
| 52. Stage Model of Behavioral Therapies Research | 9 | 1.9 | 21 (6/58) | 2.0 | 22 (6/27) |
| 53. Life cycle–based approach to evaluation | 12 | 2.3 | 45 (13/58) | 2.0 | 21 (6/28) |
| Cluster average | 45 (SD 23) | 2.6 (SD 0.3) | — | 2.6 (SD 0.4) | — |
| 1. Mixed methodse | 87 | 3.2 | 81 (26/63) | 2.9 | 65 (20/31) |
| 2. Pragmatic randomized controlled triale | 62 | 3.1 | 77 (24/63) | 3.3 | 83 (25/30) |
| 7. Cohort studye (retrospective and prospective) | 81 | 2.7 | 58 (18/61) | 2.5 | 58 (18/31) |
| 9. Randomized controlled triale | 91 | 3.3 | 71 (22/63) | 3.3 | 74 (23/31) |
| 10. Crossover studye | 44 | 2.7 | 57 (17/61) | 2.7 | 59 (17/29) |
| 11. Case series | 50 | 2.1 | 20 (6/60) | 1.8 | 10 (3/29) |
| 13. Pretest-posttest study designe | 62 | 2.6 | 45 (14/60) | 2.5 | 50 (15/30) |
| 14. Interrupted time-series study | 44 | 2.5 | 43 (13/59) | 2.7 | 59 (17/29) |
| 15. Nested randomized controlled trial | 31 | 2.3 | 37 (11/59) | 2.8 | 55 (16/29) |
| 16. Stepped wedge trial designe | 56 | 2.8 | 70 (21/60) | 3.2 | 90 (26/29) |
| 17. Cluster randomized controlled triale | 50 | 2.8 | 60 (18/60) | 3.1 | 69 (20/29) |
| 19. Trials of intervention principles (TIPs)e | 23 | 2.5 | 42 (13/61) | 2.5 | 43 (13/30) |
| 20. Sequential Multiple Assignment Randomized Trial (SMART) | 9 | 2.4 | 45 (13/58) | 2.7 | 62 (18/29) |
| 35. (Fractional-)factorial design | 22 | 2.3 | 45 (13/58) | 2.2 | 36 (10/28) |
| 37. Controlled before-after study (CBA)e | 37 | 2.6 | 50 (15/60) | 2.4 | 52 (15/29) |
| 38. Controlled clinical trial/nonrandomized controlled trial (CCT/NRCT)e | 47 | 2.9 | 70 (21/60) | 2.9 | 71 (20/28) |
| 41. Preference clinical trial (PCT) | 19 | 2.1 | 24 (7/58) | 2.1 | 25 (7/28) |
| 42. Microrandomized trial | 9 | 2.2 | 24 (7/59) | 2.4 | 50 (14/28) |
| 55. Cross-sectional study | 72 | 2.5 | 40 (12/60) | 2.1 | 29 (8/28) |
| 56. Matched cohort study | 37 | 2.2 | 30 (9/59) | 2.3 | 46 (13/28) |
| 57. Noninferiority trial designe | 53 | 2.6 | 47 (14/60) | 2.6 | 48 (14/29) |
| 58. Adaptive designe | 19 | 2.6 | 52 (15/58) | 2.5 | 50 (14/28) |
| 59. Waitlist control group design | 34 | 2.1 | 28 (8/59) | 2.0 | 32 (9/28) |
| 61. Propensity score methodology | 31 | 2.1 | 30 (9/59) | 2.0 | 21 (6/29) |
| Cluster average | 54 (SD 28) | 2.8 (SD 0.5) | — | 2.6 (SD 0.5) | — |
| 18. Cost-effectiveness analysis | 81 | 3.4 | 87 (27/63) | 3.2 | 70 (21/30) |
| 22. Methods comparison study | 16 | 2.0 | 17 (5/59) | 2.0 | 21 (6/28) |
| 24. Patient-reported outcome measures (PROMs)e | 84 | 3.1 | 80 (24/60) | 2.9 | 73 (22/30) |
| 34. Transaction logfile analysis | 25 | 2.4 | 45 (13/57) | 2.1 | 21 (6/28) |
| 47. Big data analysise | 62 | 3.0 | 73 (22/61) | 2.8 | 59 (17/29) |
aApproach identification numbers correspond with the numbers used in Figure 3 and Figure 4.
bBased on the rating question: “does your research group use this approach, or did it do so in the past?”; the percentage of “yes” responses is shown.
cBased on the rating question: “according to your opinion, how important is it that researchers with an interest in eHealth will become familiar with this approach?”; average rating scores ranging from unimportant (1) to absolutely essential (4) and percentages of categories 3 plus 4 are represented.
dThe “proving effectiveness” column corresponds with the rating question: “according to your opinion, how important is the approach for proving the effectiveness of eHealth?” Average rating scores ranging from unimportant (1) to absolutely essential (4) and percentages of categories 3 plus 4 are presented.
eThis approach scored above average on the rating questions “familiarity with the approach” and “proving effectiveness” and is therefore plotted in the upper right quadrant of the Go-Zone graph (Figure 3).
feHealth: electronic health.
gRE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance.
hCeHRes: Centre for eHealth Research and Disease management.
iCHEATS: clinical, human and organizational, educational, administrative, technical, and social (a generic ICT evaluation framework).
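The rating summaries above follow a simple aggregation described in the footnotes: each approach receives a mean score on a 1 (unimportant) to 4 (absolutely essential) scale plus the share of raters choosing category 3 or 4, and an approach that scores above the average on both rating questions lands in the upper right quadrant of the Go-Zone graph. A minimal Python sketch of that aggregation, using hypothetical ratings purely for illustration (the approach names and numbers below are not the study's data):

```python
from statistics import mean

def summarize(ratings):
    """Summarize 1-4 importance ratings as in Table 3:
    mean score and percentage of raters choosing 3 or 4."""
    n_high = sum(1 for r in ratings if r >= 3)
    return round(mean(ratings), 1), round(100 * n_high / len(ratings))

# Hypothetical ratings for two approaches (1 = unimportant ... 4 = absolutely essential)
familiarity = {"Mixed methods": [4, 3, 3, 2, 4, 3], "Case series": [2, 1, 2, 3, 2, 1]}
effectiveness = {"Mixed methods": [3, 3, 4, 2, 3, 3], "Case series": [1, 2, 2, 1, 2, 1]}

# Mean score per approach on each rating question
fam_means = {k: summarize(v)[0] for k, v in familiarity.items()}
eff_means = {k: summarize(v)[0] for k, v in effectiveness.items()}

# Go-Zone logic: approaches above the average on BOTH questions
# fall in the upper right quadrant of the Go-Zone graph.
fam_avg = mean(fam_means.values())
eff_avg = mean(eff_means.values())
go_zone = [k for k in fam_means if fam_means[k] > fam_avg and eff_means[k] > eff_avg]
```

With these hypothetical inputs, `go_zone` contains only the approach rated highly on both questions, mirroring how the upper right quadrant of Figure 3 singles out approaches that are both worth knowing and important for proving effectiveness.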
Figure 3. Go-Zone graph. The numbers refer to the evaluation approaches listed in Table 3.
Figure 4. Concept map showing evaluation approaches grouped into five labeled clusters. The numbers refer to the approaches listed in Table 3.
Figure 5. The “eHealth evaluation cycle” derived from empirical results of the scoping literature review and concept mapping study.