Literature DB >> 35847525

Time to act mature – Gearing eHealth evaluations towards technology readiness levels.

Stephanie Jansen-Kosterink1,2, Marijke Broekhuis1,2, Lex van Velsen1,2.   

Abstract

It is challenging to design a proper eHealth evaluation. In our opinion, the evaluation of eHealth should be a continuous process, wherein increasingly mature versions of the technology are put to the test. In this article, we present a model for continuous eHealth evaluation, geared towards technology maturity. Technology maturity is best determined via Technology Readiness Levels, of which there are nine, divided into three phases: the research, development, and deployment phases. For each phase, we list and discuss applicable activities and outcomes on the end-user, clinical, and societal front. Instead of focusing on a single perspective, we recommend blending the end-user, health, and societal perspectives. With this article we aim to contribute to the methodological debate on how to create the optimal eHealth evaluation design.
© The Author(s) 2022.

Keywords:  continuous process; design; eHealth; evaluation; perspectives; technology readiness level

Year:  2022        PMID: 35847525      PMCID: PMC9280845          DOI: 10.1177/20552076221113396

Source DB:  PubMed          Journal:  Digit Health        ISSN: 2055-2076


Introduction

The World Health Organization (WHO) stressed, in its Digital Health guidelines, the need for rigorous evaluation of eHealth in order to generate evidence and to promote the appropriate integration and use of technologies for improving health and reducing health inequalities. In the scientific community that focuses on eHealth evaluation, there is no consensus on how to create the best evaluation design.[2,3] According to the standards of evidence-based medicine, large prospective randomized controlled trials (RCTs) are considered the gold standard for evaluating the safety and effectiveness of medical interventions. As the characteristics of an RCT do not match well with the evaluation of eHealth, it is currently acknowledged among experts that there is an urgent need for other evaluation designs.[5-7] This makes it challenging to perform a proper eHealth evaluation, which hampers the subsequent implementation of eHealth in daily clinical practice.[8,9]

In this paper, we define eHealth according to Eysenbach (2001), not just as a technology, but as a concept. Eysenbach’s definition of eHealth is: “An emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology.”

To streamline the set-up of eHealth evaluations, various frameworks have been developed.[3,11] The most widely used eHealth evaluation framework in European eHealth studies is the Model for Assessment of Telemedicine (MAST). This model is based on the principles of Health Technology Assessment (HTA) and is used to assess the effects and costs of eHealth from a multidimensional perspective.
The strong points of MAST are the involvement of all the actors and the assessment of outcomes in seven domains: (1) health problem and description of the application; (2) safety; (3) clinical effectiveness; (4) patient perspectives; (5) economic aspects; (6) organizational aspects; and (7) socio-cultural, ethical, and legal aspects. Another commonly used framework is the five-stage model for comprehensive research on telehealth by Fatehi et al. This framework outlines five important stages for an eHealth intervention: concept development, service design, pre-implementation, implementation, and post-implementation. By outlining these stages, this framework addresses the difference between the assessment of prototypes and the evaluation of mature technology. The assessment of prototypes helps to identify the required improvements, while the evaluation of a mature technology aims to measure the overall success factors and performance after implementation. The endorsement of an iterative approach and the focus on multiple perspectives are strong points of these current frameworks for streamlining the set-up of eHealth evaluations.

While these frameworks are useful, we foresee three major limitations. The first limitation is that the current frameworks are only applicable to fully mature technologies and offer no solution for technology that is still in development; their applicability is therefore limited. The second limitation is that these frameworks do not provide a clear method for determining technology maturity. When these frameworks are used with immature technologies to evaluate the value of an eHealth service, results are likely to be overly negative or, at the very least, biased. Finally, the third limitation is the over-representation of the clinical perspective: most of the articles that report on the use of these frameworks only present the results of a single perspective.
In previous eHealth evaluation studies, the clinical perspective is over-represented, and findings related to usability, user experience, technology acceptance, and costs are rarely addressed. To overcome these limitations, and based on our experience within the field of eHealth evaluation, we present our position on eHealth evaluations and a model for the continuous evaluation of eHealth, aligned with technology maturity levels and incorporating different evaluation perspectives.

Our position towards eHealth evaluation

Evaluation, defined here as the collection, interpretation, and presentation of information in order to determine the value of a result or process, becomes both a possibility and a necessity as soon as technology development starts. Evaluation should be a continuous process, whereby the evaluation set-up is geared towards the maturity of the technology. There is no need to wait until the technology is mature: the evaluation of the technology can start from the first concept. In other disciplines, such as software development, in which agile Scrum is a common approach, continuous evaluation is standard practice. Furthermore, we think that, instead of focusing on a single perspective, evaluations should incorporate a multitude of complementary evaluation perspectives. In our opinion, these perspectives are (1) the end-user, (2) the health, and (3) the societal perspective. The end-user perspective focuses on the task-technology fit (which differs per type of end-user), usability, the user experience (UX), and technology acceptance, to ensure that a technology is suitable for the intended end-users and their context; the health (or clinical) perspective should safeguard the health benefits that one derives from using the technology; and the societal perspective should ensure that the technology can be implemented with the support of relevant stakeholders, and is durable.

Technology readiness levels

The maturity of a technology can be determined based on technology readiness levels (TRLs). TRLs are a widely accepted method to assess the maturity level of a technology, also in the context of eHealth.[17-19] These levels (Figure 1) were developed by NASA in the early 1970s as a means to determine whether emerging technology is suitable for space exploration. In total, there are nine levels, divided into three phases: the research, development, and deployment phase. With TRLs we can clearly communicate the level of maturity of a technology and determine whether the technology is ready for tests or evaluations in a real-world setting. When a technology consists of different modules, the weakest (or most immature) module determines the TRL.
Figure 1.

Technology readiness level scale.
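The rule that the weakest module determines a system's TRL can be illustrated with a short sketch. The module names and the function are hypothetical, invented here for illustration; they do not come from the article:

```python
# Determine the overall TRL of a technology composed of several modules.
# Per the rule above, the least mature (lowest-TRL) module sets the
# readiness of the whole system.

def system_trl(module_trls: dict[str, int]) -> int:
    """Return the overall TRL: the minimum across all modules."""
    if not module_trls:
        raise ValueError("at least one module is required")
    for name, trl in module_trls.items():
        if not 1 <= trl <= 9:
            raise ValueError(f"module {name!r} has invalid TRL {trl}")
    return min(module_trls.values())

# Hypothetical eHealth service: a mature web portal and clinician
# dashboard, but a wearable sensor module still in development.
modules = {"web portal": 8, "clinician dashboard": 7, "wearable sensor": 4}
print(system_trl(modules))  # -> 4: the sensor keeps the system at TRL 4
```

The practical consequence, as the article notes, is that a single immature component holds back the evaluation design for the whole service: it should still be assessed with development-phase methods, not deployment-phase ones.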

A model for continuous eHealth evaluation

Our model for continuous eHealth evaluation addresses both the maturity of the technology as the starting point for eHealth evaluations and the inclusion of the different evaluation perspectives. An overview of the suggested activities for the three perspectives in each phase is provided in Table 1.
Table 1.

An overview of the activities on the end-user, health, and societal perspective for the research, development, and deployment phase.

| Phase       | TRL   | End-user perspective | Health perspective | Societal perspective |
|-------------|-------|----------------------|--------------------|----------------------|
| Research    | TRL 1 | Testing of basic principles with relevant stakeholders. | | |
| Research    | TRL 2 | Testing of basic concept to obtain fit between use and technology. | | |
| Research    | TRL 3 | Identifying the merits of the technology. | | |
| Development | TRL 4 | Small-scale usability/UX studies in a lab setting, to test prototype components. | | |
| Development | TRL 5 | Small-scale usability/UX studies in a lab setting, to test the integrated system. | | |
| Development | TRL 6 | | Clinical study into the use, acceptance, and potential health benefits of the technology within the daily clinical context. | |
| Deployment  | TRL 7 | | Large-scale clinical study into the use, acceptance, health benefits, and safety of the technology within the daily clinical context. | Discussions with relevant stakeholders to assess the forecast of financial and extra-financial value. |
| Deployment  | TRL 8 | | Long-term monitoring of the health benefits and safety of the technology within the broad clinical context. | Strengthen the model of financial and extra-financial value with the outcomes of clinical studies. |
| Deployment  | TRL 9 | | | Strengthen the model of financial and extra-financial value with the outcomes of long-term monitoring. |
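The model's mapping from TRL to phase and active evaluation perspectives can be sketched as a simple lookup. This is an illustrative reading of Table 1, not code from the article; the function name and return shape are invented here:

```python
# Sketch: for a given TRL (1-9), return the model's phase and the
# evaluation perspectives that are in focus at that level.

def evaluation_focus(trl: int) -> tuple[str, list[str]]:
    """Map a TRL to (phase, active perspectives) per Table 1."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if trl <= 3:                       # TRL 1-3: concept testing with end-users
        return ("research", ["end-user"])
    if trl <= 5:                       # TRL 4-5: lab usability/UX studies
        return ("development", ["end-user"])
    if trl == 6:                       # TRL 6: clinical study in daily context
        return ("development", ["health"])
    if trl <= 8:                       # TRL 7-8: clinical studies plus value forecasting
        return ("deployment", ["health", "societal"])
    return ("deployment", ["societal"])  # TRL 9: long-term value model

print(evaluation_focus(2))  # -> ('research', ['end-user'])
print(evaluation_focus(7))  # -> ('deployment', ['health', 'societal'])
```

Such a lookup makes the model's core claim concrete: the evaluation design is selected from the technology's maturity, rather than being chosen independently of it.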

Research phase

During the research phase, the technology is immature and the new concept, often in the form of a low-fidelity prototype, is discussed with potential end-users (end-user perspective) (e.g. as in van Velsen et al. and Jansen-Kosterink et al.). These discussions aim to gauge the end-users’ reactions to the basic concepts and main functionality of the prototype. The main aim of the continuous evaluations in the research phase is to optimize the new concept and technology. As the technology mainly consists of ideas and simple prototypes at this stage, applying an iterative approach in this phase is crucial. Quick rounds of testing-redesign-testing should ensure the proper focus of the innovation. The work of Schnall and colleagues[22-24] on their Health Information Technology Usability Evaluation Scale (Health-ITUES) fits very well with this phase.

Development phase

Within this phase, the technology evolves from a prototype towards a more mature application, and end-users can interact with a high-fidelity prototype. Small-scale usability tests and short-term clinical studies in a controlled setting should be conducted to identify usability issues and to assess use, acceptance, and potential health benefits (e.g. as in Olde Keizer et al., 2019). The outcomes of the technology-oriented evaluations (e.g. usability tests) should feed an iterative redesign process, in which the technology is optimized. The outcomes of the short-term clinical studies help to compose hypotheses concerning health benefits for subsequent evaluations. In addition to these activities, discussions with relevant stakeholders can be started to assess the forecast of financial and extra-financial value.

Deployment phase

At this stage, the technology is almost ready for market launch. There are no more critical usability issues left, and the next step is a large-scale clinical study combined with a summative usability study in a real-life setting. This clinical study could be an RCT to assess the safety and clinical effectiveness of the technology in comparison to usual care in daily clinical practice (e.g. as in Kosterink et al.). To comply with national or international legislation, the technology needs to be certified based on the outcomes of these studies, such as a CE marking in Europe. In addition, based on the outcomes of these studies, the forecast of financial and extra-financial value can be validated and finalized. During the deployment phase, there is little remaining focus on research and development, although it remains important to monitor the long-term health benefits and safety of the technology within the broad clinical context, for instance via a large cohort study. During such a study, the long-term financial and extra-financial value also needs to be assessed (e.g. as in Talboom et al.), so as to become aware of additional exploitation opportunities.

Discussion

The evaluation of eHealth should be a continuous process, based on the maturity of the technology, and should focus on the end-user perspective, the health perspective, and the societal perspective. The focus of an evaluation should be aligned with the maturity of the technology that is being put to the test. The use of TRLs and their alignment with evaluation perspectives is what mainly distinguishes our model from other evaluation models for eHealth. These models only focus on one perspective,[22-24] are only applicable to mature technology,[12,15] or do not specify how to assess the maturity of a technology.[14,28] Our model for the continuous evaluation of eHealth is based on our experience within the field of eHealth evaluation and the lessons we have learned during our involvement in various national and international eHealth projects. However, since this model reflects a vision on eHealth evaluation, it cannot be proven true; instead, case studies should inform us of its worth and the opportunities for improvement. Additionally, the environment and technical infrastructure in which an eHealth technology is embedded play a role. How does environmental and infrastructure maturity affect evaluation? While we consider these factors to be aspects of technology maturity, it would be interesting to see studies that aim to distinguish among the different types of maturity. We hope that the research community sees this article as a source of inspiration to combine evaluation approaches with TRLs and will share their experiences with us.
References (24 in total)

1.  The HTA core model: a novel method for producing and reporting health technology assessments.

Authors:  Kristian Lampe; Marjukka Mäkelä; Marcial Velasco Garrido; Heidi Anttila; Ilona Autti-Rämö; Nicholas J Hicks; Björn Hofmann; Juha Koivisto; Regina Kunz; Pia Kärki; Antti Malmivaara; Kersti Meiesaar; Päivi Reiman-Möttönen; Inger Norderhaug; Iris Pasternack; Alberto Ruano-Ravina; Pirjo Räsänen; Ulla Saalasti-Koskinen; Samuli I Saarni; Laura Walin; Finn Børlum Kristensen
Journal:  Int J Technol Assess Health Care       Date:  2009-12       Impact factor: 2.188

Review 2.  Determinants of successful telemedicine implementations: a literature study.

Authors:  Tom H F Broens; Rianne M H A Huis in't Veld; Miriam M R Vollenbroek-Hutten; Hermie J Hermens; Aart T van Halteren; Lambert J M Nieuwenhuis
Journal:  J Telemed Telecare       Date:  2007       Impact factor: 6.184

3.  A user-centered model for designing consumer mobile health (mHealth) applications (apps).

Authors:  Rebecca Schnall; Marlene Rojas; Suzanne Bakken; William Brown; Alex Carballo-Dieguez; Monique Carry; Deborah Gelaude; Jocelyn Patterson Mosley; Jasmine Travers
Journal:  J Biomed Inform       Date:  2016-02-20       Impact factor: 6.317

Review 4.  Smart homes and home health monitoring technologies for older adults: A systematic review.

Authors:  Lili Liu; Eleni Stroulia; Ioanis Nikolaidis; Antonio Miguel-Cruz; Adriana Rios Rincon
Journal:  Int J Med Inform       Date:  2016-04-19       Impact factor: 4.046

5.  A framework for m-health service development and success evaluation.

Authors:  S Saeedeh Sadegh; Parisa Khakshour Saadat; Mohammad Mehdi Sepehri; Vahid Assadi
Journal:  Int J Med Inform       Date:  2018-04       Impact factor: 4.046

6.  How to formulate research questions and design studies for telehealth assessment and evaluation.

Authors:  Farhad Fatehi; Anthony C Smith; Anthony Maeder; Victoria Wade; Leonard C Gray
Journal:  J Telemed Telecare       Date:  2016-10-16       Impact factor: 6.184

7.  A model for assessment of telemedicine applications: mast.

Authors:  Kristian Kidholm; Anne Granstrøm Ekeland; Lise Kvistgaard Jensen; Janne Rasmussen; Claus Duedal Pedersen; Alison Bowes; Signe Agnes Flottorp; Mickael Bech
Journal:  Int J Technol Assess Health Care       Date:  2012-01       Impact factor: 2.188

8.  A Paradigm Shift: Sharing Patient Reported Outcome via a National Infrastructure.

Authors:  Karen Marie Lyng; Sanne Jensen; Morten Bruun-Rasmussen
Journal:  Stud Health Technol Inform       Date:  2019-08-21

9.  Evidence-Based Evaluation of eHealth Interventions: Systematic Literature Review.

Authors:  Amia Enam; Johanna Torres-Bonilla; Henrik Eriksson
Journal:  J Med Internet Res       Date:  2018-11-23       Impact factor: 5.428

Review 10.  A review of telehealth service implementation frameworks.

Authors:  Liezl van Dyk
Journal:  Int J Environ Res Public Health       Date:  2014-01-23       Impact factor: 3.390

