
National monitoring and evaluation of eHealth: a scoping review.

Sidsel Villumsen, Julia Adler-Milstein, Christian Nøhr.

Abstract

OBJECTIVE: There has been substantial growth in eHealth over the past decade, driven by expectations of improved healthcare system performance. Despite substantial eHealth investment, little is known about the monitoring and evaluation strategies used to gauge progress in eHealth availability and use. This scoping review aims to map the existing literature and depict the predominant approaches and methodological recommendations for national and regional monitoring and evaluation of eHealth availability and use, in order to advance national strategies for monitoring and evaluating eHealth.
METHODS: Peer-reviewed and grey literature on the monitoring and evaluation of eHealth availability and use, published between January 1, 2009, and March 11, 2019, was eligible for inclusion. A total of 2354 publications were identified, and 36 publications were included after full-text review. Data on publication type (eg, empirical research), country, level (national or regional), publication year, method (eg, survey), and domain (eg, provider-centric electronic record) were charted.
RESULTS: The majority of publications monitored availability alone or applied a combination of availability and use measures. Surveys were the most common data collection method (used in 86% of the publications). The Organization for Economic Co-operation and Development (OECD), the European Commission, Canada Health Infoway, and the World Health Organization (WHO) have developed comprehensive eHealth monitoring and evaluation methodology recommendations.
DISCUSSION: Establishing continuous national eHealth monitoring and evaluation, based on international approaches and recommendations, could improve the ability for cross-country benchmarking and learning. This scoping review provides an overview of the predominant approaches to, and recommendations for, national and regional monitoring and evaluation of eHealth. It thereby provides a starting point for developing national eHealth monitoring strategies.
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association.


Keywords:  medical informatics; program evaluation; review

Year:  2020        PMID: 32607495      PMCID: PMC7309231          DOI: 10.1093/jamiaopen/ooz071

Source DB:  PubMed          Journal:  JAMIA Open        ISSN: 2574-2531


INTRODUCTION

eHealth adoption has grown substantially, with high expectations for resulting improvements in healthcare system performance. eHealth investment has been motivated by the need to improve healthcare quality, clinical care processes, and patient safety. However, eHealth infrastructure has proven highly costly to procure and maintain. Given these large investments, there has been a demand for monitoring of the resulting adoption, use, and impact. Monitoring enables an understanding of what works and what does not, thus guiding improvements in implementation and adoption. Longitudinal monitoring can provide valuable feedback for adjusting and improving implementation strategy and the underlying policies, but it is often costly, time-consuming, and highly complicated. The fact that it may take years for potential benefits and consequences to appear underscores the complex nature of monitoring and evaluating eHealth. Although monitoring and evaluation are recognized as essential in formulating future eHealth policies, repeated monitoring of the implementation progress of those policies is scarce. In 2009, a comprehensive study was conducted by Empirica, on behalf of the European Commission, aiming “to collate and analyze existing eHealth monitoring and benchmarking sources in order to identify best practice in data gathering and to develop a framework for an EU-wide eHealth Benchmarking approach.” The report presents a comprehensive list of indicators and approaches. However, the eHealth landscape has progressed vastly in the past decade, calling for renewed methods for national monitoring. We therefore created an overview of the current approaches and methodologies for national and regional monitoring and evaluation of eHealth availability and use.
With this scoping review, we aim to provide an overview of the current literature produced by researchers, organizations, or government bodies, and to assess the foci, methodology, and scope of monitoring and evaluating eHealth. The focus of this scoping study lies not in addressing the quality of the studies and obtaining ‘best evidence’, but in creating an overview of the monitoring and evaluation activities to advance national strategies for monitoring and evaluating eHealth.

MATERIALS AND METHODS

This scoping review is based on the approach suggested by the Joanna Briggs Institute, adapted and developed from the five stages by Arksey and O’Malley and the enhancements proposed by Levac et al. The objective, inclusion criteria, and methods for the scoping review were presented in a protocol. The reporting of this scoping review follows the checklist and flow described in the PRISMA Extension for Scoping Reviews (PRISMA-ScR).

Identifying relevant studies

eHealth is “the application of information and communication technologies across the whole range of functions that affect the health sector and including products, systems, and services that go beyond simply Internet-based applications.” This scoping review was restricted to eHealth in primary and secondary care. The Joanna Briggs Institute methodology suggests that the scope of a review should balance feasibility with maintaining a broad and comprehensive approach. This led us to focus on the three most prominent domains within information and communication technology (ICT) in health: provider-centric electronic records, patient-centric electronic records, and health information exchange (HIE). The search strategy aimed to identify both peer-reviewed publications providing quantitative and/or qualitative evidence on monitoring or evaluating eHealth at a national or regional level, and other publications, whether peer-reviewed or not, including opinions and reports. The search targeted a number of potential sources, including journal citation databases, bibliographic databases, and output from known centers of excellence and governments. The protocol for this scoping review contains further information on the preliminary search strategies. A search for published scoping reviews did not reveal any with similar aims (databases searched: JBISRIR, PubMed, The Cochrane Library, CINAHL, SCOPUS, and Web of Science). To identify original peer-reviewed publications, we used the databases PubMed, SCOPUS, and Web of Science. Further, a structured search for grey literature, such as national or organizational reports, was performed using the Canadian Agency for Drugs and Technologies in Health (CADTH) checklist for grey literature, “Grey Matters”. Danish, Norwegian, and Swedish national bibliographic databases were searched to identify Scandinavian literature on the topic.
In addition, an informal chain search was applied through the reference lists of relevant publications. The structured search was divided into two sections: (1) monitoring or evaluating the availability of eHealth, and (2) monitoring or evaluating the use of eHealth. In the availability section, the key search terms were Monitoring and Evaluation, eHealth, and Availability. Each key term had several synonymous sub-terms. When applicable, major terms were used (ie, MeSH terms). To ensure the detection of literature not yet indexed with major terms, free-text search was also used. We did not seek to identify the specific metrics used for monitoring (ie, indicators), as these metrics would be specific to the organizational setup and context of national health systems. The full search strategy for PubMed can be found in Table 1. For further information on the search strategy, see the Supplementary material. The structured literature search was last performed on March 11, 2019.
Table 1.

Search strategy for PubMed

Monitoring and evaluation:
“Program Evaluation”[Mesh]; “Benchmarking”[Mesh]; “Process Assessment (Health Care)”[Mesh]; “evaluation”; “evaluating”; “monitoring”; “assessment”; “benchmark”; “benchmarking”

eHealth:
“Medical Informatics Applications”[Mesh]; “Medical Informatics”[Mesh]; “Information Systems”[Mesh]; “Medical Records Systems, Computerized”[Mesh]; “Telemedicine”[Mesh]; “Hospital Information Systems”[Mesh]; “Health Information Management”[Mesh]; “Telemedicine”; “Electronic medical record”; “Hospital information system”; “Electronic patient record”; “Health information management”; “Medical informatics”; “health information and communication technology”

Availability:
“Healthcare Disparities”[Mesh]; “Health Services Accessibility”[Mesh: NoExp]; “Diffusion of Innovation”[Mesh: NoExp]; “Accessibility”; “Availability”; “availabilities”; “Disparity”; “Disparities”

Combinations:

(((((((((“Health Information Management”[Mesh]) OR “Hospital Information Systems”[Mesh]) OR “Telemedicine”[Mesh]) OR “Medical Records Systems, Computerized”[Mesh]) OR “Information Systems”[Mesh]) OR “Medical Informatics”[Mesh]) OR “Medical Informatics Applications”[Mesh])) AND (((“Health Services Accessibility”[Mesh: NoExp]) OR “Diffusion of Innovation”[Mesh: NoExp]) OR “Healthcare Disparities”[Mesh])) AND (((“Benchmarking”[Mesh]) OR “Program Evaluation”[Mesh]) OR “Process Assessment (Health Care)”[Mesh])

((((((((((“benchmarking”) OR “benchmark”) OR “monitoring”) OR “assessment”) OR “evaluation”) OR “evaluating”)) OR (((“Benchmarking”[Mesh]) OR “Program Evaluation”[Mesh]) OR “Process Assessment (Health Care)”[Mesh]))) AND ((((((((“Telemedicine”) OR “electronic medical record”) OR “Hospital information system”) OR “medical informatics”) OR “Electronic patient record”) OR “Health information management”)) OR (((((((“Health Information Management”[Mesh]) OR “Hospital Information Systems”[Mesh]) OR “Telemedicine”[Mesh]) OR “Medical Records Systems, Computerized”[Mesh]) OR “Information Systems”[Mesh]) OR “Medical Informatics”[Mesh]) OR “Medical Informatics Applications”[Mesh]))) AND (((((((“Disparities”) OR “Disparity”) OR “Availability”) OR “availabilities”) OR “Accessibility”)) OR (((“Health Services Accessibility”[Mesh: NoExp]) OR “Diffusion of Innovation”[Mesh: NoExp]) OR “Healthcare Disparities”[Mesh]))

((((((((“Telemedicine”) OR “electronic medical record”) OR “Hospital information system”) OR “medical informatics”) OR “Electronic patient record”) OR “Health information management”)) AND ((((((“benchmarking”) OR “benchmark”) OR “monitoring”) OR “assessment”) OR “evaluation”) OR “evaluating”)) AND (((((“Disparities”) OR “Disparity”) OR “Availability”) OR “availabilities”) OR “Accessibility”)

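The combination queries in Table 1 follow a fixed pattern: one parenthesized OR-group per concept (monitoring and evaluation, eHealth, availability), joined with AND. A minimal sketch of assembling such a query programmatically; the term lists below are abbreviated excerpts, not the full strategy:

```python
# Sketch: building a PubMed-style boolean query as in Table 1's
# "Combinations" column. Term lists are abbreviated excerpts.
def or_group(terms):
    """Parenthesized OR-group of quoted free-text terms and MeSH headings."""
    return "(" + " OR ".join(terms) + ")"

monitoring = ['"Program Evaluation"[Mesh]', '"Benchmarking"[Mesh]', '"monitoring"']
ehealth = ['"Medical Informatics"[Mesh]', '"Telemedicine"', '"Electronic medical record"']
availability = ['"Availability"', '"Accessibility"', '"Healthcare Disparities"[Mesh]']

# One OR-group per concept, joined with AND: a record must match
# at least one term from each of the three concepts.
query = " AND ".join(or_group(g) for g in (monitoring, ehealth, availability))
print(query)
```

The same helper reproduces all three combination rows of Table 1 by varying which concept groups are joined (MeSH-only, mixed, or free-text-only).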

Study selection

An iterative approach to selecting literature was applied, entailing continuous assessment of the eligibility criteria and the screening process. A literature directory was created in Mendeley (Mendeley, v.1.19.4, Mendeley Ltd), and publications selected for screening were imported to Covidence, a web-based program for assisting review studies. All publications were checked for duplicates in both Mendeley and Covidence and screened by title and abstract, applying the eligibility criteria. Literature published or in press between January 1, 2009 and March 11, 2019 was eligible for inclusion. To enable a thorough understanding of the included publications, we included literature published in English or Scandinavian languages only. Literature was excluded if it (1) described only a single IT system, (2) was from a developing country, (3) described eHealth applications in dentistry, education and training of healthcare personnel, tele-homecare, telemedicine, nursing homes, or long-term care facilities, (4) had no full text available, (5) was an undergraduate, MSc, or PhD dissertation, or (6) was a book review. Following the screening on title and abstract, a full-text review was performed to determine final inclusion in this scoping review. All screening was performed by dual review, and differences in assessment were resolved through consensus. All materials were categorized into one of three broad categories, based on an approach described by Wong et al. and modified to our context:
Category 1: Peer-reviewed study with empirical measures of availability/use. Clear articulation of the methodological approach to monitoring or evaluating the availability or use of eHealth at a national or regional level, covering design, data collection, analyses, and relevance.
Category 2: Non-peer-reviewed report with empirical measures of availability/use. Reports by government or non-government bodies, health associations, professional bodies, and centers of excellence. We included these because national monitoring data are intended for public/broad consumption and therefore often not submitted for peer review.
Category 3: Methodology recommendations. Material presenting comprehensive recommendations on the methodology of national monitoring or evaluation of the availability and/or use of eHealth. Often non-peer-reviewed.
Based on the approach by Arksey and O’Malley and Meyer et al., information on category (ie, category 1, 2, or 3), country source, level of scope (national or regional), publication year, methods for monitoring or evaluating (eg, survey), whether data collection was one-off or repeated, primary purpose, and eHealth domain was entered in a data charting form; see Supplementary material.

Domains

A general issue when comparing systems and services across countries is the subtle differences in terminology and in the understanding of what constitutes, for example, an electronic health record. Collecting and comparing data on functionalities rather than systems is one way of overcoming these cultural differences. Having mapped the publications descriptively, we use a narrative analysis, anchored within an adaptation of the “Categories of broadly defined ICT domains” developed by the Organization for Economic Co-operation and Development (OECD) (see Table 2), to elaborate on how the monitoring and evaluation activities relate to the ICT domains in the health sector.
Table 2.

Categories of broadly defined ICT domains

Provider-centric electronic records: entry of core patient data (eg, medication allergies, clinical problem list); decision support (eg, drug–drug alerts); closed-loop medication administration; clinical documentation.

Patient-centric electronic records: viewing of clinical data (eg, test results); supplementation of clinical data (eg, entering or modifying current medications); appointment scheduling; medication renewal.

Health information exchange: secure messaging between professionals; ordering and reporting of medications and lab tests with result receipt; patient referrals.

Based on OECD.
Provider-centric electronic records cover the range of Electronic Medical Records (EMRs), Electronic Health Records (EHRs), and Electronic Patient Records (EPRs) and “include systems that are used by healthcare professionals to store and manage patient health information and data, and include functionalities that directly support the care delivery process.” The definition emphasizes that the users are healthcare professionals. From the patient perspective, patient-centric electronic records cover systems and functionalities such as Personal Health Records (PHRs) and patient portals, providing access to health information and allowing patients and their informal carers to “manage their health information and organize their health care.” HIE is the necessary link between different systems and organizations. It is the “process of electronically transferring, or aggregating and enabling access to, patient health information and data across provider organizations.”

RESULTS

The results of the search strategy provided a list of 1135 indexed publications for monitoring and evaluating eHealth availability and 1219 indexed publications for monitoring and evaluating eHealth use, see Figure 1. The grey literature search resulted in an additional 42 reports, and the informal search resulted in an additional 38 publications, including peer-reviewed original research, non-peer-reviewed papers, opinions, and reports.
Figure 1.

PRISMA flow chart.

A total of 117 full-text publications were reviewed, of which 80 were excluded for not fulfilling the eligibility criteria (eg, single system or wrong setting) or for lacking a clear description of the methodology. After removing one additional duplicate, a total of 36 publications were included in this scoping review. Of the 36 publications, 64% were empirical research studies, 28% were reports by governments or organizations, and 8% were published recommendations of methodology. Table 3 provides an overview of the characteristics of the included publications. The full data charting form is available in the Supplementary material.
Table 3.

Characteristics of publications included in this scoping review

Characteristic | n | %
Total number of publications identified | 36 | 100%
Category
  Category 1: Empirical research study | 23 | 64%
  Category 2: Published reports | 10 | 28%
  Category 3: Published recommendations of methodology | 3 | 8%
Source
  No country source (no data material) | 3 | 8%
  Single country source (EU member states) | 9 | 25%
  Single country source (non-EU member states) | 13 | 36%
  Multinational sources | 11 | 31%
  Of which:
    Covering 2–10 countries | 6 | 55%
    Covering >10 countries | 5 | 45%
Scope
  National scope | 30 | 83%
  Regional scope | 4 | 11%
  Other | 2 | 6%
Data collection methodology
  Survey | 31 | 86%
  Business process data | 3 | 8%
  Other methods or no data gathering | 6 | 17%
One-time or repeated
  Continuous/repeated | 15 | 42%
  Non-continuous/one-off activities | 17 | 47%
  Other or no data gathering | 4 | 11%
Primary purpose
  Measuring eHealth/ICT availability and use | 32 | 89%
  Of which focused on:
    Availability only | 12 | 37.5%
    Use only | 8 | 25%
    Availability and use | 12 | 37.5%
  Evaluation | 3 | 8%
  Other | 1 | 3%
eHealth domain (a)
  Provider-centric electronic records | 31 | 86%
  Patient-centric electronic records | 16 | 44%
  Health information exchange | 19 | 53%

(a) Each publication can cover more than one OECD domain. Published recommendations of methodology are noted as well.
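As a quick arithmetic check, the category percentages reported in Table 3 follow directly from the raw counts by rounding:

```python
# Reproducing the Category percentages in Table 3 from the raw counts.
counts = {
    "empirical research studies": 23,
    "published reports": 10,
    "methodology recommendations": 3,
}
total = sum(counts.values())  # 36 included publications
pct = {k: round(100 * v / total) for k, v in counts.items()}
print(total, pct)
```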

Figure 2 shows the distribution of publication years for the included publications. Publications were relatively evenly distributed throughout the period. The peak in 2013 reflects the Nordic eHealth Research Network reporting on the results of their mandate period, as well as the OECD publishing their Guide to Measuring ICTs in the Health Sector.
Figure 2.

Distribution of publication year (n = 36).

The geographical origin of the monitoring and evaluation activities (Figure 3) is distributed across the United Nations’ definitions of regions (UN regions), with Northern European countries leading in multinational-source publications and the United States leading in single-country-source publications. Figure 4 further shows the distribution of publications by UN-region data source and OECD domain.
Figure 3.

Distribution of publications on national monitoring and evaluating eHealth presented by single-country (n = 22) and multiple country sources (n = 11). Methodological recommendations are not included.

Figure 4.

Distribution of publications by UN-region data source and OECD-domain (n = 33). Methodological recommendations are not included.

Mapping the publications showed that the majority of monitoring and evaluation activities were set in the Northern European countries and the United States. In 89% of the publications, the primary purpose was to monitor the availability and use of eHealth. Surveys were most commonly used (86% of the publications), and 42% of the publications referred to continuous or repeated data gathering activities.

Provider-centric electronic records

Publications: n = 28. Category: 20 empirical studies and eight reports. Focus: The main aims were monitoring the availability and use of provider-centric electronic records or functionalities, for example, entering and viewing clinical data, medication lists, picture archiving, and clinical decision support. Two focused mainly on evaluation, for example, the impact on the organization. Several publications explicitly aimed at presenting and testing a novel or modified methodology or approach to monitoring or evaluating. Methods: Most publications had a national scope (n = 24). Surveys were the data collection method most used to gauge the availability and use of provider-centric electronic records. A few publications also used business data (eg, log files) to measure the availability and use of provider-centric electronic record functionality. Data collection was a mix of non-continuous (n = 18) and continuous or repeated activities (n = 10).

Patient-centric electronic records

Publications: n = 13. Category: Five empirical studies and eight reports. Focus: The main aims in all publications were monitoring the availability and use of patient-centric electronic records or functionalities, for example, online appointment scheduling, medication renewal, viewing of clinical data, and electronic communication with general practitioners. Four publications addressed citizens’ perceptions of eHealth. Methods: All publications had a national scope and used surveys to gauge the availability and use of patient-centric electronic records. One publication also used business data (ie, log files) to assess the amount of use of patient-centric electronic record functionality. Data collection was a mix of non-continuous (n = 7) and continuous or repeated activities (n = 6). Only two publications surveyed patients directly, and two publications used data already collected from citizen surveys performed in the Nordic countries.

Health information exchange

Publications: n = 16. Category: 10 empirical studies and six reports. Focus: The main aim was monitoring the availability and use of HIE (n = 15), such as ePrescriptions, eReferrals, and the exchange of clinical history, laboratory results, or radiology reports with external organizations. Only one publication evaluated the systems’ effect on the organization. Methods: Most publications had a national scope (n = 13). Surveys were the data collection method most used to gauge the availability and use of HIE functionalities. A few publications also used business data (eg, log files). Data collection was a mix of non-continuous (n = 6) and continuous or repeated activities (n = 10).

Methodological recommendations

Publications: n = 3. Origin: Canada Health Infoway, the European Commission, and the OECD. Focus: The publications present thorough methodological recommendations and approaches to monitoring and evaluating eHealth. Methodological recommendations and a wide selection of indicators are provided within different domains and functionalities. All OECD domains presented in Table 2 are addressed. Canada Health Infoway focuses on benefits-evaluation indicators, whereas the other publications aim at providing methodologies for cross-country benchmarking of eHealth availability and use. Methods: Data collection through survey methods is the main methodology described. Canada Health Infoway also emphasizes the use of business data (eg, log data and administrative data) and describes which indicators could be monitored by methods other than surveys. The methodology described by Canada Health Infoway focuses on national or regional evaluations, in contrast to the multinational scope of the European Commission and OECD.

DISCUSSION

This scoping review synthesizes the current literature on national approaches to monitoring and evaluation of availability and use of eHealth.

Monitoring availability and use of eHealth

While availability and use are distinct concepts, each of independent value to national measurement strategies, the literature reflects a lack of clear distinction between them. Many of the titles and abstracts of the publications state measurements of use but in fact monitor whether services or functionalities are available to the users. Several publications report use as the ability to use a given functionality or system, which is not the same as whether, and to what extent, the functionality is actually being used. Adoption, as a term, is often used when measures of the availability of eHealth functionality are applied as a proxy for actual use. This calls for a clearer distinction between monitoring the availability and the use of eHealth, as once saturation of availability is reached, use is the next step on the causal pathway to achieving impact. Hence, monitoring the actual use of a functionality, and whether it is used as intended, is a key element in evaluating the functionality and moving toward eHealth supporting clinical practice. Our study also reveals that only a few of the resources assessed national or regional eHealth impact as part of the monitoring strategy.

Some ICT domains better covered than others

The distribution of OECD domains covered in the included publications shows that provider-centric electronic records were by far the domain most often addressed (86% of all publications), whereas patient-centric electronic records were only addressed in 44% (Table 3). This could be ascribed to patient-centric electronic records being a relatively new point of focus, with no publications available before 2013. The focus on patient-centric electronic records varies across the regional distribution of the included publications. We found that the patient-centric domain is most frequently addressed in publications that include data from Northern European countries (Figure 4). This may be partly attributed to the Nordic countries’ focus on patient-oriented eHealth services. As eHealth evolves in complexity and coverage, new functionalities become focal points for monitoring and evaluation, and methods for doing so need to be addressed. Thus, methodological approaches to monitoring and evaluating eHealth must be under continuous development.

Methodological approaches and recommendations to monitoring and evaluating availability and use of eHealth

Surveys are by far the most common data gathering method for monitoring and evaluating the national availability and use of eHealth (used in 86% of the publications). Surveys are cost-efficient and can be used to obtain information on phenomena that are not easily measured otherwise. However, surveys are prone to issues of low external validity and bias; recall and social desirability biases are also common limitations. Using other, potentially more objective sources of data, for example log data, to monitor eHealth use is one way to circumvent the drawbacks of surveys. Harvesting log data from central servers may be a reliable and valid approach. However, only three publications explicitly used such data, likely because the needed centralized infrastructure does not exist and data on indicators of interest might not be logged in a manner that enables extraction. Furthermore, there is an issue of data ownership: private vendors typically regard their data models as intellectual property and therefore do not want them made public, which may be necessary for collecting national-level data. One way to enhance the possibilities of monitoring eHealth implementation through system-generated data is to define indicators up front and design the data model of the systems in a way that allows for easy data extraction. Even so, there may be discrepancies between the clinical work routine and how it is captured by the system. Therefore, a prerequisite for analyzing and interpreting such data is knowledge of the context. Our results also reveal the potential challenge of a lack of repeated national monitoring and evaluation efforts. Repeated or continuous data collection is needed to measure secular progress or to evaluate the impact of policy changes (or other interventions). Ongoing measures of eHealth progress therefore support evidence-based approaches to eHealth policy.
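To make the availability-versus-use distinction that system-generated data can capture concrete, here is a minimal sketch using entirely hypothetical log records and functionality names; no national log format or indicator set is implied by the review:

```python
# Hypothetical example: distinguishing functionality that is available
# from functionality that is actually used, based on
# (clinician, functionality, date) log events.
# All names and records below are invented for illustration.
from collections import Counter

available = {"e_prescribing", "decision_support", "secure_messaging"}
logs = [
    ("clin1", "e_prescribing", "2019-01-03"),
    ("clin1", "e_prescribing", "2019-01-04"),
    ("clin2", "decision_support", "2019-01-03"),
]

used = {functionality for _, functionality, _ in logs}
# Share of available functionality that shows any actual use:
use_rate = len(used & available) / len(available)
# Intensity of use per functionality (an "extent of use" indicator):
events = Counter(f for _, f, _ in logs)
print(f"{use_rate:.2f}", events["e_prescribing"])
```

In this toy data, secure messaging is available but never used, so an availability survey alone would overstate adoption, which is exactly the gap discussed above.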
We suspect that our finding that only 42% of the publications are part of, or present data from, continuous data gathering activities, such as annual or biannual surveys, reflects the time, resources, and complexity involved in large-scale data collection, as well as changing national priorities. As previously described, building measurement approaches that rely on system-generated indicators could help increase the ability to pursue repeated measures. Finally, our results reveal that, while there are national and international methodological approaches to eHealth monitoring, the multiple approaches are not harmonized. The OECD, the European Commission, and Canada Health Infoway have developed comprehensive approaches to eHealth monitoring and evaluation. The European Commission approach is only explicitly applied within the European Commission studies. Furthermore, the WHO developed its own approach to international eHealth monitoring; this approach has been applied in some of the included publications, but since the report describing it was published in 2008, the report itself was not included in this scoping review. Finally, the OECD and Canadian methodological recommendations for monitoring and/or evaluating availability and use are more frequently applied. The Canadian approach, which focuses on benefits evaluation, and the OECD approach, which aims at cross-country benchmarking, might be the most promising candidate methodologies for consistent national eHealth monitoring and evaluation.

Limitations of this scoping review

The search strategy required iteration because the terminology within the eHealth research field changed over the years, requiring new terms and definitions (eg, mHealth) to be added. In addition, many publications on eHealth monitoring and evaluation may only be disseminated through conferences or posters, which generally are not indexed in bibliographic databases. Thus, the choice of search terms and the focus on bibliographic databases may have induced selection bias. Most publications evaluated eHealth at the single-institution or single-system level and were therefore excluded. To capture a broader set of eHealth monitoring efforts, we included grey literature, and it is possible that our results would differ had we limited inclusion to the peer-reviewed literature. However, we do not believe that the peer-review process would fundamentally alter the content or methods of the monitoring that was the focus of our review.

CONCLUSIONS

Monitoring eHealth adoption is essential for providing an evidence base on which to formulate future national eHealth policies and for evaluating the effectiveness of those efforts. Monitoring the adoption and impact of eHealth is key to learning from past and current initiatives and to providing evidence on which decision-makers can base eHealth policy decisions. This scoping review provides an overview of the predominant approaches to and methodological recommendations for national and regional monitoring and evaluation of eHealth. To establish an evidence base for eHealth policies, monitoring and evaluation should be continuous, allowing trends and developments to unfold. Furthermore, applying a framework that allows for cross-country comparisons will broaden the evidence base of what works and what does not. Monitoring and evaluation activities should be transparent and published to facilitate benchmarking and learning. A key implication for practice is to establish a governance structure around national eHealth monitoring, ensuring repeated and valid data on eHealth implementation progress.

Author Contributions

The authors contributed to the manuscript as follows: Substantial contributions to the conception and design of the work (Villumsen and Nøhr); and the acquisition, analysis, or interpretation of data for the work (Villumsen, Adler-Milstein, and Nøhr). Drafting the work (Villumsen and Nøhr) and revising it critically for important intellectual content (Adler-Milstein). Final approval of the version to be published (all authors). Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved (all authors).