Literature DB >> 26770800

Benchmarking facilities providing care: An international overview of initiatives.

Frédérique Thonon1, Jonathan Watson2, Mahasti Saghatchian1.   

Abstract

We performed a literature review of existing benchmarking projects of health facilities to explore (1) the rationales for those projects, (2) the motivation for health facilities to participate, (3) the indicators used and (4) the success factors and threats linked to those projects. We studied both peer-reviewed and grey literature. We examined 23 benchmarking projects covering different medical specialities. The majority of projects used a mix of structure, process and outcome indicators. For some projects, participants had a direct or indirect financial incentive to participate (such as reimbursement by Medicaid/Medicare or litigation costs related to quality of care). A positive impact was reported for most projects, mainly in terms of improvement of practice and adoption of guidelines and, to a lesser extent, improvement in communication. Only 1 project reported a positive impact in terms of clinical outcomes. Success factors and threats are linked both to the benchmarking process (such as the organisation of meetings and links with existing projects) and to the indicators used (such as adjustment for diagnosis-related groups). The results of this review will help coordinators of benchmarking projects to set them up successfully.

Keywords:  Health-care benchmarking; health facilities; hospitals; quality improvement; quality indicators

Year:  2015        PMID: 26770800      PMCID: PMC4712789          DOI: 10.1177/2050312115601692

Source DB:  PubMed          Journal:  SAGE Open Med        ISSN: 2050-3121


Introduction

In an operating environment where funding for health care and research is increasingly insecure and competitive, health-care quality has become increasingly important.[1] One way of helping health facilities to maintain and improve high-quality health care is to benchmark their services. Benchmarking in health care is defined as a process of comparative evaluation and identification of the underlying causes leading to high levels of performance.[2] An important point in this definition is that benchmarking is not intended to be only a general measurement of one organisation (or part of an organisation) against another; it also includes the study and transfer of exemplary practice.[3] According to Stanford,[4] benchmarking is the process of identifying leaders in the field so that the practice of these leaders may be understood and emulated; the benchmark is considered the point of comparison. Another important point in benchmarking is to understand the processes by which performance can be enhanced, rather than simply to copy another process, as what is best for one organisation may be disastrous for another.[5]

Benchmarking was first developed for use by industries in the 1930s. In the health-care sector, comparison of outcome indicators dates back to the 17th century with the comparison of mortality in hospitals, but its use as a structured method began only in the mid-1990s. It emerged in the United States and the United Kingdom with the imperative of comparing hospital outcomes to rationalise their funding.[6] Van Lent et al. report the following definition of health-care benchmarking provided by Gift and Mosel: 'Benchmarking is the continual and collaborative discipline of measuring and comparing the results of key work processes with those of the best performers. It is learning how to adapt these best practices to achieve breakthrough process improvements and build healthier communities'.[7]

The rationale for health-care benchmarking is that institutions with excellent performance for a given outcome apply specific clinical practices that are most effective. They may also display structural or cultural organisational features that contribute to excellent outcomes.[8] By visiting these centres and reviewing the evidence in the literature, teams from other institutions can identify these practices and organisational features. Then, by applying methods learned in quality improvement training, the teams should be able to implement the identified practices and to modify their organisations in ways that lead to better outcomes.[8]

Benchmarking in health care has also undergone several modifications: initially, it was essentially the comparison of performance outcomes to identify disparities. It then expanded to include the analysis of processes and success factors for producing higher levels of performance. The most recent modifications to the concept relate to the need to meet patients' expectations.[6] Benchmarking can be extremely useful in supporting the development of good clinical practice because of its structure of assessment and reflection.[9] In essence, benchmarking is a collaborative rather than a competitive enterprise that initially involves the sharing of relevant information on the delivery of care with other organisations. The findings are shared, and elements of best practice are adopted with the aim of improving performance. That said, good practice in one health-care provider often cannot be transferred to another provider in the same country or across borders: different factors will affect performance, and these need to be identified and addressed as part of action to achieve improvements.
According to the Joint Commission Resources, there are two types of benchmarking: internal and external. Internal benchmarking compares different services within the same organisation; external benchmarking compares performance targets between different organisations. Benchmarking projects share a set of common activities: determining what to study, forming a benchmarking team, identifying benchmarking partners, collecting data, analysing data and taking action.[10] Van Lent et al. gave a detailed description of how a benchmarking process is conducted in health services. This 13-step process is detailed in Figure 1.[7]
Figure 1.

Description of the 13 steps of a benchmarking project according to Van Lent et al.[7]

In order to develop effective benchmarking of cancer hospitals, there is a need to fully understand the functioning of a benchmarking process and to learn lessons from previously successful benchmarking projects. A critical review of existing or past benchmarking projects can give valuable insight. Examining the motivations for developing a benchmarking project, and for health facilities to participate, can inform coordinators on how to design a project that is relevant and that participants will subscribe to. Analysing the factors that contribute to the success of a benchmarking project, and the threats to it, can help coordinators avoid those pitfalls in their own projects and increase their chances of success. Listing the indicators used in benchmarking projects, along with feedback on their use, can avoid duplication of work and prevent the use of indicators that are not pertinent or not feasible in practice. So far we have encountered no detailed review of existing benchmarking projects. One study[11] reviewed the literature on previous benchmarking in health care; however, this study, dating from 1997, reviews only 10 articles and focuses on health-care practices rather than the global benchmarking of health facilities. We have therefore conducted this review of existing and past benchmarking projects of health facilities with the aim of learning lessons to apply in the design of a new benchmarking project. Specifically, we wanted to explore: the rationale for the development of those benchmarking projects; the motivation for health facilities to participate; the indicators used in those projects, their validity and how they influence the benchmarking process; and the success factors and threats to those projects.

Methodology

We reviewed peer-reviewed and grey literature describing benchmarking projects for health facilities. We chose to also include grey literature related to the same projects, such as technical reports, user manuals or presentations of projects to stakeholders, in order to gain more in-depth information on the process of the projects.

Search strategy

As our review is focused on health benchmarking, we ran our search on the PubMed database only and did not expand it to databases containing non-health journals. We initially searched for articles using the following keywords: [benchmark*] AND ([health facilit*] OR [Hospital*]). We undertook a subsequent search using the keywords [Benchmark*] AND [Europe*] OR [international] in order to include European and international benchmarking projects. Through snowballing, we included relevant articles found in the references. After listing the projects mentioned in the articles, we searched for grey literature related to those projects using the websites of the benchmarking organisations listed or through a general Internet search.
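The boolean structure of the two searches above can be sketched as explicit query strings. This is an illustrative reconstruction only: the helper function and the parenthesisation are our own, not the authors' reported syntax. Grouping matters because PubMed-style interfaces generally process unparenthesised boolean operators left to right, so the second search as written could also match records containing "international" alone.

```python
# Illustrative sketch of the two boolean searches described above.
# The build_query helper and explicit parentheses are assumptions;
# the article reports only the bare keyword combinations.

def build_query(*groups: str, operator: str = " AND ") -> str:
    """Join parenthesised term groups with a boolean operator."""
    return operator.join(f"({g})" for g in groups)

# First search: benchmarking of health facilities or hospitals.
query1 = build_query("benchmark*", "health facilit* OR hospital*")

# Second search: grouping the OR clause keeps the search restricted
# to benchmarking articles about Europe or international settings.
query2 = build_query("benchmark*", "Europe* OR international")

print(query1)  # (benchmark*) AND (health facilit* OR hospital*)
print(query2)  # (benchmark*) AND (Europe* OR international)
```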

Inclusion and exclusion criteria

We defined the scope of our analysis to include all benchmarking projects conducted in health facilities. A health facility is a place that provides health care; it can be a hospital, a clinic, an outpatient care centre or a specialised care centre (http://www.nlm.nih.gov/medlineplus/healthfacilities.html). We chose to include literature related to projects that were defined by the author or the managing organisation as a benchmarking project and that focused either on the entire health facility or on one or more specialised units or services within it. To be included, a publication had to explain the development of a benchmarking project (including indicator selection) and/or give critical feedback on the project, such as assessing its impact, identifying success factors or threats, or drawing lessons. We encountered one article referring to a project that aimed to measure health-care systems, including but not limited to hospital care.[12] We chose to include this article because it was relevant to the review.

Data collection

For each benchmarking project, we collected general information about the project, analytical data and the indicators used. The general information was extracted in order to draw a general picture of the benchmarking projects and describe them. It included: the domain of application of the project (such as palliative care, oncology, emergency care …) and its setting (general hospital, cancer centre …); the geographical area; the scope of the project (regional, national or international); the number of facilities benchmarked; and the dates of the project. The analytical data were chosen according to our review objective. They included: the rationale for the development of the project (why was it developed?); data on the participation of facilities (what were the incentives to participate, and did the participation rate increase or decline over time?); if and how the identification of leaders and sharing of best practices was organised; the project's practice regarding data sharing and anonymity; the impact of the benchmarking project; and the success factors and threats to the completion of the project. Finally, we listed the indicators used in each project, noting whether the projects used existing indicators or developed new ones as part of the benchmarking project and, if so, how. The data extraction for each study was carried out by one author (F.T.) after the data extraction methodology had been tested on a sample of six studies by two authors (F.T., M.S.). The data extraction forms can be found in Appendix 1 in the supplementary material.

Results

Literature found

We found 38 peer-reviewed articles and 11 documents from the grey literature. Of the 38 research articles, 33 reported the outcomes of one or several benchmarking projects and 5 described the development of a benchmarking project or of indicators to use in benchmarking projects. Of the 11 grey-literature documents, 4 presented the results of benchmarking projects for stakeholders and 6 were practical manuals for users. One project (BELIEVE) was referred to in a peer-reviewed article that did not report on its implementation or development; we therefore included only the grey literature, and not the peer-reviewed article, related to that project.

Description of the benchmarking projects

We found a total of 23 benchmarking projects reported, including 4 that were only in the development phase at the time the articles were published (see Table 1). Most of the projects (N = 12) had a national scope, followed by international (N = 5), regional (N = 4) or European (N = 2) projects. The benchmarking projects applied either to the whole hospital, or to a care specialty (usually oncology) or a service (such as palliative care or emergency care). The complete overview of benchmarking projects is detailed in Table 1.
Table 1.

Overview of the benchmarking projects retrieved.

Project | References (peer-reviewed and grey literature) | Name of the project | Domain of application | Geographical area(s) | Scope | No. of facilities benchmarked | Dates/period
BMP1 | 13, 15 | National Oncology Practice Benchmark | Oncology | USA | National | 187 | 2007–2014
BMP2 | 16 | Benchmarking Lombardy | General hospitals | Lombardy (Italy) | Regional | 150 | 2011
BMP3 | 17 | Benchmarking for length of stay | General hospitals (length of stay) | Netherlands | National | 69 | 2006
BMP4 | 18, 20 | Benchmarking of breast cancer units | Breast cancer | Germany | National | 220 (in 2007) | 2003–present
BMP5 | 21, 22 | Benchmarking trauma centre | Trauma centres | UK and Australia | International | 2 | 2001–2002
BMP6 | 23, 27, 28 | National Mental Health Benchmarking Project | Mental health in 4 domains (general adult; child and adolescent; older person; forensic) | Australia | National | 23 | 2005–2008
BMP7 | 9, 29 | National Care of the Dying Audit of Hospitals (NCDAH) | Palliative care | UK | National | 40 | 2006–2007
BMP8 | 30, 31, 32, 34 | Performance Assessment Tool for Quality Improvement in Hospitals (PATH) | General hospitals | Belgium, Canada, Denmark, France, Slovakia, South Africa | International | 51 | 2005–2006
BMP9 | 35 | Danish Indicator Project | General health care | Denmark | National | 5 | 2003–2008
BMP10 | 36 | Nordic Indicator Project | Generic and disease-specific indicators, plus other general health service | Denmark, Finland, Greenland, Iceland, Norway, Sweden | European | Not specified | 2005
BMP11 | 37, 38 | Cancer Network Management Benchmarking | Cancer care | UK | National | 7 | 2007
BMP12 | 39 | Emerge | Emergency care | Switzerland | National | 12 | 2000
BMP13 | 40 | Benchmarking by the National Comprehensive Cancer Network (NCCN) | Clinical productivity in cancer care | USA | National | 13 | 2003
BMP14 | 41, 43, 44 | Benchmarking Collaborative Alliance for Nursing Outcomes (CALNOC) | General hospitals | California, Washington, Oregon, Arizona, Nevada, Hawaii | Regional | 196 | Since 1996
BMP15 | 7 | Benchmarking of comprehensive cancer centres | Cancer care (comprehensive cancer centre) | Not specified | International | 3 | 2009
BMP16 | 7 | Benchmarking of radiotherapy departments | Cancer care (radiotherapy departments) | Not specified | International | 4 | 2009
BMP17 | 7, 45 | Benchmarking of chemotherapy day units | Cancer care (chemotherapy day units) | USA and Europe | International | 3 | 2005
BMP18 | 46, 50, 51, 53 | Essence of Care | Nursing care | UK | National | Not specified | 2001–2010
BMP19 | 54 | BELIEVE | General hospitals (pain control) | Aquitaine (France) | Regional | 32 | 2009–2012
BMID1 | 55 | Consumer Quality Index | Cancer care | Netherlands | National | – | –
BMID2 | 56 | Hospital Information System (HIS) | Hospital information system | Austria | National | – | –
BMID3 | 12 | OECD Health-Care Quality Indicators | General health service | EU countries | European | – | –
BMID4 | 57 | Benchmarking patient satisfaction | Patient satisfaction in general hospital | Lombardy (Italy) | Regional | – | –

BMP: Benchmarking Project number; BMID: Benchmarking Indicators Development; OECD: Organisation for Economic Co-operation and Development; EU: European Union.


Indicators used in benchmarking projects

According to Donabedian,[58] indicators can be classified into three categories: structure indicators (measuring all factors that affect the context in which health care is delivered), process indicators (the sum of actions that make up health care) and outcome indicators (the effects of health care on patients or populations). Most of the projects used a mix of process, structure and outcome indicators (N = 6) or a mix of process and outcome indicators (N = 9). Four projects used process indicators only, two used a mix of process and structure indicators and two used outcome indicators only. Two projects used unusual indicators: one about hospital information systems and one about clinical productivity. One benchmarking project (the National Oncology Practice Benchmark) used two levels of indicators, with 'core' data and 'additional' data. Many indicators focus on patient/user satisfaction. The complete list of indicators used in those projects, including the methodology used to select or develop them, can be found in Appendix 2 in the supplementary material. For most projects (N = 15), benchmarks were developed as part of the project; for others (N = 4), the project coordinators used established benchmarks such as national or international standards. The organisations used classical methods to develop and select indicators, such as expert consultation (including Delphi surveys or other consensus methods), literature searches, interviews or clinical guidelines in place. Only 1 project (the Essence of Care project) included patients and carers in the definition of best practices for benchmarks.

Analysis of the benchmarking projects

A summary analysis of the benchmarking projects can be found in Table 2.
Table 2.

Summary of the analysis of benchmarking projects.

BMP1 – National Practice Benchmarking. Rationale: to promote measurement of clinical activity. Participation: decrease in participation after 8 years. Success factors: to make the survey more accessible, it was stratified into 2 sections (minimum data set and extra).

BMP2 – Benchmarking Lombardy. Rationale: to give feedback to hospitals about their performance and create a culture of evaluation; few existing analyses of performance. Impact: it helped directors draw plans to improve critical areas. Success factors: adjustment for diagnosis-related groups; use of regional administrative data, so employees were more likely to accept the results. Failure factors: public disclosure of results might promote risk-averse behaviour by providers (discouraging them from accepting high-risk patients); this is subject to debate.

BMP3 – Benchmarking for length of stay. Rationale: to determine the potential for reduction in length of stay. Participation: full participation at the beginning, then more hospitals stopped participating because they were engaged in other compulsory registration projects. Impact: it helped to identify the medical specialties in which a decrease in length of stay is most achievable.

BMP4 – Benchmarking of breast cancer units. Rationale: to ensure that care provided to breast cancer patients was based on clinical guidelines and quality assurance. Participation: voluntary; increase in specialist breast centres participating in the programme from 2003 to 2009. Impact: improvement on many clinical indicators and indicators of use of clinical guidelines. Success factors: the project was voluntary and used anonymised data.

BMP5 – Benchmarking of trauma centres. Rationale: to improve outcomes of the trauma centres. Impact: highlighted the need for greater cooperation between trauma registry programme coordinators to ensure standardisation of data collection. Failure factors: crude hospital mortality is not a robust indicator for trauma centres as it does not take into account mortality after discharge.

BMP6 – National Mental Health Benchmarking Project. Rationale: part of the National Mental Health Strategy. Participation: selection criteria set for candidate organisations. Impact: modification of practices. Success factors: commitment of the management and securing resources; feeding benchmarking data back to clinical staff to maintain their motivation; forums that gave participants the opportunity to discuss the performance of their organisation and draw lessons from other organisations. Failure factors: data quality and variability in information systems/data interpretation.

BMP7 – NCDAH. Rationale: measuring quality in palliative care is challenging. Participation: 13% increase in programme participation between round 2 and round 3. Impact: improvement in practices and in communication between health professionals; participants found the exercise useful and reported that it improved care in their organisation. Success factors: holding a workshop for participants to reflect on data, enhance understanding and learn from others. Failure factors: the feedback report should not contain too much data or overly complex information.

BMP8 – PATH. Rationale: in Europe, hospital performance assessment is a priority for the WHO Regional Office for Europe, and there are few initiatives to compare hospital performance internationally. Participation: 66 hospitals initially registered, but a total of 51 actually participated. Impact: participation facilitated the integration of different quality assessment activities and data collection; in some countries it was a stepping stone for starting quality implementation projects where there were none. Success factors: a stronger focus on international comparisons and improved validity. Failure factors: lack of personnel, expertise and time for participating hospitals to collect data; some issues addressed by the indicators felt too vague and difficult to put in place; competing priorities and reorganisation of hospitals; competing or overlapping projects.

BMP9 – Danish Indicator Project. Rationale: no systematic outcome assessment of patient care. Participation: mandatory for all hospitals and relevant clinical departments and units treating patients with the 8 diseases. Impact: increase in the percentage of patients receiving recommended care and interventions according to national practice guidelines; improvement in waiting time; for lung cancer patients, a concerted action was set up to improve this area. Success factors: easy data collection (in the participating hospitals, data are collected electronically and transmitted safely via the Internet to the project's national database); in Denmark it is possible to assign a unique patient identifier, which facilitates data collection.

BMP10 – Nordic Indicator Project. Rationale: need to document and monitor the quality of health service performance; desire for transparency and accountability. Impact: it allowed evidence to be gathered about differences in survival rates from prostate cancer. Failure factors: not all countries are equally able to track patients after hospital discharge (some countries assign unique patient identifiers, others do not).

BMP11 – Cancer Network Management Benchmarking. Rationale: the United Kingdom has the worst cancer survival rate in Europe; the benchmarking project was set up to support a quality improvement strategy. Success factors: using a mix of structure, process and outcome indicators.

BMP12 – Emerge. Rationale: to improve the quality of care in hospitals. Participation: voluntary. Impact: quality improvement between the two cycles of benchmarking. Success factors: interpretation of results should be guided by a culture of organisational learning rather than individual blame. Failure factors: in the emergency department, there is a selection bias in patient surveys.

BMP13 – Benchmarking NCCN. Rationale: no information on clinical productivity. Participation: participating centres are members of the NCCN.

BMP14 – Benchmarking CALNOC. Rationale: nurses comprise the largest group of professionals employed in hospitals and are thus uniquely positioned to significantly influence patient safety and quality of care. Participation: low attrition rate (fewer than 3% of hospitals withdrawing from the project since 1998); measures are tied to reimbursement, possibly providing financial incentives for hospitals to participate. Impact: participating CALNOC hospitals reduced their hospital-acquired pressure ulcer rates from 10% to 2.8%, with half of the hospitals achieving 0%. Success factors: outcome measures include not only injuries but also near-misses, allowing the system to be corrected; CALNOC also offers educational and consultancy services in best practices, possibly contributing to the success of the project.

BMP15 – Benchmarking of comprehensive cancer centres. Participation: centres selected by a case study. Success factors: internal stakeholders must be convinced that others might have developed solutions for problems that can be translated to their own settings; management must reserve sufficient resources for the total benchmarks; limit the scope to a well-defined problem; define criteria to verify the comparability of benchmarking partners based on subjects and process; construct a format that enables a structured comparison; use both quantitative and qualitative data for measurement; involve stakeholders to gain consensus about the indicators; keep indicators simple so that enough time can be spent on the analysis of the underlying processes; for indicators showing a large annual variation in outcomes, consider measurement over a number of years; adapt the identified better working methods so that they comply with other practices in the organisation. Failure factors: due to different reimbursement mechanisms in different countries, the use of financial indicators is complex; when the comprehensive cancer centre is in the middle of a complex merger.

BMP16 – Benchmarking of radiotherapy departments. Participation: centres selected by a case study. Failure factors: measuring the percentage of patients in clinical trials is not useful for radiotherapy; as some indicators were subject to large yearly variations, measuring indicators over a 1-year period does not always give a good impression of performance.

BMP17 – Benchmarking of chemotherapy units. Rationale: part of applying a business approach to improve the efficiency of chemotherapy by identifying best practices. Participation: centres selected by a case study. Impact: best practices from benchmarking were used in discussions about the planning system; benchmarking made the partners aware that other organisations with similar problems were able to achieve better outcomes. Success factors: benchmarking should not only be used for comparison of performance but also to gain insight into underlying organisational principles. Failure factors: using business jargon can make medical and care professionals feel left out.

BMP18 – Essence of Care. Rationale: there are unacceptable variations in the standards of care across the countries, and reports showed a decline in the quality of care; the costs associated with litigation for negligence might be a factor in the development of quality initiatives. Participation: no information. Impact: many improvements were reported at the local rather than the institutional level; improved motivation of staff after receiving positive feedback; in one area, the benchmarking process itself brought together sections of the division that would not normally meet; the benchmarking process gave more power and authority to matrons. Success factors: high awareness of the project among nurses; the project is seen as a top priority at the clinical governance level. Failure factors: although the definition of standards was detailed, the process for measuring them was not; lack of dedicated funding; lack of interest by physicians (seen as a nurse initiative).

BMP19 – BELIEVE. Rationale: to improve pain control. Participation: mix of public and private health facilities; medical and surgical services. Impact: 52 action plans written, including training, adaptation of patient records, protocols and development of pain measurement tools; pain control put higher on the agenda and staff made more aware of it; improvement of practices. Success factors: project piloted by the CCECQA, an organisation that most hospitals are familiar with and that has a good reputation for its work; the benchmarking process was transparent; before audit visits, a meeting was organised to share experiences. Failure factors: questions that are difficult to interpret; too heavy a workload.
BMP: Benchmarking Project number; CALNOC: Collaborative Alliance for Nursing Outcomes; CCC: Comprehensive Cancer Centre; CCECQA: Committee for Coordination of Evaluation and Quality in Aquitaine; WHO: World Health Organisation; PATH: Performance Assessment Tool for Quality Improvement in Hospitals; NCCN: National Comprehensive Cancer Network.

List of indicators used in projects in Appendix 2 in supplementary material and full table on the website http://www.oeci.eu/benchcan


Rationale for the development of benchmarking projects

Improving quality of care, fighting inequalities in care delivery and measuring quality were presented as the main reasons for developing a benchmarking project. Most of the projects resulted from a 'top-down' approach to quality-of-care improvement: 12 projects were initiated by an official health body, such as a health agency or administration, and 4 projects were reported as being the initiative of a network of facilities (for the remaining 3 projects, this was not specified). Three projects were developed to measure care specialties not usually measured, or not measured in depth, such as clinical productivity, trauma care or palliative care. For 1 project (the benchmarking of chemotherapy units), the benchmarking was part of a business approach to improve efficiency.[45] One article discussing the 'Essence of Care' project mentioned the rise in litigation costs for negligence (in the case of pressure ulcers) as one of the reasons for developing quality control initiatives.[46]

Incentives of hospitals to participate in benchmarking

We found little information about the participation of hospitals or health facilities in the benchmarking process. For 11 projects, participation was noted as voluntary, and for only 1 project (the Danish Indicator Project) was it mandatory.[35] For 7 projects, it was not documented whether participation was mandatory or voluntary. An increase in participation was noted for the benchmarking by the German Cancer Society/German Society of Senology (Deutsche Krebsgesellschaft (DKG)/Deutsche Gesellschaft für Senologie (DGS)) and for the National Care of the Dying Audit of Hospitals (NCDAH) project. A decrease in participation was noted for 2 projects: the National Practice Benchmark, after the project had been running annually for 8 years,[15] and the benchmarking of length of stay in hospitals by the National Medical Registration. In the latter case, the decrease was explained by the fact that more hospitals became engaged in another compulsory registration project.[17] Little information was available about the incentives for centres to participate in benchmarking projects. For the Collaborative Alliance for Nursing Outcomes (CALNOC) project, a financial incentive was mentioned: the measurement of quality indicators is tied to reimbursement from Medicaid and Medicare. From 2009 onward, Medicaid and Medicare services withheld reimbursement for treatments related to hospital-acquired pressure ulcers, one of the indicators measured by the project, hence the need to improve quality in those areas.

Impact of the benchmarking projects

A positive impact was reported for 14 projects. Two benchmarking projects resulted in changes at the institutional level, such as the setting up of action plans in critical areas.[16,54] Improvements in clinical outcome indicators were reported for the benchmarking of breast cancer units in Germany and for the CALNOC project, while improvements in practices or in the use of guidelines were reported for the benchmarking of breast cancer units in Germany and for the NCDAH, Danish Indicator, Essence of Care and BELIEVE projects. Three projects resulted in increased communication or collaboration between health professionals or between services in a hospital that previously did not communicate well (benchmarking of trauma centres, NCDAH, Essence of Care). Two benchmarking projects resulted solely in the validation (or invalidation) of a method or of indicators: the project conducted by the National Comprehensive Cancer Network, which developed and tested a methodology to measure the clinical productivity of oncology physicians without measuring any changes in productivity induced by the project, and a second benchmarking of trauma centres project. Similarly, the Benchmarking for length of stay[17] and Nordic Indicator[36] projects made it possible to gather data and draw policy conclusions, but no impact of the projects on those indicators was reported.

Success factors or threats linked to the benchmarking process

One article exploring the benchmarking of Comprehensive Cancer Centres[7] produced a detailed list of success factors for benchmarking projects (see Table 2). One of the factors mentioned – management’s dedication to the benchmarking project – was also mentioned as a critical determinant of success or failure in three projects: the Performance Assessment Tool for Quality Improvement in Hospitals (PATH) project,[30] the Essence of Care project[49] and the Australian National Mental Health Benchmarking Project.[25] Whether results should be made public or not was debated. Literature about the benchmarking project by the DKG/DGS mentioned the anonymity of centres as one success factor for the participation of centres.[20] But Berta et al.[16] argued that public disclosure of results might promote risk-averse behaviour from providers, discouraging them from accepting high-risk patients, while acknowledging that it can drive quality improvement. Organising a meeting for participants, either before or after the audit visits, was mentioned as a success factor in three projects.[9,25,54] Those workshops or forums provided the opportunity for participants to network with other organisations, discuss the meaning of data and share ideas for quality improvements and best practices. The existence of competing or overlapping projects was mentioned as a threat for two projects: this co-existence sometimes benefited and sometimes threatened the PATH project,[30] while for the Benchmarking project for length of stay, the fact that some hospitals engaged in another compulsory registration project explained a drop in participation after a few years. Finally, only for seven projects did the literature mention the identification of leading health facilities and the sharing of best practices. This was organised either through tools or databases developed to that effect, or through meetings, workshops or networking events between hospitals.
For the remaining projects, no mention of sharing best practices is made.

Success factors and threats linked to indicators or data collection

One recurring issue across the benchmarking projects concerns the complexity and amount of data collected. In the NCDAH project, while most participants agreed that the feedback report contained the right amount of information, some felt that the data were too complex and the reports contained too much information. Participants in the BELIEVE benchmarking project felt that the burden of participating was too heavy or had not been properly evaluated beforehand. An evaluation of the PATH project reported broad agreement that the burden of data collection was too great for the following indicators: prophylactic antibiotic use, surgical theatre use, training expenditure and absenteeism. In addition to the data collection burden, the definition and methodology of indicators are of crucial importance. The feedback on the PATH project reported major disagreements regarding the definition of three indicators: training expenditure, health promotion budget and patient-centredness. Those indicators were later abandoned for the project.[30,31] Participants in the BELIEVE project also reported difficulties in interpreting the questions (which were resolved during training sessions). Adjusting indicators for diagnostic-related groups was mentioned as one success factor of the Benchmarking project in Lombardy.[16] Indeed, this adjustment allowed fairer comparison and enabled identification of the areas most in need of improvement. Using a combination of process and outcome indicators, rather than outcome-only measures, was considered beneficial. The advantage of process indicators over outcome indicators is that they reflect true variations in care delivery, while outcome indicators can be influenced by other factors.[57] Including process indicators in benchmarking projects thus allows remedial actions to be identified.
This finding is similar to one of the conclusions of the CALNOC project: process measures include near-misses, which allows the system to be corrected.[43] Different projects had different policies regarding the public release of data. Two projects (the National Practice Benchmark project and the NCDAH project) released only anonymised data or average results, or no data at all, even to the project participants. Two projects disclosed nominative data, but only to the benchmarking participants, while releasing only anonymised data to the public (the CALNOC project and Australia’s mental health project). Six projects disclosed hospital data publicly but anonymously. The Lombardy Benchmarking project shared its results with health-care providers outside the benchmarking project, and sharing data with patients was under discussion. Two projects (the Nordic and Danish Indicator projects) publicly disclosed nominative hospital data. For the remaining projects, data release or sharing between participants was not mentioned. Finally, other lessons mentioned in the articles are: the use of regional data might be more acceptable;[16] crude mortality rate might not be a valid indicator as it does not take into account mortality after discharge;[22] the use of financial indicators is especially complex owing to different reimbursement mechanisms in different countries;[7] and, as some indicators were subject to large year-to-year variations, measuring indicators over a 1-year period does not always give a good impression of performance.[7]

Discussion

The aim of our review was to analyse different European and international benchmarking projects of hospitals or health-care facilities in order to draw important lessons, avoid duplication of work and identify the success factors and threats linked to the benchmarking of hospitals. We analysed the peer-reviewed and grey literature related to 18 benchmarking projects and 4 indicator development projects for benchmarking. Improving quality of care was mentioned as the most important motivation for health authorities to develop benchmarking projects, reflecting a rising demand for accountability and transparency of care.[36] In some cases, this demand appears to have financial consequences. Indeed, a rise in litigation costs linked to negligence in care and the withholding of reimbursement for treatment of conditions preventable by improved care are mentioned as reasons for the development of, or participation in, respectively, the Essence of Care and CALNOC projects. This issue has been documented more often in the United States. Indeed, the rise of performance measurement and comparison by the Health Care Financing Administration (the agency responsible for administering the Medicaid and Medicare programmes) has also been noted in a previous article discussing quality measurement in US nursing homes.[59] Those aspects could be viewed as direct (in the case of the CALNOC project) or indirect (for the Essence of Care project) incentives. The effect of financial incentives for performance on hospitals is a controversial subject. A recent review of Pay for Performance initiatives concluded that individuals tend to respond more strongly to negative incentives than to positive incentives of equivalent size, but that negative incentives are likely to be perceived as unfair and may result in negative reactions.[60] As explained earlier, the decision to initiate a benchmarking project was most often a top-down one, but the participation of facilities was voluntary for all projects except one.
For only two projects was a possible financial motive for facilities mentioned (costs of litigation or withheld Medicare/Medicaid reimbursement due to poor outcomes), and we did not find other information about hospitals’ incentives to participate in such projects. The increase in participation in the Benchmarking of breast cancer units by the DKG/DGS and in the NCDAH possibly reflects the growing popularity of those projects. However, we did not find an analysis of the reasons for that increasing success. On the other hand, some projects saw their participation decline over time. This could be due to difficulties in maintaining participants’ interest, or to the appearance of concurrent quality improvement projects. Indeed, the existence of competing or overlapping projects was mentioned as one threat to the implementation of the PATH project. We did not observe any apparent link between the number of facilities participating in a project and the outcome of the project or its success factors and threats. In most documented cases, the impact of the project was reported as positive, resulting in change at the institutional level, improvement in clinical outcomes, increased use of guidelines or improvements in communication. It is interesting to note that, while most projects used a mix of structure, process and outcome indicators, or of process and outcome indicators, most of the reported positive impacts of the benchmarking projects are linked to process measures. Only one benchmarking project reported an impact in terms of outcomes. The success factors for the conduct of a benchmarking project include the necessity of using comparable data (adjusted for case mix or other factors).
Unsurprisingly, this is consistent with the findings of a previous literature review, as is the recommendation to organise a meeting for participants.[11] Indeed, that review reports, on the basis of previous studies, that focus group meetings and interviews are a central component of benchmarking, providing information that serves to identify problems, issues, concerns and possible unmet needs from the perspective of both the users of the service and service providers. This dynamic of comparing and learning from each other distinguishes benchmarking from other quality improvement processes. Whether data should be made public is one point of controversy among the articles. One study[20] noted the anonymity of centres as a success factor, while another argued that the public disclosure of results was suspected of promoting risk-averse behaviour from providers, while acknowledging that such disclosure could drive quality improvement. In our review, the public disclosure of nominative data was rare, and some projects shared only anonymised or average data, even among the benchmarking participants. This controversy is not limited to the articles included in our review. Advocates of report cards believe that publicly releasing performance data on hospitals will stimulate hospitals and clinicians to engage in quality improvement activities and increase the accountability and transparency of the health-care system. Critics argue that publicly released report cards may contain data that are misleading or inaccurate and may unfairly harm the reputations of hospitals and clinicians. They are also concerned that report card initiatives may divert resources away from other important needs.[61] Although there is evidence that public reports do not affect patients’ choice of hospital,[62] the impact on quality is unclear.
It appears that hospitals subject to public reporting have engaged in quality improvement initiatives,[61,63] but the evidence on process and outcome indicators is mixed.[61,62] Projects used a wide range of approaches to define and select indicators, such as interviews, focus groups, literature reviews and consensus surveys. We noted that one project (Essence of Care) included patients’ feedback when defining the best practices measured by indicators, and it was the only one to do so. The involvement of patients in the quality policy of a health facility is highly encouraged,[64] yet it seems that this practice is still not widely implemented.[64] Patient involvement in the definition of quality indicators, and research on this subject, appear very scarce: a systematic review conducted in 2013 found only 11 scientific articles describing how patients are involved in quality indicator development.[65] None of those studies compared different approaches or explained how patients’ contributions led to changes in the resulting quality indicators. Our review confirms those results, as the literature related to the ‘Essence of Care’ project did not detail precisely how patients and carers were involved in the definition of best practices. More research is needed on this subject. Other projects, while not involving patients in indicator selection, used patient satisfaction surveys as part of the indicators measuring the quality of their hospitals. The literature on this subject confirms that patient experience measures are an appropriate complement to clinical quality measures. Patient satisfaction is linked with better patient adherence to treatment protocols, best-practice clinical processes, a better safety culture and better clinical outcomes.[66]

Policy implications

Policy makers or programme coordinators who want to develop benchmarking projects of hospitals or health facilities should learn from previous projects. First and foremost, securing the commitment of the management teams of participating hospitals and the allocation of sufficient resources is paramount to the development of a benchmarking exercise. Given the time and effort required to take part in a benchmarking project, developers should reflect on incentives for health facilities to keep participating over time. One important challenge in developing a benchmarking project is the issue of data sharing. On one hand, sharing data between partners of a benchmarking project is essential for hospitals to learn best practices; on the other hand, the request to share confidential data could deter health facilities from participating and therefore jeopardise the project’s success. The projects reviewed adopted diverse policies on this point, but anonymising or clustering data could be a suitable option. Project coordinators should develop clear guidelines on this subject in consensus with the partners and participating health facilities. In terms of indicators, using a mix of process, structure and outcome indicators seems the most effective, and adjusting clinical outcome indicators for diagnostic-related groups is more appropriate and better accepted, as it leads to fairer comparisons between hospitals. A lack of clarity about the calculation of indicators has been reported as a problem and can lead to invalid results and unfair comparisons. The methodology for indicators needs to be very clear, as light a burden as possible and feasible for all participants.
Finally, coordinators of benchmarking programmes should provide opportunities for participants to meet and exchange with other participants in order to promote the dissemination of good practices.

Strengths and limitations of the review

Our review analysed different benchmarking projects from around the world. To our knowledge, this is the first in-depth, global analysis of benchmarking projects of health-care facilities. We were able to collect relevant information to be used for the development of future benchmarking projects. One of the strengths of our review is that we included material from the grey literature as well as peer-reviewed articles. However, our review is not without limitations. While we tried to include a diversity of benchmarking projects, it should be noted that our review was not meant to be exhaustive or systematic. We might have missed national projects with material written in languages other than French or English. We started from a search of the scientific literature and, by snowballing, included grey literature related to each benchmarking project. However, many projects were not reported in any peer-reviewed article, so we did not include them. This is justified by our objective of retrieving in-depth analysis and feedback from projects, which might be missing in grey literature publications. We should also note limitations about the data we encountered. As the evidence on the impact of benchmarking projects and on sharing data to yield best practices was limited, we were not able to perform a strong analytical comparison between studies. However, we were able to describe how benchmarking studies report on those projects.

Conclusion

We reviewed the peer-reviewed and grey literature about benchmarking projects in order to draw lessons that can be applied when developing new benchmarking projects, avoid duplication of work and identify the success factors and threats linked to the benchmarking of hospitals. We hope that this review and the related material we present will be of interest to those who plan to participate in, coordinate or research benchmarking in health care. Although the literature we studied reported a positive impact for most of the benchmarking projects, this impact is mainly at the structure and process level. There is a lack of evidence about the impact of benchmarking on patient benefit. Future research should investigate the long-term impact of benchmarking health facilities, particularly in terms of patients’ outcomes and the learning of best practices.
References (50 in total; first 10 shown)

1.  Are nurses engaged in quality initiatives?

Authors:  Michelle Mello; Jane Cummings
Journal:  Nurs Times       Date:  2011 Sep 20-26

2.  Developing a booklet to share best practice in implementing Essence of Care benchmarks.

Authors:  Ann Gibbins; Jane Butler
Journal:  Nurs Times       Date:  2010 Mar 30-Apr 5

3.  A performance assessment framework for hospitals: the WHO regional office for Europe PATH project.

Authors:  J Veillard; F Champagne; N Klazinga; V Kazandjian; O A Arah; A-L Guisset
Journal:  Int J Qual Health Care       Date:  2005-09-09       Impact factor: 2.038

4.  Hospital performance reports: impact on quality, market share, and reputation.

Authors:  Judith H Hibbard; Jean Stockard; Martin Tusler
Journal:  Health Aff (Millwood)       Date:  2005 Jul-Aug       Impact factor: 6.301

5.  National benchmarking between the Nordic countries on the quality of care.

Authors:  Jan Mainz; Morten Hjulsager; Mette Thorup Eriksen Og; Jytte Burgaard
Journal:  J Surg Oncol       Date:  2009-06-15       Impact factor: 3.454

6.  Exploring variation in pressure ulcer prevalence in Sweden and the USA: benchmarking in action.

Authors:  Lena Gunningberg; Nancy Donaldson; Carolyn Aydin; Ewa Idvall
Journal:  J Eval Clin Pract       Date:  2011-06-22       Impact factor: 2.431

7.  Optimizing the quality of breast cancer care at certified German breast centers: a benchmarking analysis for 2003-2009 with a particular focus on the interdisciplinary specialty of radiation oncology.

Authors:  Sara Y Brucker; Markus Wallwiener; Rolf Kreienberg; Walter Jonat; Matthias W Beckmann; Michael Bamberg; Diethelm Wallwiener; Rainer Souchon
Journal:  Strahlenther Onkol       Date:  2011-01-21       Impact factor: 3.621

8.  Benchmarks in clinical productivity: a national comprehensive cancer network survey.

Authors:  F Marc Stewart; Robert L Wasserman; Clara D Bloomfield; Stephen Petersdorf; Robert P Witherspoon; Frederick R Appelbaum; Andrew Ziskind; Brian McKenna; Jennifer M Dodson; Jane Weeks; William P Vaughan; Barry Storer; Sara Perkel; Marcy Waldinger
Journal:  J Oncol Pract       Date:  2007-01       Impact factor: 3.840

9.  Effectiveness of public report cards for improving the quality of cardiac care: the EFFECT study: a randomized trial.

Authors:  Jack V Tu; Linda R Donovan; Douglas S Lee; Julie T Wang; Peter C Austin; David A Alter; Dennis T Ko
Journal:  JAMA       Date:  2009-11-18       Impact factor: 56.272

10.  Investigating the use of patient involvement and patient experience in quality improvement in Norway: rhetoric or reality?

Authors:  Siri Wiig; Marianne Storm; Karina Aase; Martha Therese Gjestsen; Marit Solheim; Stig Harthug; Glenn Robert; Naomi Fulop
Journal:  BMC Health Serv Res       Date:  2013-06-06       Impact factor: 2.655
