Literature DB >> 27507233

Do university hospitals perform better than general hospitals? A comparative analysis among Italian regions.

Sabina Nuti, Tommaso Grillo Ruggieri, Silvia Podetti

Abstract

OBJECTIVE: The aim of this research was to investigate how university hospitals (UHs) perform compared with general hospitals (GHs) in the Italian healthcare system.
DESIGN AND SETTING: 27 indicators of overall performance were selected and analysed for UHs and GHs in 10 Italian regions. The data refer to 2012 and 2013 and were selected from two performance evaluation systems based on hospital discharge administrative data: the Inter-Regional Performance Evaluation System developed by the Management and Health Laboratory of the Scuola Superiore Sant'Anna of Pisa and the Italian National Outcome Evaluation Programme developed by the National Agency for Healthcare Services. The study was conducted in 2 stages and by combining 2 statistical techniques. In stage 1, a non-parametric Mann-Whitney U test was carried out to compare the performance of UHs and GHs on the selected set of indicators. In stage 2, a robust equal variance test between the 2 groups of hospitals was carried out to investigate differences in the amount of variability between them.
RESULTS: The overall analysis gave heterogeneous results. In general, performance was not affected by being in the UH rather than the GH group. It is thus not possible to directly associate Italian UHs with better results in terms of appropriateness, efficiency, patient satisfaction and outcomes.
CONCLUSIONS: Policymakers and managers should further encourage hospital performance evaluations in order to stimulate wider competition aimed at assigning teaching status to those hospitals that are able to meet performance requirements. In addition, UH facilities could be integrated with other providers that are responsible for community, primary and outpatient services, thereby creating a joint accountability for more patient-centred and integrated care.

Keywords:  Evaluation; Hospital; Italy; Performance; University

Year:  2016        PMID: 27507233      PMCID: PMC4985844          DOI: 10.1136/bmjopen-2016-011426

Source DB:  PubMed          Journal:  BMJ Open        ISSN: 2044-6055            Impact factor:   2.692


Strengths and limitations of this study

- This study provides evidence about differences in performance between university hospitals and general hospitals that was lacking in Italy.
- The analysis shows new results about hospital performance that can contribute to the debate on this topic.
- For the first time, a non-parametric approach of analysis was applied to this topic in the Italian context.
- The study is limited to the Italian healthcare system and its organisational structure.
- There could be other performance indicators as valuable and informative as those included in the analysis.

Introduction

University hospitals (UHs) can be considered complex organisations given that their mission includes three different objectives: patient care, education and research.1 UHs combine all the features of Mintzberg's Professional Bureaucracy2 embedded within both the healthcare organisation and the university context. In addition, UHs are usually referral centres for the most complex care within a hub-and-spoke hospital network.3 Given the threefold mission of these institutions and the specific role they play in the healthcare system, should UHs be considered a ‘cluster’ with specific performance patterns? This study investigates whether UHs behave homogeneously in their performance results, with substantial differences with respect to general hospitals (GHs). Evidence on this topic could provide important information for policymakers and managers in defining specific policies and actions to improve the quality of care within the regional network of hospitals, where UHs play a specific and strategic role, and to pursue their specific mission. In particular, in Italy as in other countries, UHs have the strategic role of training the doctors of the future. Therefore, since health professionals are the most important assets of healthcare organisations, policymakers should ensure that clinicians are trained and supported by institutions that meet appropriate requirements in terms of quality of care and research productivity. The analysis was carried out in Italy.

Background

Teaching status has already been investigated from several perspectives, by studying whether it affects the results of UHs compared with other hospitals in terms of outcomes, quality of care, productivity, costs, etc. First, reviews on outcomes, quality of care and prevention of adverse events reached mixed conclusions and highlighted the need for evidence on differences between UHs and GHs.4 5 Some reviews underlined better overall results for UHs,6 7 whereas a systematic review highlighted no differences between UH and GH outcomes.8 Second, studies on productivity and efficiency have usually applied Data Envelopment Analysis (DEA) and frequently highlighted better performance of GHs with respect to UHs.9 10 Indeed, training residents and carrying out research activities besides patient care, together with the role of referral centre for complex care, have often been identified as elements that can increase costs.11–13 This frequently drives additional financial resources to UHs (eg, an increased markup in the reimbursement system for UH discharges).6 Research on this topic presents several differences in terms of data sources, measurement processes and methodology for data analysis.4 This could raise potential issues regarding external validity and result generalisability.6–9 Examples of these differences are:

- The data sources: for example, medical records or administrative data;
- The definition of UHs and their ownership (public, private, for-profit, non-profit): for example, some studies consider only major UHs, whereas others include all hospitals with a residency programme;
- The indicators included in the analysis (usually outcomes, quality of care or efficiency) and the different calculation criteria and risk-adjustment procedures used for the same measures (mortality rates, process measures, etc);
- The statistical methods used to compare hospitals (parametric and non-parametric approaches and tests such as DEA, analysis of variance (ANOVA), Kruskal-Wallis, Mann-Whitney, etc).

These differences may partially explain why research looking at different performance or outcomes in UHs, or controlling for a potential effect of teaching status, has not led to straightforward results. Finally, results may also be associated with the specific geographical context. For instance, in one of the most recent systematic reviews on this topic, more than three-quarters of the included studies were conducted in the USA.8 However, each specific geographical and health system context may play an important role in explaining results. With reference to Italy, detailed studies on this topic are also lacking. Scholars have focused on governance issues or research evaluations (see, for instance, refs 14–17). There have been no systematic comparisons of performance between the two groups of hospitals and related research.

The Italian context

The national healthcare system in Italy follows a Beveridge Model by providing universal coverage through general taxation. Regional governments are responsible for organising and delivering health services and are accountable for performance. The national government monitors the pursuit of universal coverage, in particular with respect to a package of essential services (the nationally defined basic health benefit package—Livelli Essenziali di Assistenza). The national government allocates financial resources to the regional governments on an adjusted capitation basis. Regions then reallocate resources to Local Health Authorities (LHAs) through a regionally adjusted capitation formula. In Italy, hospital care is delivered by public GHs directly managed by the LHAs, private or public autonomous hospitals (AHs), private or public UHs and research hospitals (RHs). AHs, UHs and RHs are autonomous organisations with respect to the LHAs, which manage healthcare delivery in their own geographical areas. UHs can be classified considering ownership and different institutional and organisational settings.18 In Italy, teaching status can be attributed to hospitals owned by private university medical schools, hospitals owned by public university medical schools and hospitals jointly owned by both public university medical schools and the regional administration. In this last case, the chief executive officer (CEO) is jointly appointed by the two institutions. Following the national laws (D.Lgs 502/92 and D.Lgs 517/99), these hospitals are identified as teaching facilities by the Ministry of Health, the Ministry of Education and the Regional Administrations. Regardless of the ownership and the organisational settings, health professionals employed by universities, besides teaching and carrying out research, also provide patient care and receive an additional 30% remuneration. These costs are borne not by the universities but by the hospital administrations.
Considering patient care activity, since UHs are autonomous authorities, they are not financed through capitation-based funding as the LHAs are, but through different financing mechanisms depending on regional strategies. At the national level, UH inpatient services delivered to residents of other regions are reimbursed with a diagnosis related group (DRG) tariff increase of 7%. At the regional level, UHs can be financed through a pay for service system based on DRG tariffs (eg, the Lombardy region) or through a budget-cost control system. In the first case, UH DRG tariffs are increased by a certain percentage (usually around 3%), depending on the case-mix delivered and the regional strategy. In the second case, as in other countries,19 regions usually assign additional resources to UHs through specific funds linked to education, research and complex care delivery (eg, in Tuscany, these funds accounted for 30% of the UH overall budget). Therefore, UHs receive an additional amount of resources with respect to GHs, but this varies depending on regional policies.14 Italian UHs have on average a much higher number of hospital beds than GHs and are referral centres for highly complex and highly specialised care, such as neurosurgery, cardiac surgery, radiotherapy, the most critical intensive care, highly complex paediatric surgery, etc. Evidence from Italy comparing UH performance with that of GHs may provide valuable information for both healthcare policymakers and managers, at both regional and national levels and not only in Italy. Indeed, if UHs behave as a specific ‘cluster’, new policies and focused actions could be defined to support the specific role of these authorities within the hospital network in the regional and national contexts. Evidence of similar patterns of performance between these two groups of hospitals may instead highlight the need to look for other sources of variation.
Therefore, features other than the teaching and research status may be relevant to inform policies on hospital governance, financing and network organisation, considering the crucial role of UHs in training the future clinicians of the healthcare system. The aim of this paper is thus to investigate how UHs perform in comparison with GHs.

Methods

Data sources and hospital selection

The data used in this analysis were selected from two performance evaluation systems based on the same hospital discharge administrative database:

- The Inter-Regional Performance Evaluation System (IRPES), developed by the Management and Health Laboratory of the Scuola Superiore Sant'Anna of Pisa (MeS-Lab), where the authors of this paper are researchers. This system provides a multidimensional evaluation of performance including efficiency, appropriateness, integration and quality of care. It was first implemented by the regional government in Tuscany20 21 and was then adopted, on a voluntary basis, by the majority of other Italian regions.22 23 The evaluation process measures, through benchmarking and with specific risk adjustment processes, the results achieved every year by all the Health Authorities (the LHAs, the UHs, the RHs and the AHs) located in these regions. Results are publicly reported.24
- The Italian National Outcome Evaluation Programme (NOEP), developed by the National Agency for Healthcare Services on behalf of the Ministry of Health. This system measures outcomes nationwide,25 that is, for each Italian hospital. On the basis of rigorous risk adjustment processes,26 27 these measures serve as assessment tools to support clinical and organisational audit programmes aimed at improving outcomes and equity in the National Health Service.

Data refer to the years 2012 and 2013, apart from two economic indicators related to balance sheets, which are available only for 2011 and 2012. Two groups of hospitals were considered in the analysis. The groups differed in particular in teaching status and in organisational autonomy with respect to the LHAs. They also differed in the average number of hospital discharges (in 2012, 32 632 for UHs and ∼17 606 for GHs) and the average DRG weight (in 2012, 1.3 for UHs and 1.06 for GHs). The whole study included all the 15 UHs and 73 LHAs of the 10 IRPES regions.

Performance indicators

For the purposes of this study, 27 performance indicators were selected, 10 from IRPES (table 1) and 17 from NOEP (box 1).
Table 1

IRPES indicators

IRPES indicators | Rationale

Efficiency and appropriateness
Relative stay index (case-mix adjusted differential average LOS days) | Measure of the average difference from the standard LOS for admitted patients, with adjustments for case-mix.
Percentage of medical discharges with LOS over the threshold for patients aged 65 and over | Measure of hospital compliance with the Italian Ministry of Health standards for the LOS for medical inpatient activity for elderly patients. This measure is a proxy for the effective implementation of integrated pathways between home, community-based and hospital care for elderly patients.
Percentage of ED green-coded patients visited within 1 hour | Measure of timely emergency care for ED patients whose treatment may be delayed without risk.
Percentage of ED patients referred for hospital admission with ED LOS≤8 hours | Measure of overall timely emergency care.
Percentage of medical inpatient discharges within 2 days (National Healthcare Agreement 2010) | Measure of hospital compliance in avoiding short ordinary hospitalisations for patients who could be treated in outpatient clinics or in other care settings, as requested by the Italian Ministry of Health standards in the National Healthcare Agreement of 2010.
Percentage of day case surgery for specific procedures (National Healthcare Agreement 2010) | Measure of hospital compliance with Italian Ministry of Health standards for delivering specific, not complex, surgical procedures in day case surgery or in outpatient clinics rather than through ordinary hospitalisations.

Patient satisfaction
Percentage of patients leaving ED against/without medical advice | Proxy of patient satisfaction with ED services and waiting times.
Percentage of hospitalised patients leaving against medical advice | Proxy of patient satisfaction for the inpatient activity.

Economic and financial evaluation
Average cost per weighted case | Measure of the ratio of a hospital's acute inpatient care expenses to the number of acute inpatient cases weighted for DRG complexity. The weighting enhances comparability across hospitals. The measure includes the percentage cost of hospital university staff financed by the regional administration for their patient care activity, thereby taking into account the overall hospital staff costs.
Average expenditure per diagnostic imaging weighted for tariff | Measure of efficiency that compares costs and the value of the delivered diagnostic activity (sum of ambulatory tariffs).

DRG, diagnosis related group; ED, emergency department; IRPES, Inter-Regional Performance Evaluation System; LOS, length of stay.
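As a worked illustration of the first economic indicator: the average cost per weighted case divides a hospital's acute inpatient expenses by its case-mix-weighted case count rather than its raw discharge count, so a hospital treating more complex (higher DRG weight) cases is not penalised. The function and figures below are hypothetical, purely to show the arithmetic; they are not taken from the IRPES data.

```python
def cost_per_weighted_case(total_expenses, drg_weights):
    """Acute inpatient expenses divided by the DRG-weighted case count."""
    weighted_cases = sum(drg_weights)  # each case counts as its DRG weight
    return total_expenses / weighted_cases

# Hypothetical hospital: five discharges of varying complexity.
expenses = 10_500_000.0
weights = [1.0, 1.3, 0.8, 2.5, 1.06]  # invented DRG weights
print(round(cost_per_weighted_case(expenses, weights)))
```

Two hospitals with identical budgets but different case-mixes thus become comparable: the one whose weights sum higher reports a lower, fairer cost per weighted case.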

Box 1

Outcome: measures of 30-day mortality or readmissions for relevant inpatient activity

- AMI: 30-day mortality
- AMI without PTCA: 30-day mortality
- AMI with PTCA within 2 days: 30-day mortality
- AMI with PTCA after 2 days: 30-day mortality
- AMI: 1-year mortality
- AMI: MACCE after 1 year
- Isolated aortocoronary bypass: 30-day mortality
- Valvuloplasty or heart valve replacement: 30-day mortality
- Congestive heart failure: 30-day mortality
- Ischaemic stroke: 30-day mortality
- Ischaemic stroke: 30-day readmission
- Chronic obstructive pulmonary disease (COPD) exacerbation: 30-day mortality
- COPD: 30-day readmission
- Proportion of caesarean section
- Femur fracture: 30-day mortality
- Femur fracture: percentage of operations carried out within 2 days
- Colon cancer surgery: 30-day mortality

AMI, acute myocardial infarction; MACCE, major adverse cardiac and cerebrovascular event; PTCA, percutaneous transluminal coronary angioplasty.

Eight IRPES indicators regard the efficiency and appropriateness and patient satisfaction dimensions; two indicators regard economic and financial evaluation. This selection was shared by the group of IRPES regional representatives.
This group is in charge of systematically reviewing and discussing the measures included in the IRPES as relevant proxies for measuring performance in a multidimensional perspective in all the different settings of care.22 For both sources of the selected indicators, the time coverage and the number of providers needed to perform the statistical test were guaranteed, thus ensuring the consistency of the comparative analysis between the two groups of hospitals in this single-country study.28 29 The number of observations for the NOEP indicators may differ because not all the hospitals included in the analysis provide all the healthcare services linked to the included measures. However, the selection of these measures took into account the services usually provided by both LHA-GHs and UHs. The analysis for the IRPES indicators compared the 15 UHs to the 73 LHAs. On the other hand, the analysis for the NOEP indicators was carried out at the hospital level, thus comparing the (at most) 19 facilities of the 15 UHs to the individual (at most) 187 GHs led by the 73 LHAs (see online supplementary appendix I for the complete list of hospitals considered and the number of observations included for each indicator).

Statistical methods

The study was conducted in two stages and by combining two statistical techniques. Data were processed using Stata software, V.12. In stage 1, a non-parametric Mann-Whitney U test was carried out to compare the performance of UHs and GHs on the selected set of indicators. This analysis determines whether UHs and GHs were drawn from the same target population. Previous studies have already applied this univariate analysis to illustrate differences between hospitals30 because of its appropriateness with small samples.31–35 For the purposes of this study, this test verified whether there were differences between UH and GH performance, or, in other words, whether UHs and GHs could be considered as two different clusters. In stage 2, we carried out a robust equal variance test to investigate differences in the amount of variability between UHs and GHs.36 This test is usually used to verify the assumption of homogeneity of variance across groups, meaning that the internal variability of one group of hospitals is not significantly different with respect to the other one. To be in line with the assumptions of the Mann-Whitney U test, we used an extension of Levene's test as suggested by Brown and Forsythe.37 We applied the test only for those indicators in which the Mann-Whitney U test did not show significant differences between UH and GH performances. Indeed, in those cases where the performance between the two groups did not show significant differences, we tested whether there were specific patterns in terms of variability.
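The original analysis was run in Stata, V.12. As a language-agnostic sketch of the two statistics involved, the pure-Python functions below compute the Mann-Whitney U statistic (stage 1) and the Brown-Forsythe variant of Levene's test (stage 2), which runs a one-way ANOVA on absolute deviations from group medians. The hospital values at the bottom are invented for illustration, and the functions return test statistics only, without the p-values that a full implementation (or a package such as scipy.stats) would also provide.

```python
from statistics import median

def rank_data(values):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Stage 1: Mann-Whitney U statistic (smaller of U1 and U2)."""
    ranks = rank_data(list(x) + list(y))
    r1 = sum(ranks[:len(x)])                # rank sum of the first group
    u1 = r1 - len(x) * (len(x) + 1) / 2
    u2 = len(x) * len(y) - u1
    return min(u1, u2)

def brown_forsythe_f(*groups):
    """Stage 2: Brown-Forsythe statistic, ie, a one-way ANOVA F computed
    on absolute deviations from each group's median."""
    z = [[abs(v - median(g)) for v in g] for g in groups]
    n = sum(len(g) for g in z)
    k = len(z)
    grand_mean = sum(sum(g) for g in z) / n
    group_means = [sum(g) / len(g) for g in z]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(z, group_means))
    ss_within = sum(sum((v - m) ** 2 for v in g)
                    for g, m in zip(z, group_means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented 30-day mortality rates for two hypothetical hospital groups.
uh = [9.8, 8.1, 10.5, 9.0, 11.2]
gh = [8.8, 7.4, 12.1, 6.9, 10.0, 9.5]
print(mann_whitney_u(uh, gh))
print(round(brown_forsythe_f(uh, gh), 3))
```

In practice the U and F statistics would then be compared against their reference distributions to obtain the p-values reported in tables 2–5.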

Results

The Mann-Whitney U test on IRPES indicators showed that, for four measures in the ‘Efficiency and appropriateness’ and ‘Economic and financial evaluation’ dimensions, there were differences in performance between UHs and GHs. The test, in fact, was significant in both 2012 and 2013 for the ‘Percentage of emergency department (ED) green-coded patients visited within one hour’, the ‘Percentage of medical inpatient discharges within two days’ and the ‘Percentage of day case surgery for specific procedures (National Healthcare Agreement 2010)’. The test was also significant in 2011 and 2012 for the ‘Average expenditure for diagnostic imaging weighted for tariff’. For these indicators, GHs seemed to perform better than UHs. On the other hand, for the indicators ‘Relative stay index’, ‘Percentage of medical discharges with length of stay (LOS) over the threshold for patients aged 65 and over’ and ‘Percentage of ED patients referred for hospital admission with ED LOS≤8 hours’, the Mann-Whitney U test showed no significant differences in either 2012 or 2013. No significant differences were found for the patient satisfaction proxies ‘Percentage of patients leaving ED against/without medical advice’ and ‘Percentage of hospitalised patients leaving against medical advice’. Moreover, in 2013, UHs accounted for fewer patients who were discharged against medical advice, whereas in 2012 the GHs achieved better results. The test was also not significant for the ‘Average cost per weighted case’, and this occurred also after deleting outliers. Table 2 summarises the results of the test and shows the mean and median values of the two groups of hospitals for each indicator.
Table 2

Mann-Whitney U test for IRPES indicators

Mann-Whitney U test—IRPES indicators (for each year: Median UH, Median GH, Mean UH, Mean GH, best performing group by median)

Efficiency and appropriateness
Relative stay index (case-mix adjusted differential average LOS days) | 2012: −0.2, −0.1, 0, −0.2, UH | 2013: 0, −0.3, −0.1, −0.3, GH
Percentage of medical discharges with LOS over the threshold for patients aged 65 and over | 2012: 4.8, 3.6, 4.6, 4, GH | 2013: 3.7, 3.5, 4.3, 3.8, GH
Percentage of ED green-coded patients visited within 1 hour | 2012: 73.1, 79.2, 72.7, 77.3, GH* | 2013: 68.4, 77.2, 67.2, 76.2, GH*
Percentage of ED patients referred for hospital admission with ED LOS≤8 hours | 2012: 98.8, 97.8, 93.9, 94.8, UH | 2013: 98.2, 97.5, 93.2, 94.5, UH
Percentage of medical inpatient discharges within 2 days (National Healthcare Agreement 2010) | 2012: 21.5, 14.6, 22.3, 14.9, GH* | 2013: 21.8, 14.1, 21.9, 14.4, GH*
Percentage of day case surgery for specific procedures (National Healthcare Agreement 2010) | 2012: 46.2, 58.8, 48, 58.9, GH* | 2013: 48.4, 59.1, 49, 59, GH*

Patient satisfaction
Percentage of patients leaving ED against/without medical advice | 2012: 3.2, 3.2, 3.6, 3.1, GH | 2013: 3.5, 3.2, 3.6, 3.4, GH
Percentage of hospitalised patients leaving against medical advice | 2012: 0.9, 0.8, 1, 1, GH | 2013: 0.7, 0.8, 0.9, 0.9, UH

Economic and financial evaluation (years 2011 and 2012)
Average cost per weighted case | 2011: 4.471, 4.317, 4.782, 4.398, GH | 2012: 4.484, 4.516, 4.745, 4.651, UH
Average expenditure per diagnostic imaging weighted for tariff | 2011: 1.4, 0.9, 1.8, 1.1, GH* | 2012: 1.4, 1, 1.6, 1.1, GH*

*p-value<0.05.

ED, emergency department; GH, general hospital; IRPES, Inter-Regional Performance Evaluation System; LOS, length of stay; UH, university hospital.

Regarding the test for the NOEP indicators, for all the tested measures, the Mann-Whitney U test was not significant except for two measures that showed mixed results in 2012 and 2013 (table 3) (box plots for IRPES and NOEP indicators with significant differences between UHs and GHs are shown in online supplementary appendix II).
Table 3

Mann-Whitney U test for NOEP risk-adjusted indicators

Mann-Whitney U test—NOEP risk-adjusted indicators (for each year: Median UH, Median GH, Mean UH, Mean GH, best performing group by median)

Outcome indicators
AMI: 30-day mortality | 2012: 9.8, 8.8, 10.1, 9.3, GH | 2013: 9.1, 7.6, 8.9, 8.1, GH
AMI without PTCA: 30-day mortality | 2012: 17.4, 15.5, 17.7, 16.5, GH | 2013: 16.8, 15.0, 17.5, 15.5, GH
AMI with PTCA within 2 days: 30-day mortality | 2012: 4.8, 4.1, 4.6, 4.2, GH | 2013: 4.1, 3.7, 4.4, 3.7, GH
AMI with PTCA after 2 days: 30-day mortality | 2012: 2.7, 2.4, 3.2, 2.6, GH | 2013: 2.6, 2.5, 2.9, 2.8, GH
AMI: 1-year mortality | 2012: 10.4, 11.1, 10.6, 11.5, UH | 2013: 9.8, 10.6, 10.2, 10.8, UH
AMI: MACCE after 1 year | 2012: 24, 24.8, 24.5, 25.2, UH | 2013: 22.4, 23.1, 23.1, 23.5, UH
Isolated aortocoronary bypass: 30-day mortality | 2012: 1.8, 1.9, 2.2, 2.0, UH | 2013: 2, 2.3, 2.4, 2.1, UH
Valvuloplasty or heart valve replacement: 30-day mortality | 2012: 2.6, 3.7, 2.9, 3.5, UH | 2013: 2.3, 3.0, 2.8, 3.2, UH
Congestive heart failure: 30-day mortality | 2012: 8.4, 9.8, 9.3, 10.8, UH | 2013: 8.8, 10.7, 8.7, 11.1, UH*
Ischaemic stroke: 30-day mortality | 2012: 9.4, 10.1, 8.8, 10.5, UH | 2013: 9.2, 9.6, 9.3, 10.5, UH
Ischaemic stroke: 30-day readmission | 2012: 11.1, 9.4, 10.5, 10.3, GH | 2013: 6.7, 6.7, 7.2, 7.2, UH
COPD exacerbation: 30-day mortality | 2012: 7.2, 8.7, 7.6, 8.9, UH | 2013: 7.2, 8.2, 7.7, 8.8, UH
COPD: 30-day readmission | 2012: 14.2, 15.6, 15.0, 15.4, UH | 2013: 14.2, 15.4, 14.2, 15.4, UH
Proportion of caesarean section | 2012: 19.9, 18.1, 23.6, 18.8, GH | 2013: 20.2, 18.5, 22.5, 19.3, GH
Femur fracture: 30-day mortality | 2012: 4.2, 4.8, 4.7, 5.1, UH | 2013: 4.4, 4.7, 4.7, 4.8, UH
Femur fracture: percentage of operations carried out within 2 days | 2012: 48.4, 54.4, 41.5, 53.2, GH* | 2013: 50.6, 60.2, 54.2, 59.4, GH
Colon cancer surgery: 30-day mortality | 2012: 3.4, 3.9, 4.4, 4.3, UH | 2013: 3.0, 4.2, 3.7, 4.6, UH

*p-value<0.05.

AMI, acute myocardial infarction; COPD, chronic obstructive pulmonary disease; GH, general hospital; MACCE, major adverse cardiac and cerebrovascular event; NOEP, National Outcome Evaluation Programme; PTCA, percutaneous transluminal coronary angioplasty; UH, university hospital.

For the ‘Congestive heart failure: 30-day mortality’, the test showed no statistical differences between UHs and GHs in 2012. However, a significantly better performance for UHs was found in 2013. Similarly, in the case of the indicator ‘Femur fracture: percentage of operations carried out within two days’, the Mann-Whitney U test showed significant differences between UHs and GHs in 2012, but not in 2013, with GHs having the best median performance. In order to investigate different variations between the two groups of hospitals, the robust equal variance test37 was carried out for the set of 23 indicators (6 IRPES indicators and 17 NOEP indicators) for which the Mann-Whitney U test had not shown significant differences. Regarding IRPES indicators, the test was not significant in either year included in the analysis (table 4). Whether UHs or GHs showed the higher SD depended on the measure considered.
Table 4

Robust equal variance test for IRPES indicators

Robust equal variance test—IRPES indicators (for each year: SD UH, SD GH, W50—median, Pr>F; followed by the group with higher variability in each year)

Efficiency and appropriateness
Relative stay index (case-mix adjusted differential average LOS days) | 2012: 0.9, 1.4, 0.2, 0.6 | 2013: 0.8, 1.2, 0.7, 0.4 | higher variability: GH (2012), GH (2013)
Percentage of medical discharges with LOS over the threshold for patients aged 65 and over | 2012: 1.7, 2, 0.1, 0.8 | 2013: 1.7, 2.1, 0.7, 0.4 | higher variability: GH (2012), GH (2013)
Percentage of ED patients referred for hospital admission with ED LOS≤8 hours | 2012: 9, 6.7, 0.5, 0.5 | 2013: 9.7, 7.7, 0.3, 0.6 | higher variability: UH (2012), UH (2013)

Patient satisfaction
Percentage of patients leaving ED against/without medical advice | 2012: 1.9, 1.8, 0.1, 0.8 | 2013: 2, 2.1, 0, 1 | higher variability: UH (2012), GH (2013)
Percentage of hospitalised patients leaving against medical advice | 2012: 0.7, 0.7, 0.1, 0.8 | 2013: 0.6, 0.6, 0, 1 | higher variability: GH (2012), GH (2013)

Economic and financial evaluation (years 2011 and 2012)
Average cost per weighted case | 2011: 1068, 785, 1.1, 0.3 | 2012: 962, 850, 0.8, 0.4 | higher variability: UH (2011), UH (2012)
ED, emergency department; GH, general hospital; IRPES, Inter-Regional Performance Evaluation System; LOS, length of stay; UH, university hospital.

For the 2012 results of the NOEP indicators, the test was significant for four measures (table 5):
Table 5

Robust equal variance test for NOEP risk-adjusted indicators

Robust equal variance test—NOEP risk-adjusted indicators (for each year: SD UH, SD GH, W50—median, Pr>F; followed by the group with higher variability in each year; ‘not tested’ marks the year in which the Mann-Whitney U test was significant)

Outcome indicators
AMI: 30-day mortality | 2012: 3.3, 3.8, 0.8, 0.4 | 2013: 2.6, 3.7, 2.8, 0.1 | higher variability: GH (2012), GH (2013)
AMI without PTCA: 30-day mortality | 2012: 4.8, 6.2, 1.1, 0.3 | 2013: 4.4, 6.6, 2.3, 0.1 | higher variability: GH (2012), GH (2013)
AMI with PTCA within 2 days: 30-day mortality | 2012: 1.4, 1.9, 1.4, 0.2 | 2013: 1.8, 2.1, 0.4, 0.5 | higher variability: GH (2012), GH (2013)
AMI with PTCA after 2 days: 30-day mortality | 2012: 1.6, 1.4, 0.5, 0.5 | 2013: 1.2, 1.4, 0.9, 0.3 | higher variability: UH (2012), GH (2013)
AMI: 1-year mortality | 2012: 1.9, 4.4, 5.6, 0.02* | 2013: 3.3, 3.7, 0.1, 0.7 | higher variability: GH* (2012), GH (2013)
AMI: MACCE after 1 year | 2012: 4.1, 5.3, 2.1, 0.2 | 2013: 3.2, 5.5, 4, 0.04* | higher variability: GH (2012), GH* (2013)
Isolated aortocoronary bypass: 30-day mortality | 2012: 1.4, 1.6, 0.0, 0.9 | 2013: 1.6, 1.4, 0.0, 0.9 | higher variability: GH (2012), UH (2013)
Valvuloplasty or heart valve replacement: 30-day mortality | 2012: 1.3, 0.5, 2.7, 0.1 | 2013: 1.2, 1.0, 0.2, 0.6 | higher variability: UH (2012), UH (2013)
Congestive heart failure: 30-day mortality | 2012: 3.3, 5.0, 1.8, 0.2 | 2013: not tested | higher variability: GH (2012)
Ischaemic stroke: 30-day mortality | 2012: 2.9, 4.5, 5.9, 0.02* | 2013: 4, 4.5, 0.5, 0.5 | higher variability: GH* (2012), GH (2013)
Ischaemic stroke: 30-day readmission | 2012: 3.6, 3.9, 0.0, 0.9 | 2013: 2.2, 3.0, 1.6, 0.2 | higher variability: GH (2012), GH (2013)
COPD exacerbation: 30-day mortality | 2012: 2.3, 3.9, 3.7, 0.1 | 2013: 2.9, 4.1, 1.2, 0.3 | higher variability: GH (2012), GH (2013)
COPD: 30-day readmission | 2012: 2.4, 4.5, 5.9, 0.02* | 2013: 3.4, 4.2, 1.1, 0.3 | higher variability: GH* (2012), GH (2013)
Proportion of caesarean section | 2012: 9.1, 7.1, 1.3, 0.3 | 2013: 9.2, 7.2, 1, 0.3 | higher variability: UH (2012), UH (2013)
Femur fracture: 30-day mortality | 2012: 1.3, 2.2, 5.2, 0.02* | 2013: 2.1, 2.2, 0.6, 0.5 | higher variability: GH* (2012), GH (2013)
Femur fracture: percentage of operations carried out within 2 days | 2012: not tested | 2013: 16.7, 18.1, 0.8, 0.4 | higher variability: GH (2013)
Colon cancer surgery: 30-day mortality | 2012: 2.7, 2.3, 0, 0.9 | 2013: 1.7, 2.5, 2.5, 0.1 | higher variability: UH (2012), GH (2013)

*p-value<0.05.

AMI, acute myocardial infarction; COPD, chronic obstructive pulmonary disease; GH, general hospital; MACCE, major adverse cardiac and cerebrovascular event; NOEP, National Outcome Evaluation Programme; PTCA, percutaneous transluminal coronary angioplasty; UH, university hospital.

- ‘Acute myocardial infarction (AMI): 1-year mortality’ (p value=0.02)
- ‘Ischaemic stroke: 30-day mortality’ (p value=0.02)
- ‘Femur fracture: 30-day mortality’ (p value=0.02)
- ‘Chronic obstructive pulmonary disease (COPD): 30-day readmission’ (p value=0.02)

In 2013, the test was significant only for the indicator ‘AMI: major adverse cardiac and cerebrovascular event (MACCE) after 1 year’ (p value=0.04). For these measures, GHs showed a higher SD with respect to UHs. This was also the case for most of the other outcome measures included for 2012 and 2013, apart from the ‘Proportion of caesarean section’ and the ‘30-day mortality rate for valvuloplasty or heart valve replacement’.

Discussion

The overall analysis showed heterogeneous results when comparing the two groups of hospitals. Considering the IRPES indicators of appropriateness, we found a higher compliance of GHs with the Italian Ministry of Health standards on directing patients to the appropriate care settings for surgical treatments, as well as on avoiding short medical hospitalisations and giving preference to outpatient clinics or day cases. This may be due to the lower complexity of general LHA-led hospitals and to the related lower management complexity. Regarding efficiency, in 2013, GHs seemed to perform better than UHs, but these results are slightly different in 2012, thus leading to ambiguous conclusions. Therefore, the threefold mission and the greater organisational complexity of UHs seemed to lead to lower, but not significantly different, efficiency with respect to GHs. The more straightforward results in terms of ED waiting times may be due to the greater pressure on UH EDs, which are usually located in city centres. Although the differences between GHs and UHs were never significant, in 2012 GHs accounted for higher patient satisfaction; these results changed in 2013. However, previous research focused only on patient experience with hospital medical staff in Tuscany showed higher satisfaction among patients discharged by UHs than among patients hospitalised in GHs (see, among others, ref. 38). In addition, the test on variability for IRPES indicators showed homogeneous patterns of performance regardless of teaching status. In particular, UHs showed a larger variation in the average cost per weighted case, which measures efficiency by comparing the average costs of inpatient cases weighted for DRG complexity. This suggests that, as a group, UHs do not generally account for higher costs, contrary to what has been stated by other scholars;11–13 individually, UHs show highly heterogeneous results.
Hence, based on our analysis, the financial and economic sustainability of UHs could be related to each hospital's internal organisation or to other factors rather than to teaching status. Finally, for the tested IRPES indicators and for both years considered in the analysis, a 'cluster effect' linked to teaching status did not seem plausible. This is also confirmed by the analysis of the NOEP indicators, which suggested that UHs did not generally achieve better outcomes. These results contribute to the research on this topic by suggesting that there is no straightforward evidence of better outcomes associated with UHs. Interestingly, GHs performed better (although not significantly) on the indicators related to the waiting time for femur fracture surgery and to the recourse to caesarean sections. On most of the mortality and readmission indicators, UHs did perform better, but without a significant effect. Considering that UHs are referral centres treating higher volumes of patients, these better results could also be explained by their role in the hospital network, rather than by teaching status alone, as suggested in other studies.39 In addition, GHs showed generally higher variability than UHs, but without significant differences. This means that although UHs seem to be generally more concentrated around average values, the extreme GH results towards the maximum and minimum of the distribution do not affect the overall analysis results. In conclusion, straightforward evidence of better performance and less variability for UHs does not seem plausible for the NOEP indicators either. Summarising these results, from a multidimensional perspective, being in the UH rather than the GH group does not generally affect performance. Hence, the different institutional and organisational settings of the two groups do not seem to result in significant dissimilarities.
Instead, the variations in hospital performance could be linked to particular features of each individual hospital or its managerial approach. They may also be determined by the Regional Healthcare System, rather than by a specific cross-regional group affiliation. In Italy, there is evidence that hospital performance improvement may be affected by regional strategies combining different tools.22 This is the case for the Tuscany and Basilicata regions, which applied a combination of integrated governance tools and registered greater performance improvement in recent years than other regions. Indeed, with reference to Tuscany, the regional UHs generally achieve higher performance than the UHs of the other IRPES regions.23–25 40 Nevertheless, the impact of these regional strategies on UH performance needs to be investigated further.

As a preliminary study on this topic, this research has some limitations. First, the study context was the Italian healthcare system and its organisational structure. We believe, however, that contextual factors strongly influence the results; these factors therefore cannot be excluded when research is aimed at supporting decision-making processes. This study provides evidence to widen the debate on this relevant topic in Italy and in other countries aiming to link the attribution of teaching status to performance evaluation. Second, other indicators could be as valuable and informative as the measures included in the analysis. However, we included those that regional policymakers and healthcare managers in Italy share as valuable measures to assess and guide the system. Further studies will investigate the relevance of individual and regional factors in affecting UH and GH results from this multidimensional perspective.
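The stage-1 group comparison underlying these results (a non-parametric Mann-Whitney U test on each indicator, as described in the study design) can be sketched as follows. The values are synthetic, and the use of SciPy's `mannwhitneyu` is an assumption for illustration rather than the authors' actual implementation.

```python
# Illustrative sketch of the stage-1 comparison: a non-parametric
# Mann-Whitney U test contrasting UHs and GHs on a single indicator.
# Values are synthetic stand-ins for hospital-level indicator scores.
from scipy.stats import mannwhitneyu

uh = [78, 81, 75, 80, 77, 82]   # university hospitals
gh = [76, 79, 74, 83, 72, 85]   # general hospitals

stat, p = mannwhitneyu(uh, gh, alternative='two-sided')
print(f"U = {stat}, p = {p:.3f}")
# A large p value here means the two groups' distributions are not
# significantly different, mirroring the paper's finding that teaching
# status alone does not separate the groups on most indicators.
```

Repeating such a test over each of the 27 indicators would reproduce the structure of the stage-1 analysis, one comparison per indicator per year.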

Conclusions

The main finding of this study is that Italian UHs cannot straightforwardly be associated with better results in terms of appropriateness, efficiency, patient satisfaction, economic and financial evaluation, and outcomes. This preliminary evidence may nevertheless inform the debate on the future role of UHs and encourage further considerations with regard to the Italian healthcare system. First, if UHs wish to maintain their role as leading players in the hospital network and to remain the main actors in charge of training the clinicians of the future, hospital performance evaluations should be further encouraged in order to inform the attribution of teaching status based on performance results. This could stimulate wider competition between Italian hospitals, aimed at assigning teaching status to those hospitals that achieve the best performance in specific care paths. In this respect, medical schools should base their teaching activities, for both undergraduate and resident students, in the hospitals that can ensure the best results and practices, since the future generation of clinicians has a crucial role in improving the quality of care. Second, considering the pressure towards more population-based healthcare systems, the organisational structure of Italian UHs as independent organisations could be revised towards a more integrated network with other facilities delivering community, primary and outpatient care. UH facilities could therefore be directly integrated with the other LHA-led providers, also creating joint accountability for more patient-centred care. In this perspective, recent national legislation in Italy (Disegno di Legge n. 2111-B/2016) has allowed the Special Administrative Regions (such as Friuli Venezia Giulia), as a pilot experience, to incorporate the UHs within the LHAs.
In conclusion, further studies on this topic will investigate whether the performance of Italian UHs may be affected by regional strategies and systems of governance, such as the use of a transparent performance evaluation system.