
The measurement and improvement of maternity service performance through inspection and rating: An observational study of maternity services in acute hospitals in England.

Thomas Allen, Kieran Walshe, Nathan Proudlove, Matt Sutton.

Abstract

OBJECTIVES: To determine whether the prior performance of maternity services, as measured by Royal College of Obstetricians and Gynaecologists performance indicators, is associated with ratings by the Care Quality Commission at subsequent inspection, and whether performance changes occur after inspection.
METHODS: We used hospital activity data from 176 maternity sites inspected between October 2013 and March 2016 to generate a set of performance indicators developed by the Royal College of Obstetricians and Gynaecologists. We linked these data to Care Quality Commission data on inspection dates and rating scores and used regression models, controlling for site level effects, to estimate the relationships between inspection ratings and performance indicators before and after inspections.
RESULTS: Coefficients measuring the relationship between indicator performance and subsequent inspection rating score had wide confidence intervals which crossed zero suggesting no statistically significant relationship prior to inspection. The same absence of statistical significance was observed for changes in indicator performance after inspection.
CONCLUSIONS: The use of routine data for performance monitoring is becoming increasingly important as regular inspection is costly and regulators require accurate and timely intelligence. However, we found no statistically significant relationships between inspection ratings and performance indicators before or after inspections in maternity services. This calls into question the validity and reliability of the performance indicators, the inspection process and ratings, or both, as measures of performance.
Copyright © 2020 The Authors. Published by Elsevier B.V. All rights reserved.

Keywords:  External inspection; Government regulation; Quality of health care; Statistical analysis

Year:  2020        PMID: 32919795      PMCID: PMC7584108          DOI: 10.1016/j.healthpol.2020.08.007

Source DB:  PubMed          Journal:  Health Policy        ISSN: 0168-8510            Impact factor:   2.980


Introduction

Healthcare regulation, often using some form of inspection of healthcare providers and the subsequent publication of reports and inspection ratings or outcomes, is widely used in many countries with the twin aims of quality improvement and quality assurance [1]. However, evidence on the effectiveness of regulation is quite limited, and inspections are resource-intensive interventions [[2], [3], [4], [5], [6], [7]]. Regulators often seek to use routine data from healthcare providers to produce performance indicators which can then be used to target or focus the use of inspection (for example, helping to decide which providers to inspect, when to undertake inspections, or what aspects of care should be assessed during inspection). The use of such performance indicators has been the subject of a growing body of international literature and some influential critiques [8].

Healthcare providers in England are regulated by the Care Quality Commission (CQC), which inspects and publishes ratings for all acute hospitals. Hospitals are inspected and rated across eight core services: Urgent and emergency services; Medical care; Surgery; Critical care; Maternity and gynaecology; Services for children and young people; End of life care; and Outpatient services and diagnostics. Each service receives a rating on a four-point scale (Outstanding, Good, Requires Improvement or Inadequate) on five domains (Effectiveness, Safety, Care, Responsiveness and Leadership) plus an Overall rating. Recent research has examined the CQC inspection process and its impact [6,[9], [10], [11], [12], [13], [14]].

Here we examine the relationship between inspection ratings and performance indicators in hospital maternity services, in which about 98 % of births in England occur [15]. These are a mixture of midwife-led units, which deal with low-risk pregnancies, and consultant-led obstetric units.
They have varying levels of associated neonatal care units to deal with differing levels of acuity [16], organised into geographical specialty networks for acuity matching and escalation [17]. The CQC reports at the level of maternity services at a site (usually a hospital), and this is the unit of analysis in this paper. An NHS hospital trust may run maternity services at more than one site.

Internationally, the quality of maternity services has long attracted the interest of researchers, clinicians and policymakers [18]. While serious adverse outcomes for mothers and babies are rare, they can have profound consequences. Maternity services account for 10 % of legal claims for medical negligence against the NHS in England, but 49 % of the cost of awards, or £686 million in 2016-17 [19].

Maternity services in England present an interesting setting in which to evaluate inspection since they are relatively free from mandated performance targets (unlike, for example, emergency departments with 4-hour waiting-time targets or elective surgery with maximum referral-to-treatment times). They are also relatively self-contained and relatively independent of the performance of other parts of the hospital. This may make it easier to observe and isolate the effects of the inspection and rating process on performance.

We used a set of performance indicators developed by the Royal College of Obstetricians and Gynaecologists (RCOG) [20], which are the most comprehensive clinical quality indicators available. RCOG developed these indicators as a way to measure patterns in maternity care and outcomes across providers and over time, aiming to stimulate discussion about how improvements can be made. They are generated from routinely collected hospital data and relate to the mode of delivery, the use of interventions and associated complications - in particular rates of inductions, caesareans, use of instruments, episiotomy and tears.
In their inspection framework for maternity services, the CQC state one of their key lines of enquiry as asking whether patients' care and treatment outcomes are monitored and how they compare with other services [21]. Furthermore, the RCOG indicator set is listed as a professional standard in which services should participate, and inspection reports routinely refer to performance on these indicators, such as a service's percentage of caesarean sections or inductions. Of course, the RCOG indicators form only a subset of the aspects of care on which the CQC focus, and inspections draw on a much wider and more detailed range of quantitative and qualitative data.

In this paper we investigate whether, in pursuit of a data-driven approach to regulation [22], the CQC could use RCOG performance indicators to supplement inspection and rating. We test three specific hypotheses: H1, that there was a correlation between prior RCOG indicator performance and CQC inspection ratings; H2, that there were changes in RCOG indicator performance in the post-inspection periods; and H3, that any post-inspection changes were greater for maternity sites with poorer inspection ratings (which had the most reason and scope to improve). We secured university research ethics approval in February 2016 (having determined, using the National Research Ethics Service web-based tool, that the study did not require NHS Research Ethics approval).

Materials and methods

Data

We used three sources of data: CQC inspection and ratings data [23], Hospital Episode Statistics (HES) [24] and maternity site acuity categories [25]. Hospital activity data were provided by NHS Digital under a bespoke data-sharing agreement; the CQC data and acuity categories were publicly available. The CQC data provided the dates and outcomes for all inspections from October 2013 to March 2016. For this study we used the inspection date and Overall maternity site rating score for all first inspections of hospital sites. This gave us a sample of 176 sites, run by 134 NHS hospital trusts.

HES is an administrative dataset capturing all hospital activity. We used the admitted patient care dataset covering April 2012 to September 2016. We generated the RCOG performance indicators from HES data following the RCOG specifications [20]. The RCOG report defined 18 indicators, of which we used 14 (see Table 1). Four indicators were excluded because we did not have the data required to generate them accurately: two related to prior caesarean and would have required HES data from before 2012, and two related to maternal readmission and would have required data beyond September 2016 to capture sufficient follow-up for sites inspected towards the end of the inspection cycle.
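As a simplified illustration of how an indicator of this kind is derived from delivery-level records, the sketch below computes a caesarean-section rate for one site. This is a hedged toy example only: the real indicators follow the detailed RCOG specifications applied to HES fields, and the record structure, field name `mode` and function name here are hypothetical.

```python
# Hypothetical sketch: percentage-of-deliveries indicator for one site.
# The real RCOG indicators are generated from HES admitted-patient-care
# records using the published RCOG specifications.

def caesarean_rate(deliveries):
    """Percentage of deliveries by caesarean section for one site.

    `deliveries` is a list of dicts with a hypothetical 'mode' field
    (e.g. 'spontaneous', 'instrumental', 'caesarean').
    """
    if not deliveries:
        return None  # a site with no usable records fails data-quality checks
    n_caesarean = sum(1 for d in deliveries if d["mode"] == "caesarean")
    return 100.0 * n_caesarean / len(deliveries)

# Example: 1 caesarean out of 4 deliveries.
records = [{"mode": "spontaneous"}, {"mode": "caesarean"},
           {"mode": "spontaneous"}, {"mode": "instrumental"}]
print(caesarean_rate(records))  # 25.0
```

Each of the 14 indicators used is a rate of this general form, computed per site (and, for the panel analyses, per site-month).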
Table 1

Summary statistics for RCOG maternity performance indicators.

Indicator (percentage of) | Negative CQC rating: Mean [SD] | Positive CQC rating: Mean [SD] | Difference [95 % CI]
Spontaneous unassisted vaginal deliveries | 51.86 [11.38] | 61.34 [20.23] | -9.48*** [-14.97, -3.99]
Induced labours | 25.05 [7.69] | 19.72 [12.55] | 5.34** [1.88, 8.79]
Induced labours in deliveries between 37 and 39 weeks of gestation | 25.21 [22.83] | 20.36 [17.85] | 4.84* [1.16, 8.55]
Induced labours in deliveries at 42 or more weeks of gestation | 67.18 [20.36] | 58.98 [29.42] | 8.20* [-0.35, 16.75]
Deliveries by caesarean section | 21.85 [5.20] | 16.94 [9.35] | 4.90*** [2.37, 7.43]
Induced labours resulting in emergency caesarean section | 22.58 [6.10] | 19.85 [14.76] | 2.73 [-1.18, 6.63]
Spontaneous labours resulting in emergency caesarean section | 10.45 [3.18] | 7.58 [5.00] | 2.86*** [1.48, 4.25]
Pre-labour caesarean sections | 9.54 [2.95] | 8.14 [5.23] | 1.40 [-0.02, 2.82]
Deliveries involving instruments | 13.51 [4.58] | 11.42 [6.53] | 2.10* [0.24, 3.94]
Episiotomies among vaginal deliveries | 19.71 [9.67] | 17.35 [9.10] | 2.36 [0.24, 3.94]
Episiotomies among instrumental deliveries | 79.55 [12.27] | 81.34 [14.15] | -1.79 [-6.12, 2.54]
Third and fourth degree perineal tears among vaginal deliveries | 3.04 [2.16] | 2.63 [1.73] | 0.41 [-0.19, 1.00]
Third and fourth degree perineal tears among unassisted vaginal deliveries | 2.25 [0.99] | 2.09 [1.57] | 0.16 [-0.28, 0.60]
Third and fourth degree perineal tears among assisted vaginal deliveries | 6.35 [3.57] | 6.60 [4.36] | -0.25 [-1.56, 1.06]

Note: indicator values are means over the pre-inspection period (April 2012 to September 2013). Negatively rated sites received a CQC rating of Requires Improvement or Inadequate; positively rated sites received a rating of Good or Outstanding. Differences tested using t-tests: * p < 0.05, ** p < 0.01, *** p < 0.001.

To adjust for patient severity and risk, we adopted the five-level acuity categorisation used in an analysis of perinatal mortality [25]. The highest-acuity sites are those with a NICU (Neonatal Intensive Care Unit, the highest-acuity type of neonatal unit) plus neonatal surgical provision; then sites with NICUs but without neonatal surgery; then sites without NICUs, split into three size bands: those with over 4,000 births a year, those with 2,000-3,999 and those with fewer than 2,000.

We linked the data from these sources to form a single dataset combining inspection dates and ratings with maternity site performance indicators and acuity categories. Actual sample sizes ranged from 142 to 165 sites since some sites had indicators which failed RCOG data-quality requirements [20].

We note that the RCOG indicators are mostly not straightforwardly directional (in the sense that a higher or lower value can be definitively regarded as better or worse). For example, instrumental delivery is often a necessary intervention but is associated with a risk of harm, and there has been concern about increased rates of instrumental delivery [26,27]. Similarly, concern has been expressed about the rise in the caesarean-section rate in the UK [28]. CQC inspection reports sometimes praise initiatives to reduce the use of these interventions [29].
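The group comparisons reported in Table 1 rest on t-tests of the difference in indicator means between negatively and positively rated sites. A minimal sketch is below; the paper does not state which t-test variant was used, so Welch's unequal-variance form is shown as one common choice, and the toy data are invented.

```python
import math

def welch_t(group_a, group_b):
    """Welch's t statistic and degrees of freedom for a difference in means."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    se2 = va / na + vb / nb                # squared standard error of the gap
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation to the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Toy indicator values for two rating groups (invented numbers).
t, df = welch_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(t, 3), round(df, 2))  # -1.549 2.94
```

The p-value is then obtained from the t distribution with `df` degrees of freedom, as any statistics package will do.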

Analysis

Across all the maternity site inspections included in our analyses, only 10 were rated Inadequate and seven were rated Outstanding (see Fig. 1). These two rating categories accounted for less than 10 % of the inspected sites. Because of these very low numbers in the highest and lowest categories, we created a binary variable which combined the two higher and the two lower rating scores. We describe this binary rating as being either positive (ratings of Good or Outstanding) or negative (ratings of Requires Improvement or Inadequate).
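The collapsing of the four-point CQC scale into this binary variable can be sketched directly; the function name is illustrative.

```python
# Collapse the four-point CQC scale into the binary rating used in the
# analysis: Good/Outstanding -> positive, Requires Improvement/Inadequate
# -> negative.

POSITIVE = {"Outstanding", "Good"}
NEGATIVE = {"Requires Improvement", "Inadequate"}

def binary_rating(cqc_rating):
    if cqc_rating in POSITIVE:
        return "positive"
    if cqc_rating in NEGATIVE:
        return "negative"
    raise ValueError(f"unknown CQC rating: {cqc_rating!r}")

print(binary_rating("Good"))                  # positive
print(binary_rating("Requires Improvement"))  # negative
```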
Fig. 1

Number of inspections each month by rating score.

H1 investigated whether observed differences in maternity performance indicators prior to inspection were correlated with the subsequent rating score. This reveals whether maternity sites with a positive or negative CQC rating also differed in their performance indicators. The mean performance on each indicator for every site was calculated from data covering April 2012 to September 2013, before inspections had begun. We used cross-sectional regressions to model the associations between indicator performance and inspection rating while controlling for site acuity. Observations were at site level and regressions were therefore weighted by the mean number of deliveries per site per month to account for site volume.

H2 considered whether maternity sites responded to an inspection by changing how they performed on the RCOG indicators after inspection. H3 considered whether any observed changes in performance differed between sites with positive or negative ratings. The data were aggregated to the site-month level, for 54 months from April 2012 to September 2016. We regressed performance on each RCOG indicator on binary variables indicating the six-month post-inspection period for sites with either negative or positive ratings. The six-month period covered different calendar months depending on when the site was inspected. The least-squares dummy variable regression models included indicators for maternity site and month, to adjust for unobserved site characteristics and seasonal variation respectively. Such models allowed the use of time-varying weights, and observations were weighted by the number of deliveries at each site in each month. All models were estimated using Stata Version 14; the online supplementary material presents the regression equations.
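The weighted regressions above were estimated in Stata; as a language-agnostic sketch of the underlying computation, the snippet below solves the weighted least-squares normal equations for a toy cross-sectional model of an indicator on a positive-rating dummy, weighted by delivery volume. This is illustrative only: the paper's models also adjust for site acuity (and, for the panel models, site and month dummies) and use robust standard errors, and all data here are invented.

```python
# Weighted least squares via the normal equations (X'WX) b = X'Wy,
# solved with Gaussian elimination. Pure-Python sketch of the kind of
# weighted cross-sectional regression described in the text.

def wls(X, y, w):
    """Return WLS coefficients for design matrix X, outcome y, weights w."""
    n, k = len(y), len(X[0])
    # Build X'WX and X'Wy.
    A = [[sum(w[i] * X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]
    b = [sum(w[i] * X[i][p] * y[i] for i in range(n)) for p in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Toy example: intercept + positive-rating dummy, weighted by deliveries.
X = [[1, 0], [1, 0], [1, 1], [1, 1]]   # second column: 1 = positive rating
y = [50.0, 54.0, 60.0, 62.0]           # indicator values (%) per site
w = [100, 100, 100, 100]               # mean monthly deliveries (weights)
intercept, rating_coef = wls(X, y, w)
print(intercept, rating_coef)  # 52.0 9.0
```

With a dummy regressor and equal weights, the coefficient is simply the gap between the two group means (61 - 52 = 9 percentage points here); unequal weights pull each group mean towards its higher-volume sites.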
To test whether changes occurred outside the six-month post-inspection period, supplementary analyses examined whether indicator performance changed in anticipation of an inspection (one month prior to inspection) and whether changes occurred in the longer term (more than six months post-inspection). The within-site variation in indicator values was also compared before and after inspection. All analyses were repeated using the original four-category rating score.

Results

Fig. 1 shows the timing of each inspection and the original four-category rating score. The majority of sites received a rating of either Requires Improvement or Good. Summary statistics for the indicators are shown in Table 1, split by sites with negative or positive ratings and reporting the difference in means and the statistical significance of these differences. Indicator values for sites with negative ratings differ from those with positive ratings. The largest difference was for spontaneous, unassisted vaginal deliveries: a 9.48 [95 % CI -14.97, -3.99] percentage point difference between sites with a positive rating and those with a negative rating. This difference was statistically significant, as were the differences for six other indicators. These differences suggest that sites with negative ratings perform fewer spontaneous, unassisted vaginal deliveries and more inductions, caesarean sections and instrumental deliveries. The standard deviations are large compared with the differences in means between the groups, suggesting substantial variation not explained by ratings alone.

Table 2 summarises the results of the cross-sectional models for each of the 14 RCOG indicators regressed on the subsequent binary rating score (testing H1). In each regression the reference category was a negative rating, and each regression adjusted for maternity site acuity and volume. As an example of interpretation, the first model implies that, prior to inspection, spontaneous unassisted vaginal delivery rates were on average 2.11 [95 % CI 0.13, 4.08] percentage points higher in sites subsequently awarded positive ratings. One additional indicator (deliveries by caesarean section), out of 14, was statistically significant at the 5 % level, and one (spontaneous labours resulting in emergency caesarean section) was statistically significant at the 1 % level.
The results suggest more unassisted deliveries and fewer induced labours and caesarean sections in sites with a positive rating, which is the expected direction for these indicators. Findings for episiotomies and tears are not in the expected direction and confidence intervals were wide for most indicators.
Table 2

Adjusted cross-section models of mean maternity site performance between April 2012 and September 2013 (pre-inspection) regressed on subsequent rating score.

Indicator (percentage of) | Difference between positive and negative rating [95 % CI] | Number of sites
Spontaneous unassisted vaginal deliveries | 2.11* [0.13, 4.08] | 165
Induced labours | -1.72 [-4.10, 0.67] | 165
Induced labours in deliveries between 37 and 39 weeks of gestation | -0.97 [-3.70, 1.77] | 159
Induced labours in deliveries at 42 or more weeks of gestation | -2.33 [-8.83, 4.16] | 150
Deliveries by caesarean section | -1.43* [-2.57, -0.30] | 165
Induced labours resulting in emergency caesarean section | -0.75 [-4.34, 2.84] | 150
Spontaneous labours resulting in emergency caesarean section | -1.38** [-2.36, -0.40] | 165
Pre-labour caesarean sections | 0.23 [-0.72, 1.18] | 165
Deliveries involving instruments | 0.40 [-0.82, 1.61] | 165
Episiotomies among vaginal deliveries | -0.01 [-2.49, 2.48] | 165
Episiotomies among instrumental deliveries | 0.90 [-1.46, 3.25] | 154
Third and fourth degree perineal tears among vaginal deliveries | 0.11 [-0.22, 0.44] | 165
Third and fourth degree perineal tears among unassisted vaginal deliveries | 0.36 [-0.01, 0.74] | 164
Third and fourth degree perineal tears among assisted vaginal deliveries | 0.11 [-0.75, 0.97] | 154

Note: regressions were adjusted for patient acuity at site level and weighted by the mean number of deliveries per site per month. Coefficients show the correlation relative to the reference category: negative rating (Inadequate or Requires Improvement). 95 % confidence intervals in brackets; robust standard errors were used to calculate confidence intervals. * p < 0.05, ** p < 0.01, *** p < 0.001.

Table 3 summarises the results from the 14 fixed-effects models examining RCOG indicator performance in the six months following an inspection. Changes in performance over this period are shown separately for positive and negative rating scores (testing H2 and H3). Again drawing on spontaneous unassisted vaginal deliveries as an example of interpretation: in the six-month period following an inspection, sites with a positive rating decreased these deliveries by 0.53 [95 % CI -1.58, 0.51] percentage points whereas sites with a negative rating increased them by 1.15 [95 % CI -0.26, 2.56]; neither change is statistically significant. Only one result was statistically significant (third and fourth degree perineal tears among assisted vaginal deliveries for sites with a negative rating), suggesting a 0.83 [95 % CI 0.01, 1.66] percentage point increase post-inspection.

Overall, and setting statistical significance aside, the observed changes in indicator performance are larger in sites with a negative rating than in those with a positive rating, consistent with them having greater room for improvement. These observed changes are not unanimously in the expected direction. Following an inspection, sites with a negative rating do go on to increase their rates of spontaneous unassisted vaginal deliveries and decrease their rates of induced labours (the first four results).
However, these sites also increase their rates of induced labours resulting in emergency caesarean section.
Table 3

Fixed effects models of changes in maternity site performance post inspection.

Indicator (percentage of) | Positive rating: change in six months post-inspection [95 % CI] | Negative rating: change in six months post-inspection [95 % CI] | Observations [number of sites]
Spontaneous, unassisted vaginal deliveries | -0.53 [-1.58, 0.51] | 1.15 [-0.26, 2.56] | 5666 [151]
Induced labours | 0.25 [-1.13, 1.63] | -1.80 [-4.11, 0.52] | 5664 [151]
Induced labours in deliveries between 37 and 39 weeks of gestation | -0.59 [-2.14, 0.97] | -1.64 [-3.97, 0.69] | 5334 [149]
Induced labours in deliveries at 42 or more weeks of gestation | 1.56 [-2.10, 5.22] | -2.26 [-7.62, 3.11] | 4569 [144]
Deliveries by caesarean section | 0.09 [-0.46, 0.64] | -0.17 [-0.80, 0.45] | 5666 [151]
Induced labours resulting in emergency caesarean section | -0.71 [-2.90, 1.49] | 3.47 [-0.98, 7.91] | 4921 [142]
Spontaneous labours resulting in emergency caesarean section | 0.53 [-0.18, 1.24] | -0.44 [-1.10, 0.23] | 5658 [151]
Pre-labour caesarean sections | -0.46 [-1.20, 0.28] | 0.31 [-0.41, 1.03] | 5666 [151]
Deliveries involving instruments | 0.22 [-0.36, 0.79] | -0.80 [-3.18, 1.57] | 5666 [151]
Episiotomies among vaginal deliveries | 0.85 [-1.42, 3.12] | -0.36 [-1.75, 1.02] | 5647 [151]
Episiotomies among instrumental deliveries | 0.24 [-1.16, 1.65] | 1.79 [-0.95, 4.52] | 5040 [143]
Third and fourth degree perineal tears among vaginal deliveries | -0.09 [-0.29, 0.11] | 0.19 [-0.05, 0.43] | 5662 [151]
Third and fourth degree perineal tears among unassisted vaginal deliveries | -0.07 [-0.30, 0.16] | 0.22 [-0.09, 0.53] | 5563 [151]
Third and fourth degree perineal tears among assisted vaginal deliveries | 0.01 [-0.57, 0.58] | 0.83* [0.01, 1.66] | 5040 [143]

Note: least-squares dummy variable models weighted by the number of deliveries per site per month. 53 month dummies were included (April 2012 to September 2016). Robust standard errors clustered by site were used to calculate confidence intervals. * p < 0.05, ** p < 0.01, *** p < 0.001. Unadjusted mean values for these indicators and time periods are presented in the online supplement.

The supplementary analyses examining pre-inspection and longer-term changes showed a similar lack of response to inspection, as did the comparison of within-site variation. Regressions with the full four-category rating outcomes found no statistically significant relationships.

Discussion

Main findings

For some of the RCOG performance indicators analysed, we observed differences in performance prior to inspection and changes in performance after inspection. However, these differences are not always in the expected direction, they are small, and they are statistically significant in very few cases. Furthermore, given that 14 indicators were tested, the few statistically significant results should be treated with further caution owing to the increased probability of falsely rejecting the null hypothesis of no difference [30]. To summarise, we find little evidence that poorer-rated maternity sites performed differently on RCOG performance indicators before inspection or that their performance changed afterwards.
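The multiple-testing caution can be made concrete with a little arithmetic. Under the simplifying (and here only approximate) assumption that the 14 tests are independent, the chance of at least one false rejection at the 5 % level is far above 5 %; a simple Bonferroni adjustment divides the significance threshold by the number of tests.

```python
# Family-wise error rate for 14 tests at alpha = 0.05, assuming
# independence (an approximation; the indicators are correlated).
alpha, n_tests = 0.05, 14
family_wise_error = 1 - (1 - alpha) ** n_tests
bonferroni_alpha = alpha / n_tests  # Bonferroni-adjusted threshold

print(round(family_wise_error, 3))  # 0.512
print(round(bonferroni_alpha, 4))   # 0.0036
```

Against a Bonferroni threshold of roughly 0.004, even fewer of the reported coefficients would count as statistically significant.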

Strengths and limitations

We contribute to the literature on the role of healthcare regulation in quality improvement through our study of the new system for the inspection and rating of maternity services introduced by the CQC in 2013. The system was larger and more focused than previous inspection regimes and was introduced to address specific shortcomings in quality. In evaluating the impact of regulation in maternity services we benefitted from maternity-specific inspections by the CQC as well as maternity-specific RCOG performance indicators. Our study was further strengthened by the relative autonomy of maternity sites, which are not subject to extensive external performance management or national targets, and whose performance is less dependent on the performance of other parts of a hospital. Inspection dates were not allocated on the basis of maternity service performance, which would otherwise have confounded our analysis, and we further reduced bias by controlling for seasonal, macro and volume effects in our statistical approach.

The majority of maternity sites were inspected and rated by the CQC within 24 months, so there were no unexposed sites that could be used as controls. Such a control group would have accounted for changes in RCOG performance indicators contemporaneous with inspection amongst sites where inspection had not occurred. However, our specification can account for changes in RCOG performance indicators occurring outside the six-month period following inspection. We were also limited somewhat by the quality of hospital data, meaning not all maternity sites could be included in our analysis. Additionally, it is plausible that maternity services reacted to inspection in ways not captured by the RCOG indicators; for example, sites may have improved their response to complaints or their communication with patients.

Interpretation

Recent research has shown that, despite the many types of quality metrics in maternity and the large amount of ‘noise’ in the data, there are robust and measurable performance differences amongst English NHS maternity sites [31]. Indeed, there are differences in maternity sites’ capability to improve, which are also related to their performance [32]. These findings might be used to make the case for using such performance indicators to assess and compare maternity sites alongside inspections. However, our findings suggest little if any relationship between the RCOG performance indicators and CQC inspection ratings for maternity services. We reached similar conclusions in a parallel study of emergency departments [11].

Our results are consistent with several interpretations. Firstly, it is possible that CQC inspection ratings and RCOG indicators were both valid measures of performance but that a relationship was not observed because they measured different aspects of performance. Indeed, if they had both measured exactly the same aspects of quality and had been highly correlated, one might argue that inspection and rating were not necessary. However, it seems reasonable to have expected some degree of statistically significant correlation, and its absence should be a cause for concern. Secondly, our results could suggest that either the RCOG performance indicators, or the inspection ratings, or both, were not valid measures of performance. They should certainly lead us to be cautious about relying on either dataset to make summative judgements about performance or to inform decisions about maternity services.
Thirdly, the fact that we observed no improvement in performance on the RCOG indicators after inspection and rating, either across all maternity sites or particularly in those that were rated negatively, might either lead us again to question the validity of those performance indicators or to consider whether the inspection and rating process itself was effective in its intention to stimulate or catalyse improvement.

Conclusion

Our research raises concerns about the validity of routine performance indicators in maternity services and about the validity of the inspections and ratings undertaken by the health and care regulator for England, the Care Quality Commission. However, regulators like the CQC are making greater use of performance indicators as an adjunct to, or even a partial replacement for, some hospital inspections [22], and their ratings and indicators are increasingly used to make judgements about services (by patients, the media and healthcare commissioners, for example) and may influence future decisions about service provision. It is therefore important that these data are valid and reliable. We conclude that regulators who seek to implement a data-driven approach to regulation, using performance indicators to prioritise, target or replace inspections, should be cautious, and should seek to demonstrate through evaluation the validity and reliability of such an approach before putting it into widespread practice.

Declaration of Competing Interest

The authors report no declarations of interest.
