
Closing the patient experience chasm: A two-level validation of the Consumer Quality Index Inpatient Hospital Care.

Alina Smirnova1,2, Kiki M J M H Lombarts2, Onyebuchi A Arah3,4, Cees P M van der Vleuten1.   

Abstract

BACKGROUND: Evaluation of patients' health care experiences is central to measuring patient-centred care. However, different instruments tend to be used at the hospital or departmental level but rarely both, leading to a lack of standardization of patient experience measures.
OBJECTIVE: To validate the Consumer Quality Index (CQI) Inpatient Hospital Care for use on both department and hospital levels.
DESIGN: Using cross-sectional observational data, we investigated the internal validity of the questionnaire using confirmatory factor analyses (CFA), and the generalizability of the questionnaire for use at the department and hospital levels using generalizability theory.
SETTING AND PARTICIPANTS: 22 924 adults hospitalized for ≥24 hours between 1 January 2013 and 31 December 2014 in 23 Dutch hospitals (515 department evaluations).
MAIN VARIABLE: CQI Inpatient Hospital Care questionnaire.
RESULTS: CFA results showed a good fit on the individual level (CFI=0.96, TLI=0.95, RMSEA=0.04), which was comparable between specialties. When scores were aggregated to the department level, the fit was less desirable (CFI=0.83, TLI=0.81, RMSEA=0.06), and there was significant overlap between the communication with doctors and explanation of treatment subscales. Departments and hospitals explained ≤5% of total variance in subscale scores. In total, 4-8 departments and 50 respondents per department are needed to reliably evaluate subscales rated on a 4-point scale, and 10 departments with 100-150 respondents per department for binary subscales.
DISCUSSION AND CONCLUSIONS: The CQI Inpatient Hospital Care is a valid and reliable questionnaire for evaluating inpatient experiences in Dutch hospitals, provided sufficient sampling is done. The results can facilitate meaningful comparisons and guide quality improvement activities in individual departments and hospitals.
© 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.

Keywords:  Confirmatory factor analysis; Consumer Quality Index (CQI); generalizability theory; national surveys; patient-centered care; quality assessment

Year:  2017        PMID: 28218984      PMCID: PMC5600232          DOI: 10.1111/hex.12545

Source DB:  PubMed          Journal:  Health Expect        ISSN: 1369-6513            Impact factor:   3.377


INTRODUCTION

Evaluation of patients’ health care experiences has become central to measuring quality in health care and, as a result, health care providers are increasingly held responsible for monitoring and improving patients’ care experiences.1 Patient care experiences reflect the degree to which care is patient‐centred (ie care that is respectful of and responsive to patients’ preferences, needs and values).2 In addition to its intrinsic value as an indicator of quality, a growing body of evidence points to positive associations between patient experiences and clinical processes of care,3, 4 as well as better patient adherence to treatment, improved clinical outcomes and decreased utilization of health care services.5

Even though improving patient care experiences is increasingly being incorporated into both local and global health agendas,6 patient feedback remains largely underutilized in local hospital improvement plans.7 One of the main reasons is a lack of specific and timely feedback that is easily translatable into improvements on the frontline.8, 9 Current instruments for collecting patient experience data mostly do so at the hospital‐wide level, to identify larger national trends and to support contracting of hospital services. To bridge the gap between external reporting and internal quality assurance, some have recommended using different instruments for different purposes.9, 10 This is, however, undesirable given the resulting lack of standardization of measures, the lack of a common language and a possible disconnect between local improvement efforts and hospital‐wide measurements. Implementing instruments is also costly and can lead to duplication of work and unnecessary use of valuable resources. An alternative approach is to adapt existing instruments to reflect their multiple purposes.
In this study, we address these problems using the Dutch version of the American Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, which was imported into the Netherlands in 2006 by Arah et al. for use within the Dutch health care system.11 This led to the development of nationally used standardized questionnaires and protocols called the Consumer Quality Index (CQI), wherein the Dutch HCAHPS is known as the CQI Inpatient Hospital Care.12 Efforts to adapt this questionnaire for multiple purposes, including external accountability and internal quality assurance, have resulted in the production of different versions of the questionnaire.13, 14 However, no extensive validation of the CQI Inpatient Hospital Care has occurred since the original validation study by Arah et al. (2006). As the results are consequently used by patients, hospital staff, health insurers, the inspectorate and researchers for different purposes, it is imperative that the questionnaire can reliably and validly evaluate and differentiate patient care experiences across hospitals, specialties and departments. We aimed to assess the internal validity and reliability of the CQI Inpatient Hospital Care on both hospital and department levels. Additionally, we investigated whether the questionnaire measured similar domains of patient experience across four specialties, namely surgery, obstetrics and gynaecology, internal medicine and cardiology.

METHODS

Setting and study population

We analysed CQI Inpatient Hospital Care questionnaire data from 23 Dutch hospitals (including four academic centres), comprising 515 department evaluations in 17 specialties (nine surgical and eight medical), collected between 1 January 2013 and 31 December 2014. Eligible patients were 16 years or older, had been hospitalized for at least 24 hours and had been discharged within the previous 12 months; they were identified using hospital admission lists. Eligible participants were invited to evaluate their experiences of hospitalization using either the online or the paper‐based CQI Inpatient Hospital Care (Appendix S1). Evaluations collected in 2013 were used for national benchmarking among 43 hospitals in four specialties, namely surgery, internal medicine, cardiology, and obstetrics and gynaecology. We therefore focused on these specialties in this study. Hospitals and clinical departments that re‐evaluated their inpatient hospital care using the same questionnaire in 2014, for their own internal quality assurance purposes, were considered independent evaluations and were therefore also included in the analysis. We analysed the 2013 and 2014 results both together and separately, and if there was no difference, reported only the combined results.

As retrospective research does not fall under the Dutch Medical Research Involving Human Subjects Act (WMO), an official ethical review was not required for this study. Nonetheless, we obtained permission from individual hospitals to use anonymized questionnaire data for research purposes. Furthermore, we consulted a privacy officer at our institution to ensure that the data provided for this research complied with the Dutch Personal Data Protection Act. Participating hospitals were recruited through the Miletus Foundation (www.stichtingmiletus.nl), a coordinating body for all CQI evaluations within the Netherlands. A detailed research proposal was sent to all hospitals and subsequently discussed at the general meeting.
Hospitals interested in participating in the study gave informed consent either via the Miletus Foundation or by directly contacting the primary researcher (AS). MediQuest (home.mediquest.nl), a company that processes patient evaluation data, provided the final data set for the study.

CQI Inpatient Hospital Care questionnaire

The CQI Inpatient Hospital Care questionnaire was developed in co‐operation with patient and consumer organizations, based on three existing instruments used to measure patient care experiences: the CAHPS Hospital Care questionnaire, the Dutch Hospital Association inpatient satisfaction questionnaire and the Hospital Comparison questionnaire from the Netherlands Institute for Health Services Research and the Consumers’ Association.13, 15 The CQI Inpatient Hospital Care consists of 50 items in total: 38 items about patient experiences and 12 items about background information. An earlier exploratory factor analysis14 identified nine domains of patient experience, namely admission (Q4a‐j), communication with nurses (Q6‐8), communication with doctors (Q9‐10), own contribution (Q13‐15, 17, 25), explanation of treatment (Q18‐20), pain management (Q21‐22), communication about medication (Q23‐24), feeling of safety (Q27‐29) and discharge information (Q31‐34). Admission and information at discharge were assessed on a 2‐point scale (yes=1, no=0); the other scales were assessed on a 4‐point Likert scale ranging from 1 (Never) to 4 (Always). Building on this prior work, we used this structure to test the internal validity, reliability and generalizability of the CQI Inpatient Hospital Care questionnaire.

Statistical analysis

First, respondents and non‐respondents were described using descriptive statistics. Questionnaires were excluded if the respondent gave a negative or no response to the question of whether they had had a hospital admission within the last 12 months, or if less than half of the core items were completed. Evaluations with missing data were imputed using a multiple imputation technique to create 10 complete data sets.16 Multiple imputation was preferable to single‐imputation methods such as maximum‐likelihood approaches because it better reflected the inherent uncertainty due to missing data in the sample.17 Convergence of the imputations was assessed by examining trace plots and calculating the Rhat statistic.18 To maximize convergence, we increased the maximum number of iterations to 200. We then calculated the subscale scores for each imputed data set by averaging the scores of the items within each subscale. The internal validity of the questionnaire was evaluated by assessing the fit of the pre‐identified 9‐factor structure. To assess the overall fit of the model, we performed a confirmatory factor analysis (CFA) on all imputed data sets and combined the final results using Rubin's rules. The weighted least squares with mean and variance adjusted (WLSMV) estimator was used to account for the categorical nature of the answers.
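The combination of per-imputation results via Rubin's rules can be sketched as follows. This is an illustrative helper (`pool_rubin` is not the authors' code; the study pooled results in R via semTools), and the numbers in the usage example are invented, not taken from the study.

```python
import math

def pool_rubin(estimates, variances):
    """Pool a point estimate and its variance across m imputed data sets
    using Rubin's rules: the pooled estimate is the mean of the
    per-imputation estimates, and the total variance combines the
    within- and between-imputation variances."""
    m = len(estimates)
    q_bar = sum(estimates) / m                              # pooled estimate
    w = sum(variances) / m                                  # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    total = w + (1 + 1 / m) * b                             # Rubin's total variance
    return q_bar, math.sqrt(total)                          # estimate and its standard error
```

For example, `pool_rubin([0.50, 0.52, 0.48], [0.01, 0.01, 0.01])` yields a pooled estimate of 0.50 with a standard error slightly above sqrt(0.01), because the between-imputation variance is added to the within-imputation variance.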
The WLSMV estimator is a robust estimator that does not assume normally distributed variables and is preferred for modelling categorical or ordered data.19 We assessed global model fit using the comparative fit index (CFI), Tucker‐Lewis index (TLI) and root mean square error of approximation (RMSEA).20 The following cut‐off values indicated a good fit: CFI≥0.95, TLI≥0.95 and RMSEA≤0.06.19 The overall fit was deemed acceptable if at least two of the three fit-index criteria were met.21 To establish whether the questionnaire measured similar patient experiences across medical specialties, the CFA was then repeated in four subgroups: surgery, obstetrics and gynaecology, internal medicine and cardiology. These specialties were chosen because they were included in the national benchmark. The same cut‐off points as for the overall sample were used to evaluate the fit of the factor structure. Finally, we repeated the CFA on the department level by aggregating the scores of each variable to the department level. Internal consistency of the subscales was evaluated by calculating Cronbach's α for individual questionnaires and for the department in each imputed data set, and averaging it across imputed data sets. An overall Cronbach's α≥0.70 was deemed acceptable. The degree to which the subscales measured distinct concepts was assessed by calculating inter‐scale correlations, both for individual scores and for scores aggregated to the department level. A correlation of <0.70 indicated no significant overlap between subscales.
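The internal-consistency check can be sketched as follows; the study computed Cronbach's α in SPSS, so this is a minimal illustrative re-implementation (the function name is hypothetical), shown for a data layout where each column holds all respondents' scores on one item.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a subscale: items is a list of columns,
    each column holding every respondent's score on one item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)            # number of items in the subscale
    n = len(items[0])         # number of respondents

    def var(xs):              # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across the k items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))
```

With three perfectly correlated items, e.g. `cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])`, the function returns 1.0, the theoretical maximum.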
Construct validity was assessed by examining the relative importance of the subscales for two global ratings, namely the overall evaluation of the department (Q36, scale 0‐10) and of the hospital (Q35, scale 0‐10), using multiple linear regression and accounting for respondents’ age, sex, education, self‐rated physical health, self‐rated psychological health, country of origin and number of admissions in the previous 12 months. Generalizability analysis was conducted to estimate the minimum number of respondents needed to reliably evaluate each subscale on both department and hospital levels. For department‐level evaluations, we estimated a model in which the number of items was considered fixed, with the department (d) as the unit of analysis and respondents (p) nested within departments (p:d); the resulting design was an unbalanced single‐facet nested design.22 For hospital‐level analyses, we similarly regarded the number of items as fixed, but took the hospital as the unit of analysis, with respondents (p) nested within departments (d), which were in turn nested within hospitals (h), resulting in a multifacet unbalanced nested design (p:d:h). We averaged the variance components, including the variance across departments (Sd), across respondents nested within departments (Sp:d) and across respondents nested within departments and hospitals (Sp:d:h), across the imputed data sets. We then estimated the proportion of the total variance in scores that was due to differences between departments or hospitals. In a D‐study, we estimated the G coefficient and the standard error of measurement (SEM) associated with varying numbers of respondents within departments and departments within hospitals for mean subscale scores. For the seven scales evaluated on a 4‐point scale, we used 0.4 units as an admissible level of “noise,” representing SEM<0.10 (1.96×0.10×2≈0.4) as the maximum value for 95% confidence interval interpretation.
For dichotomous scales, we used 0.1 on a scale of 0‐1 as an admissible level of noise, representing SEM<0.025 (1.96×0.025×2≈0.1). Missing data were imputed using the mice package (version 2.25) in R statistical software version 3.2.3.23, 24 The confirmatory factor analyses on imputed data sets were performed using the semTools package (version 0.4‐11) and on aggregated data sets using the lavaan package (version 0.5‐20) in R version 3.2.3.25 Inter‐scale correlations, Cronbach's α, variance components calculations and multiple linear regression analyses were performed using SPSS version 23.0.0.2 (IBM SPSS Inc., Chicago, IL, USA).
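The decision-study logic for the single-facet nested (p:d) design can be sketched as below. This is a simplified illustration under the standard assumption that the SEM of a department mean is sqrt(residual variance / n); the function names are hypothetical, and the variance components in the usage note come from Table 5.

```python
import math

def min_respondents(residual_var, sem_max):
    """Smallest number of respondents per department for which the
    standard error of the department mean, sqrt(residual_var / n),
    falls below the admissible SEM."""
    return math.ceil(residual_var / sem_max ** 2)

def g_coefficient(dept_var, residual_var, n):
    """Generalizability coefficient for a department mean score in a
    single-facet nested (p:d) design with n respondents per department:
    true (between-department) variance over observed-score variance."""
    return dept_var / (dept_var + residual_var / n)
```

With the residual variance reported for pain management (0.376) and the admissible SEM of 0.10 for 4-point scales, `min_respondents(0.376, 0.10)` gives 38, on the order of the reported 50 respondents per department; for information at discharge (0.089) with SEM<0.025, `min_respondents(0.089, 0.025)` gives 143, consistent with the reported 100-150.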

RESULTS

Of the 74 090 distributed questionnaires, 23 476 were returned (gross response rate 31.7%). Table 1 reports the characteristics of respondents and non‐respondents. In total, 552 questionnaires were excluded because of a negative or no response to the question of whether the patient had had a hospital admission within the last 12 months, or because less than half of the core items were completed. The resulting sample size was 22 924 (net response rate 30.9%), covering 23 hospitals, 17 specialties and 515 department evaluations. Table 2 further describes the demographic characteristics of the included respondents. As the results did not differ between 2013 and 2014, we report only the combined results below.
Table 1

Characteristics of respondents and non‐respondents of the CQI Inpatient Hospital Care questionnaire

Characteristic | Respondents N (%) (n=23 476) | Non-respondents N (%) (n=50 614) | Total N (%) (n=74 090)
Gender
  Male | 11 255 (47.9) | 21 802 (43.1) | 33 057 (44.6)
  Female | 12 221 (52.1) | 28 812 (56.9) | 41 033 (55.4)
Age (years)
  16-24 | 486 (2.1) | 3623 (7.2) | 4109 (5.5)
  25-34 | 1580 (6.7) | 6999 (13.8) | 8579 (11.6)
  35-44 | 1833 (7.8) | 6356 (12.6) | 8189 (11.1)
  45-54 | 3062 (13.0) | 7246 (14.3) | 10 308 (13.9)
  55-64 | 5195 (22.1) | 8224 (16.2) | 13 419 (18.1)
  65-74 | 5492 (23.4) | 9720 (19.2) | 15 212 (20.5)
  75-79 | 2737 (11.7) | 3088 (6.1) | 5825 (7.9)
  80+ | 3091 (13.2) | 5358 (10.6) | 8449 (11.4)
Type of questionnaire
  Online | 17 922 (76.3) | - | -
  Mail | 5554 (23.7) | - | -
Table 2

Characteristics of the respondents included in validation of the CQI Inpatient Hospital Care questionnaire

Characteristic | N (Total=22 924) | %
Gender
  Male | 10 992 | 47.9
  Female | 11 932 | 52.1
Age (years)
  16-24 | 486 | 2.1
  25-34 | 1572 | 6.9
  35-44 | 1828 | 8.0
  45-54 | 3053 | 13.3
  55-64 | 5170 | 22.6
  65-74 | 5462 | 23.8
  75-79 | 2535 | 11.1
  80+ | 2818 | 12.3
Level of education
  Lower secondary or less | 6561 | 28.6
  Upper secondary | 10 511 | 45.9
  Tertiary | 5852 | 25.5
Self-reported health
  Excellent | 1389 | 6.1
  Very good | 2962 | 12.9
  Good | 10 673 | 46.6
  Average | 6694 | 29.2
  Bad | 1206 | 5.3
Self-reported psychological health
  Excellent | 4149 | 18.1
  Very good | 5460 | 23.8
  Good | 10 968 | 47.8
  Average | 2130 | 9.3
  Bad | 217 | 0.9
Country of origin
  The Netherlands | 21 152 | 92.3
  Germany | 156 | 0.7
  (Former) Netherlands Antilles/Aruba/Suriname | 293 | 1.3
  Indonesia/Netherlands Indies | 281 | 1.2
  Morocco/Turkey | 194 | 0.8
  Other | 738 | 3.2
  Missing | 110 | 0.5
Number of admissions in the previous 12 months including current one
  1 | 13 283 | 57.9
  2 | 5947 | 25.9
  3 | 2119 | 9.2
  4+ | 1464 | 6.4
  Missing | 111 | 0.5
Specialty
  Surgical | 11 344 | 49.5
    General surgery | 3225 | 14.1
    Orthopaedic surgery | 2502 | 10.9
    Urology | 1773 | 7.7
    Cardiothoracic surgery | 895 | 3.9
    Neurosurgery | 822 | 3.6
    Otolaryngology | 743 | 3.2
    Obstetrics and gynaecology | 643 | 2.8
    Plastic surgery | 607 | 2.6
    Ophthalmology | 134 | 0.6
  Medical | 8000 | 34.9
    Cardiology | 2697 | 11.8
    Internal medicine | 1984 | 8.7
    Pulmonology | 1877 | 8.2
    Neurology | 1262 | 5.5
    Rheumatology | 67 | 0.3
    Geriatrics | 54 | 0.2
    Dermatology | 38 | 0.2
    Anaesthesiology | 21 | 0.1
  Missing | 3580 | 15.6

Psychometric properties

CFA showed a good fit for the surgical, obstetrics and gynaecology, internal medicine and cardiology specialties, and for all specialties combined (Table 3). When the scores were aggregated to the department level, the incremental fit indices decreased to CFI=0.83 and TLI=0.81. Internal consistency of the scales was acceptable, except for the subscales own contribution (0.69), communication about medication (0.68) and feeling of safety (0.64). On the department level, all subscales demonstrated acceptable Cronbach's α, except for feeling of safety (0.64) (Table 4). Inter‐scale correlations showed that on the department level the subscales communication with doctors and explanation of treatment overlapped substantially (Pearson's r=0.72) (Table 4). Communication about medication did not predict global ratings of either the hospital or the department, while explanation of treatment was a significant predictor of the global rating of the hospital but not of the department (Table 4).
Table 3

Fit indices for surgery, cardiology, internal medicine, and obstetrics and gynaecology, and all specialties on individual (patient) level and department level. Department‐level scores were obtained by calculating the means for every item per department across all imputed data sets

Fit index (criterion) | Surgery (n=3225), individual level | Cardiology (n=2697), individual level | Internal medicine (n=1984), individual level | Obstetrics and gynaecology (n=643), individual level | All specialties (n=22 924), individual level | All specialties (n=515), department level
CFI (≥0.95) | 0.98 | 0.97 | 0.98 | 0.98 | 0.96 | 0.83
TLI (≥0.95) | 0.98 | 0.96 | 0.98 | 0.97 | 0.95 | 0.81
RMSEA (≤0.06) | 0.03 | 0.03 | 0.03 | 0.02 | 0.04 | 0.06
Table 4

Scale means with standard deviations (SD), reliability coefficients (Cronbach's α) and inter‐scale correlations, for individual (above the diagonal) and aggregated department (below the diagonal) evaluations, and estimates of multiple linear regression analyses with 95% confidence intervals examining associations with global department and hospital ratings corrected for respondents’ age, sex, level of education, self‐rated physical and psychological health, number of admissions in the previous 12 mo, and place of birth (*P≤.05, **P≤.001)

Subscale (scoring range) | Mean (SD) | Cronbach's α individual (department) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Global rating department | Global rating hospital
1. Admission (0-1) | 0.6 (0.25) | 0.77 (0.81) | 1 | 0.30 | 0.29 | 0.31 | 0.39 | 0.25 | 0.38 | 0.35 | 0.42 | 0.14 (0.05 to 0.23)* | 0.20 (0.12 to 0.27)**
2. Communication with nurses (1-4) | 3.4 (0.61) | 0.83 (0.87) | 0.36 | 1 | 0.56 | 0.49 | 0.51 | 0.55 | 0.47 | 0.46 | 0.35 | 1.00 (0.97 to 1.04)** | 0.59 (0.56 to 0.62)**
3. Communication with doctors (1-4) | 3.4 (0.71) | 0.81 (0.84) | 0.41 | 0.56 | 1 | 0.42 | 0.56 | 0.41 | 0.43 | 0.37 | 0.33 | 0.08 (0.05 to 0.11)** | 0.27 (0.25 to 0.30)**
4. Own contribution (1-4) | 3.0 (0.66) | 0.69 (0.80) | 0.31 | 0.50 | 0.47 | 1 | 0.44 | 0.38 | 0.46 | 0.39 | 0.31 | 0.31 (0.28 to 0.34)** | 0.27 (0.25 to 0.30)**
5. Explanation of treatment (1-4) | 3.5 (0.67) | 0.81 (0.89) | 0.50 | 0.59 | 0.72 | 0.52 | 1 | 0.45 | 0.57 | 0.43 | 0.43 | −0.03 (−0.06 to 0.00) | 0.08 (0.05 to 0.11)**
6. Pain management (1-4) | 3.5 (0.62) | 0.79 (0.86) | 0.48 | 0.68 | 0.52 | 0.42 | 0.61 | 1 | 0.42 | 0.41 | 0.32 | 0.34 (0.31 to 0.38)** | 0.26 (0.22 to 0.29)**
7. Communication about medication (1-4) | 3.0 (0.91) | 0.68 (0.85) | 0.41 | 0.60 | 0.60 | 0.55 | 0.67 | 0.52 | 1 | 0.47 | 0.45 | −0.01 (−0.03 to 0.02) | 0.004 (−0.02 to 0.03)
8. Feeling of safety (1-4) | 3.4 (0.68) | 0.64 (0.64) | 0.47 | 0.47 | 0.48 | 0.30 | 0.50 | 0.61 | 0.52 | 1 | 0.38 | 0.21 (0.18 to 0.24)** | 0.18 (0.15 to 0.21)**
9. Information at discharge (0-1) | 0.7 (0.31) | 0.76 (0.82) | 0.60 | 0.49 | 0.50 | 0.43 | 0.60 | 0.50 | 0.53 | 0.42 | 1 | 0.54 (0.48 to 0.60)** | 0.52 (0.47 to 0.57)**
5% or less of the total variance in scores was attributable to the department or the hospital (Table 5). Results of the generalizability analysis showed that a minimum of 50 respondents is needed to reliably evaluate subscales of patient experience scored 1-4 in a department (Appendix S2). For subscales evaluated on a Yes/No (0-1) scale (admission and discharge information), 100 and 150 patient evaluations, respectively, were needed for department-level evaluations. For hospital-level evaluations, subscales rated 1-4 can be reliably evaluated with 4-8 departments with at least 50 patient evaluations each. For admission and discharge information, at least 10 departments with 100 patient evaluations are needed.
Table 5

Variance components for departments, hospitals and residual variance

Subscale | Residual variance | Between-department variance (% total variance) | Between-hospital variance (% total variance) | Hospital variance vs hospital and department variance
1. Admission | 0.059 | 0.003 (5%) | 0.000 (0%) | 0.0
2. Communication with nurses | 0.360 | 0.005 (1%) | 0.004 (1%) | 0.44
3. Communication with doctors | 0.490 | 0.006 (1%) | 0.004 (1%) | 0.40
4. Own contribution | 0.404 | 0.014 (3%) | 0.020 (5%) | 0.59
5. Explanation of treatment | 0.435 | 0.012 (3%) | 0.003 (1%) | 0.20
6. Pain management | 0.376 | 0.008 (2%) | 0.002 (1%) | 0.20
7. Communication about medication | 0.805 | 0.012 (1%) | 0.008 (1%) | 0.40
8. Feeling of safety | 0.446 | 0.010 (2%) | 0.002 (0%) | 0.17
9. Information at discharge | 0.089 | 0.005 (5%) | 0.000 (0%) | 0.0
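The percentage columns in Table 5 follow directly from the variance components; the sketch below (with an illustrative function name) recomputes the department and hospital shares of total variance from the reported values.

```python
def variance_shares(residual, dept, hosp):
    """Percent of total score variance attributable to the department
    and to the hospital, rounded to whole percent as in Table 5."""
    total = residual + dept + hosp
    return round(100 * dept / total), round(100 * hosp / total)
```

For the admission row (0.059, 0.003, 0.000) this returns (5, 0), and for own contribution (0.404, 0.014, 0.020) it returns (3, 5), matching the reported percentages.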

DISCUSSION

To our knowledge, this is the first study to validate an inpatient experience questionnaire for multiple purposes, namely on the levels of the hospital and the department. The CFA results showed a good overall fit, which was comparable between specialties. On the department level, however, the CFA showed a less desirable fit, with significant overlap between the subscales communication with doctors and explanation of treatment. Differences between departments and hospitals explained only a small proportion of the total variance in patient experience scores, with the hospital and the department varying in importance depending on the subscale. A total of 4-8 departments and 50 respondents per department are needed to reliably evaluate most subscales on both department and hospital levels. For binary subscales, such as admission and discharge information, a minimum of 100-150 patients per department and 10 departments are needed.

The overall good fit provides evidence of validity for the internal structure of the CQI Inpatient Hospital Care questionnaire at the level it was first designed for, that is, the patient. The goodness-of-fit indices for the surgery, obstetrics and gynaecology, cardiology and internal medicine specialties were similarly good, suggesting that patients experience similar aspects of care in different specialties and allowing comparisons of patient experiences between specialties. Previous research has demonstrated that, even though aspects of patient experience may be comparable across specialties, their importance can differ substantially by type of hospitalization.26 Although we did not study the relative importance of these aspects for different specialties, departments or hospitals will need to take this into account when choosing priorities for quality improvement.
The internal consistency of the scales was acceptable except for three subscales: own contribution, communication about medication and feeling of safety. The same subscales also demonstrated lower internal consistency in a previous pilot validation study.14 Furthermore, our study found that the subscale communication about medication did not significantly contribute to the global ratings of the department or the hospital, which may indicate a need to improve the external validity of this scale. Alternatively, global ratings may not be a good indicator of overall health care quality and should therefore not be used in external validation, as research by Krol et al. has shown they may measure a different concept.27 Similar to other studies,11, 26, 28 we found that communication with nurses was the strongest predictor of overall ratings of both the department and the hospital. This is not surprising, as nurses are the primary providers of care in the hospital environment. Furthermore, research has shown that factors related to nursing work, such as the nursing work environment, nurse-to-patient ratios,28 missed nursing care29 and nurse-patient interaction,30 can influence patient satisfaction ratings. A new finding, however, is that higher scores on the subscale discharge information significantly contributed to patients' global ratings of both the hospital and the department. This differs from the findings of Elliott et al.,26 in which discharge information was one of the least valued aspects of inpatient care and was important for only half of hospitalization types. This is perhaps not surprising, as there appears to be a gap in communication between patients and providers at discharge.
A survey of hospitalized patients showed that more than half of patients 70 years or older did not receive instructions on how to care for themselves after hospitalization.31 Our findings suggest that discharge information may be more important than previously thought, and that hospitals and departments may improve the overall patient experience by improving how they handle discharges. Yet, as De Boer et al. demonstrated, although global ratings represent experiences regarding priorities, experiences with important elements of care may still have inconsistent relationships with global ratings.32

On the department level, the fit indices did not demonstrate an acceptable fit based on the incremental fit indices (CFI=0.83 and TLI=0.81), while the RMSEA was within acceptable bounds at 0.06 (≤0.06 acceptable). As two of the three criteria were not met, we conclude that the current model is not a good representation of the latent constructs on the department level. Combined with the significant overlap between the subscales explanation of treatment and communication with doctors, these results suggest that a different structure would provide a better fit to the data on the department level. Another reason for the poor fit on the department level could be the use of aggregated scores, which does not consider the variability of the scores within each department and may have unnecessarily distorted the data. As patients are naturally nested within departments and hospitals, confirmation of the fit using multilevel CFA is desirable. The results of the variance component analysis showed that the department and the hospital each account for 5% or less of the total variability in subscale scores.
This corresponds with previous research that found limited influence of the department and the hospital on the variability of patient experience scores.15, 33 Generalizability analysis found that patients' experiences can be reliably evaluated with 50 respondents for subscales scored 1-4 (in 4-8 departments for hospital-level evaluations), and with 100-150 respondents (in 10 departments) for the two subscales with a Yes/No (0-1) scoring scale. More respondents are needed for binary subscales because the small range of possible scores demands higher precision, and thus reliability, to detect small differences. Compared with other instruments,5, 10 this study shows an improvement in the number of respondents needed for a reliable evaluation of patient experiences in a single department. Similarly sized samples are required to reliably evaluate all subscales on the hospital level using our criteria. However, different cut-off criteria may be chosen depending on whether the results of the CQI Inpatient Hospital Care are to be used by departments for their own quality improvement purposes, or by health insurance companies and health care authorities to make summative judgements about the quality of care.34 We therefore recommend using the generalizability results of this study (shown in Appendix S2) to adjust the cut-off criteria to the proposed use of the questionnaire.

In interpreting the results, several limitations should be mentioned. Patient surveys suffer from low response rates. Our response rate of 31% was similar to those previously seen in this setting.14 Reasons for non-response were not collected during the original data collection process, which made a non-responder analysis impossible.
Although we tried to account for non-respondents by including sex and age as covariates in the regression analyses, this may not have been sufficient, because respondents and non-respondents may also differ on characteristics that we could not account for, such as country of origin, language spoken at home or level of education. For example, we did not have data on how many patients were invited to fill out the online versus the paper-based questionnaire. Furthermore, in this study we aggregated individual scores to the level of the department, because this is typically how the scores are used. Other methods could be tried, such as using median or factor scores, but these may be difficult to interpret. Also, we did not test alternative models on the department level, or factor equivalency between different specialties or respondent groups. Finally, we did not investigate the external validity of the questionnaire by studying the relationship between aspects of inpatient hospital care and other important process or outcome measures.

Nonetheless, this study also has several strengths. One strength is its use of more than 22 000 patient evaluations and over 500 department evaluations from multiple specialties in multiple hospitals, including academic and non-academic centres, which supports the generalizability of our results. Another strength is the use of multiple imputation for handling missing data, which accounts for the uncertainty associated with imputing missing data.17 With this study, we contribute evidence for the validity of the CQI Inpatient Hospital Care questionnaire and its utility in different settings, for both quality assurance and summative purposes. We recommend that stakeholders using this questionnaire, including hospitals, clinical departments and health insurers, use appropriate sample sizes based on its purpose and level of use.
Given the response rate of 31%, much larger samples may be required to reach the recommended numbers of evaluations. Low response rates have become worrisomely common in survey research,35 with many studies now reporting rates as low as or lower than ours.36 Low response rates may indicate low acceptance of the instrument among patients. Improvements in response rates, for example by identifying and addressing reasons for non-response, are needed to ensure optimal use of resources as well as adequate sample sizes. Although this questionnaire was originally imported to facilitate standardization for international comparisons,11 both the CQI Inpatient Hospital Care and the American HCAHPS, on which it is based, have since changed so substantially that international comparisons can only be made on the limited number of questions present in both questionnaires. Future research can investigate whether patient experiences of hospital care improve over time with continuous measurement. Like Zuidgeest et al.37 and Damman et al.,38 we recommend using multilevel models for longitudinal and hierarchical data analyses, rather than comparing average department or hospital scores.

In conclusion, the CQI Inpatient Hospital Care questionnaire can provide valid and reliable data on patient experiences of inpatient hospital care at both the department and hospital levels. The resulting data can be used to facilitate meaningful comparisons and guide quality improvement activities. Future research can focus on improving the reliability of the scales, refining the wording of individual items to better reflect specific providers or clinical settings, and validating the factor structure at the department level and across specialties.
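The recommendation to prefer multilevel models over raw department means can be illustrated with the simplest such model: a one-way random-effects decomposition of patient scores nested in departments, from which the intraclass correlation (the department's share of total variance) is estimated. The sketch below uses simulated data with a deliberately small department effect, echoing the small variance shares reported above; all numbers are illustrative, not the study's estimates.

```python
import random

# Sketch: ANOVA-based variance decomposition of simulated patient scores
# nested in departments -- the simplest multilevel alternative to comparing
# raw department means. Simulated, illustrative data only.
random.seed(0)

n_dept, n_pat = 20, 50
sd_dept, sd_pat = 0.1, 0.6   # small department effect, large patient variation
data = []
for _ in range(n_dept):
    dept_effect = random.gauss(0, sd_dept)
    data.append([3.0 + dept_effect + random.gauss(0, sd_pat) for _ in range(n_pat)])

dept_means = [sum(row) / n_pat for row in data]
grand_mean = sum(dept_means) / n_dept

ms_between = n_pat * sum((m - grand_mean) ** 2 for m in dept_means) / (n_dept - 1)
ms_within = sum((x - dept_means[d]) ** 2
                for d, row in enumerate(data) for x in row) / (n_dept * (n_pat - 1))

var_dept = max((ms_between - ms_within) / n_pat, 0.0)  # ANOVA estimator
icc = var_dept / (var_dept + ms_within)                # department share of variance
```

Because the ICC is small, most of the spread in raw department means is patient-level noise; a multilevel model separates these sources, whereas ranking departments on raw means does not.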

CONFLICTS OF INTEREST

The authors declare no potential conflicts of interest.
REFERENCES (28 in total)

1. Leavitt M. Medscape's response to the Institute of Medicine Report: Crossing the quality chasm: a new health system for the 21st century. MedGenMed. 2001.
2. Arah OA, ten Asbroek AHA, Delnoij DMJ, de Koning JS, Stam PJA, Poll AH, Vriens B, Schmidt PF, Klazinga NS. Psychometric properties of the Dutch version of the Hospital-level Consumer Assessment of Health Plans Survey instrument. Health Serv Res. 2006.
3. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007.
4. Crossley J, Russell J, Jolly B, Ricketts C, Roberts C, Schuwirth L, Norcini J. 'I'm pickin' up good regressions': the governance of generalisability analyses. Med Educ. 2007.
5. Damman OC, Stubbe JH, Hendriks M, Arah OA, Spreeuwenberg P, Delnoij DMJ, Groenewegen PP. Using multilevel modeling to assess case-mix adjusters in consumer experience surveys in health care. Med Care. 2009.
6. Aiello A, Garman A, Morris SB. Patient satisfaction with nursing care: a multilevel analysis. Qual Manag Health Care. 2003.
7. Stuart EA, Azur M, Frangakis C, Leaf P. Multiple imputation with large data sets: a case study of the Children's Mental Health Initiative. Am J Epidemiol. 2009.
8. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008.
9. Flacker J, Park W, Sims A. Hospital discharge information and older patients: do they get what they need? J Hosp Med. 2007.
10. Kutney-Lee A, McHugh MD, Sloane DM, Cimiotti JP, Flynn L, Neff DF, Aiken LH. Nursing: a key to patient satisfaction. Health Aff (Millwood). 2009.