
The association between survey timing and patient-reported experiences with hospitals: results of a national postal survey.

Oyvind A Bjertnaes

Abstract

BACKGROUND: Research on the effect of survey timing on patient-reported experiences and patient satisfaction with health services has produced contradictory results. The objective of this study was thus to assess the association between survey timing and patient-reported experiences with hospitals.
METHODS: Secondary analyses of a national inpatient experience survey including 63 hospitals in the 5 health regions in Norway during the autumn of 2006. 10,912 (45%) patients answered a postal questionnaire after their discharge from hospital. Non-respondents were sent a reminder after 4 weeks. Multilevel linear regression analysis was used to assess the association between survey timing and patient-reported experiences, both bivariate analysis and multivariate analysis controlling for other predictors of patient experiences.
RESULTS: Multivariate multilevel regression analysis revealed that survey time was significantly and negatively related to three of six patient-reported experience scales: doctor services (Beta = -0.424, p < 0.05), information about examinations (Beta = -0.566, p < 0.05) and organization (Beta = -0.528, p < 0.05). Patient age, self-perceived health and type of admission were significantly related to all patient-reported experience scales (better experiences with higher age, better health and routine admission), and all other predictors had at least one significant association with patient-reported experiences.
CONCLUSIONS: Survey time was significantly and negatively related to three of the six scales for patient-reported experiences with hospitals. Large differences in survey time across hospitals could be problematic for between-hospital comparisons, implying that survey time should be considered as a potential adjustment factor. More research is needed on this topic, including studies with other population groups, other data collection modes and a longer time span.

Year: 2012 | PMID: 22335801 | PMCID: PMC3298703 | DOI: 10.1186/1471-2288-12-13

Source DB: PubMed | Journal: BMC Med Res Methodol | ISSN: 1471-2288 | Impact factor: 4.615


Background

Patient experiences are an important part of health-care quality [1,2]. Surveys are frequently used to measure patient experiences and satisfaction with health care [3,4], but their value is subject to several methodological challenges. One particular challenge relates to the time that elapses between the health-care encounter and the patient receiving the questionnaire. Questionnaires might be distributed immediately after a health-care encounter, a short time after the encounter or a long time after it. Since factors such as clinical and health-related quality-of-life outcomes might produce different patient evaluations depending on when the survey is administered [3], decisions about survey time might have substantial effects. Following a longitudinal study, two different theoretical models for patient satisfaction were suggested: (i) an immediate post-visit satisfaction model that includes demographics, patient expectations, patient functioning and patient-doctor interaction; and (ii) a model for 2-week/3-month satisfaction that includes demographics, expectations, patient functioning and symptom improvement [5]. This suggests that patients' evaluations would vary with the survey time point. Several studies have investigated the association between survey timing and patient evaluation [6-16], and most of them found that patient evaluation is poorer when measured at a longer time after the encounter [6-9,11,14-16]. However, a closer investigation of these studies shows that in all except one [11], the data-collection mode changed between the different measurements; the aforementioned timing effects might therefore have been due to changes in data-collection mode. In fact, the best-designed study concerning the association between survey timing and patient satisfaction found little association between survey timing and patient satisfaction measures [12].
That study grouped patients into three different mailing intervals: 1, 5 and 9 weeks after discharge. All groups received and completed a postal survey at home, and so the data-collection mode was standardized between the groups. The objective of this study was to assess the association between survey timing and patient-reported experiences with hospitals. The Norwegian Knowledge Centre for the Health Services conducted a national postal patient experience survey among adult inpatients discharged from Norwegian hospitals in 2006. The data set included survey data, administrative data from the hospitals including discharge dates, and practical survey variables including the dates for first postal mailing and for response registration in the Knowledge Centre. The availability of these data made it possible to assess the association between survey timing and patient-reported experiences while simultaneously controlling for other known predictors of patient experiences.

Methods

Data

The national survey included adult inpatients discharged from Norwegian hospitals between September 1 and November 23, 2006. In total, 24,141 patients were included in the study; 345 patients were found not eligible. Responses were received from 10,912 patients, giving a response rate of 45%. The study is described in more detail in another publication [17]. The Norwegian Regional Committee for Medical Research Ethics, the Data Inspectorate and the Norwegian Directorate of Health and Social Affairs approved the survey.

Questionnaire

The questionnaire comprised 29 items about patient experiences and satisfaction, 14 questions about quality of life and 10 background questions. The patient experience questions were based on the Patient Experiences Questionnaire [18], but the response scale was changed to improve data quality [19]. For 27 of the 29 experience items, a five-point response format was used, ranging from "not at all" to "to a very large extent". The national report used the following six scales with good evidence for data quality, reliability and validity [20]: doctor services (three items), nursing services (four items), information about examinations (two items), organization (three items), hospital and equipment (two items) and contact with next of kin (two items). These scales were used in the present study. Scale scores were transformed to a scale of 0-100, where 100 is the best possible rating.
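As an illustration, a scale score can be computed as a linear rescaling of the mean item response to 0-100. The paper does not state the exact formula, so the min-max rescaling below is an assumption:

```python
def scale_score(item_responses, low=1, high=5):
    """Linearly rescale the mean of five-point (1-5) Likert items to 0-100.

    NOTE: the exact transformation used in the paper is not stated;
    this min-max rescaling is an illustrative assumption.
    """
    m = sum(item_responses) / len(item_responses)
    return (m - low) / (high - low) * 100.0

# e.g. a three-item scale such as "doctor services" (hypothetical responses)
score = scale_score([4, 5, 4])  # mean 4.33 on the 1-5 scale, roughly 83.3 on 0-100
```

With this convention, answering "not at all" (1) on every item gives 0 and "to a very large extent" (5) on every item gives 100, matching the stated interpretation that 100 is the best possible rating.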

Statistical analysis

Patients discharged from one hospital department to another were included in the national survey. For these patients we only saved the latest discharge date, and consequently it is inappropriate to use the discharge date in the computation of survey time. Therefore, patients with more than one department stay were excluded from this study (n = 362). The survey-time variable was computed as the difference between the date of the first postal mailing and the discharge date. The survey time was 11.8 ± 5.7 days (mean ± SD; minimum: 1 day; maximum: 41 days). According to the protocol, the survey time should have been 1-15 days, but delays in transfers from hospitals resulted in a greater variation. Survey time is a continuous variable and was included as such in the regression analysis described below. In bivariate analysis, the survey-time variable was grouped by week, which gave large groups for statistical comparisons. These bivariate analyses are secondary and no other tests were conducted to assess the appropriateness of this grouping. The survey-time variable was divided into the following groups: ≤ 1 week, 1-2 weeks, 2-3 weeks and > 3 weeks. Survey-time groups were compared across six variables: gender, age, education, self-perceived health, admission type and number of admissions in the previous 2 years. Pearson's chi-square was used for statistical testing, except for age, for which one-way analysis of variance (ANOVA) was used. Response time has been shown to be associated with patient-reported experiences [21]. Response time was computed as the difference between the dates of response registration and first postal mailing. This variable is influenced by how rapidly each individual responded to the questionnaire, and was included as a covariate in the regression models described below. 
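The date arithmetic described above (survey time and response time as day differences) and the week grouping used in the bivariate analysis can be sketched as follows. The boundary handling between week groups is an assumption, since the paper only names the groups; the example patient and dates are hypothetical:

```python
from datetime import date

def days_between(start: date, end: date) -> int:
    """Days from `start` to `end`; used both for survey time
    (discharge -> first postal mailing) and for response time
    (first postal mailing -> response registration)."""
    return (end - start).days

def week_group(survey_time_days: int) -> str:
    """Assign a survey time in days to the groups used in the bivariate
    analysis. Treating each group as closed on its upper bound is an
    assumption; the paper only names the groups."""
    if survey_time_days <= 7:
        return "<= 1 week"
    if survey_time_days <= 14:
        return "1-2 weeks"
    if survey_time_days <= 21:
        return "2-3 weeks"
    return "> 3 weeks"

# Hypothetical patient: discharged 2006-09-15, questionnaire mailed 2006-09-27
survey_time = days_between(date(2006, 9, 15), date(2006, 9, 27))  # 12 days
group = week_group(survey_time)  # falls in the "1-2 weeks" group
```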
The main focus of the regression analysis was the association between survey timing and the level of patient-reported experiences, adjusted for all the other predictors including response time. Multilevel linear regression analysis was used to assess the association between survey timing and the six patient experience scales, both bivariate analysis and multivariate analysis controlling for gender, age, self-perceived health, education, admission type, number of admissions in the previous 2 years, response time and hospital. Patient clustering within hospitals might inflate t values in ordinary linear regression models so as to produce a type I error, which was the reason for using multilevel regression. The multilevel model divides the total variance in patient-reported experiences into variance at the hospital (macro) level versus the patient (micro) level. The hospitals were included as random intercepts, and all variables from the ordinary regression as fixed effects at the patient level. Standardized variables at level 1 were used in the regression; consequently, standardized regression coefficients were computed. SPSS version 15.0 was used for statistical analyses.
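The rationale for the multilevel model, splitting the total variance in patient-reported experiences into a hospital (macro) and a patient (micro) component, can be illustrated with a crude moment-based sketch. This is not the estimator a multilevel package such as SPSS would use (it ignores unbalanced group sizes, covariates and shrinkage), and the hospital scores are hypothetical:

```python
from statistics import mean, pvariance

def variance_components(scores_by_hospital):
    """Crude decomposition of variance in patient-experience scores into a
    between-hospital (macro) and a within-hospital (micro) component: the
    idea behind the random-intercept multilevel model. A moment-based
    sketch only, NOT the estimator a multilevel package would use."""
    hosp_means = [mean(s) for s in scores_by_hospital.values()]
    between = pvariance(hosp_means)  # variance of hospital means
    within = mean(pvariance(s) for s in scores_by_hospital.values())
    return between, within

# Hypothetical 0-100 scale scores for patients in three hospitals
data = {
    "A": [80, 85, 90],
    "B": [60, 65, 70],
    "C": [70, 75, 80],
}
between, within = variance_components(data)
icc = between / (between + within)  # share of variance at the hospital level
```

A large intraclass correlation (`icc`) indicates strong patient clustering within hospitals, which is exactly the situation in which ordinary linear regression understates standard errors and inflates t values.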

Results

Three of the six background variables differed significantly across the survey-time groups (Table 1): gender (p < 0.05), self-perceived health (p < 0.05) and number of admissions in the previous 2 years (p < 0.05). Bivariate multilevel regression analysis showed that survey time was significantly and negatively related to three of six patient-reported experience scales: organization (Beta = -0.528, p < 0.05), information about examinations (Beta = -0.631, p < 0.01) and contact with next of kin (Beta = -0.669, p < 0.05) (Table 2).
Table 1

Background variables for the survey-time groups (n = 10,717). Columns show the percentage of all respondents and the time between discharge and posting the questionnaire.

| Variable | All respondents | <= 1 week (n = 1,977) | 1-2 weeks (n = 4,810) | 2-3 weeks (n = 3,290) | > 3 weeks (n = 640) | p(a) |
|---|---|---|---|---|---|---|
| Gender (%) | | | | | | * |
| Male | 46.7 | 44.6 | 46.7 | 47.1 | 51.6 | |
| Female | 53.3 | 55.4 | 53.3 | 52.9 | 48.4 | |
| Age, mean (years) | 60.9 | 60.9 | 60.9 | 60.9 | 60.3 | ns |
| Education (%) | | | | | | ns |
| Primary school and lower secondary school | 34.0 | 34.1 | 34.4 | 33.5 | 33.2 | |
| Upper secondary school | 38.9 | 38.8 | 38.2 | 39.8 | 40.7 | |
| University/college for < 4 years | 18.3 | 19.1 | 18.2 | 18.2 | 17.8 | |
| University/college for >= 4 years | 8.7 | 8.0 | 9.2 | 8.6 | 8.3 | |
| Self-perceived health (%) | | | | | | * |
| Excellent | 5.3 | 4.9 | 5.2 | 5.7 | 5.2 | |
| Very good | 16.2 | 15.6 | 16.8 | 16.1 | 13.8 | |
| Good | 33.5 | 33.0 | 34.5 | 33.3 | 28.6 | |
| Fairly good | 31.3 | 31.8 | 30.1 | 31.8 | 35.6 | |
| Poor | 13.8 | 14.8 | 13.4 | 13.1 | 16.7 | |
| Admission type (%) | | | | | | ns |
| Emergency | 50.5 | 50.5 | 51.0 | 49.5 | 51.6 | |
| Routine | 49.5 | 49.5 | 49.0 | 50.5 | 48.4 | |
| Number of admissions in the previous 2 years (%) | | | | | | * |
| 1 | 46.9 | 44.7 | 48.5 | 46.5 | 42.6 | |
| 2 | 25.4 | 26.2 | 24.8 | 25.7 | 26.8 | |
| 3-5 | 21.2 | 22.3 | 21.0 | 20.4 | 24.2 | |
| 6-10 | 4.5 | 4.6 | 4.1 | 5.0 | 4.4 | |
| > 10 | 2.0 | 2.2 | 1.6 | 2.4 | 2.0 | |

(a) Chi-square tests, except for age (one-way ANOVA). *p < 0.05; ns, not significant

Table 2

Bivariate multilevel regression models: associations between survey time and patient-reported experience scales

| Scale | Beta | p |
|---|---|---|
| Doctor services | -0.355 | ns |
| Nursing services | -0.301 | ns |
| Information about examinations | -0.631 | ** |
| Organization | -0.528 | * |
| Hospital and equipment | -0.234 | ns |
| Contact with next of kin | -0.669 | * |

**p < 0.01; *p < 0.05; ns, not significant

Multilevel regression analysis revealed that several background variables were significantly associated with patient-reported experiences (Table 3). Survey time was significantly related to three of the six patient-reported experience scales: doctor services (Beta = -0.424, p < 0.05), information about examinations (Beta = -0.566, p < 0.05) and organization (Beta = -0.528, p < 0.05). All associations were negative, indicating that the patient-reported experience scores declined with increasing survey time. Patient age, self-perceived health and type of admission were significantly related to all patient-reported experience scales (better experiences with higher age, better health and routine admission), and all other predictors had at least one significant association with patient-reported experiences.
Table 3

Multilevel linear regression models: associations between independent variables and patient-reported experience scales (standardized Betas with significance levels)

| Independent variable | Doctor services | Nursing services | Information about examinations | Organization | Hospital and equipment | Contact with next of kin |
|---|---|---|---|---|---|---|
| Survey time | -0.424* | -0.330 ns | -0.566* | -0.528* | -0.276 ns | -0.497 ns |
| Response time | -1.21*** | -1.21*** | -1.04*** | -0.858*** | -0.644** | -0.432 ns |
| Male (vs. female) | 0.090 ns | 0.808*** | 0.038 ns | 0.406 ns | 0.411* | 0.496 ns |
| Age | 1.89*** | 1.25*** | 1.79*** | 1.67*** | 1.03*** | 2.41*** |
| Self-perceived health | -4.01*** | -3.44*** | -4.70*** | -3.63*** | -2.66*** | -3.64*** |
| Education (vs. primary school): | | | | | | |
| Upper secondary school | 0.221 ns | -0.152 ns | -0.334 ns | -0.588* | -0.533* | -0.252 ns |
| University/college for < 4 years | -0.239 ns | -0.691*** | -0.589* | -1.72*** | -1.46*** | -0.592 ns |
| University/college for >= 4 years | -0.045 ns | -0.613** | -0.405 ns | -1.80*** | -0.772*** | -0.697* |
| Routine admission (vs. emergency) | 1.48*** | 0.844*** | 2.36*** | 2.49*** | 0.455* | 0.875** |
| Number of admissions in the previous 2 years | -0.775*** | -0.793*** | -0.230 ns | -0.473* | -0.656** | -0.543* |

***p < 0.001; **p < 0.01; *p < 0.05; ns, not significant


Discussion

Research on the effect of survey timing on patients' evaluations of health services has produced contradictory results [3]. This study found that patients reported worse experiences on three of the six patient-reported experience scales when the survey time was longer. Individual response time was also negatively related to patient-reported experiences, so regardless of the reason, increasing time since discharge seems to result in poorer patient-experience scores. Studies assessing the effect of survey time need to standardize the data-collection mode in order to avoid confounding timing and mode effects. Almost all studies showing a worsening in patient evaluation over time changed the data-collection mode between the different measurements [6-9,11,14-16]; the timing effects might therefore be related to the mode change rather than being actual timing effects. The current study standardized the data-collection mode and found a significant association between survey time and patient-reported experiences for three of the six scales. This is in line with the aforementioned studies, but contradicts another study from Switzerland in which the data-collection mode was also standardized [12]. However, the Swiss study only included one hospital, a specific patient group and a relatively small sample. Consideration of all of the available data suggests that there is a negative association between survey timing and patient-reported experiences and satisfaction. However, more research is needed, including studies with other population groups, other data-collection modes and a longer time span. The first limitation of the current study relates to the distribution of the survey-time variable. The data-collection protocol meant that most patients were sent a questionnaire 0-3 weeks after discharge, with a maximum of 41 days for individual patients.
It would be useful to have knowledge about the potential effects of a longer time span, such as the effects of a model with surveys sent 1 month versus 2 months after discharge. The second potential limitation of the current study is the response rate. In general, postal surveys have lower response rates than other data-collection modes [3]. The response rate in the current study is in line with other Norwegian national patient experience surveys. Non-response bias occurs when the main variables differ systematically between respondents and non-respondents [22]. In our study, the main question relates to differences between respondent groups, and hence non-response bias is of less concern. However, response rates might be lower in surveys with a longer time between discharge and postal mailing [12,13]. Consequently, the association between survey timing and the response rate is an important consideration when designing data-collection procedures. A Swiss study found that the response rate was significantly lower for the 9-week group [12] but not for the 5-week group, indicating that satisfactory results regarding response rate can be achieved with surveys mailed 0-5 weeks after discharge. A third possible limitation of the study concerns its observational research design, which causes uncertainty related to potential confounding variables. The gold standard for effect research is the randomized trial, in which the aim is for only random variations to exist between study groups and for there to be a direct link between intervention and effect. However, a multicentre randomized trial on this topic would present large practical and methodological challenges, both regarding which time frames to use (intervention) and how to apply sample frames and randomization across hospitals. Another suitable design could have been a longitudinal approach, but that was not possible in this study.
The present study adjusted for the most important sociodemographic predictors of patient experiences and patient satisfaction, reducing the probability of confounding effects related to variables not included. The study was based on data from all hospitals in Norway, and the survey-time variable was registered and analyzed as a continuous variable at the individual level. The former feature increased the external relevance of the study, and the latter gave the opportunity to use survey time in days in the analysis, providing more detailed information than only groups based on, say, weeks or months.

Conclusions

Survey time was significantly and negatively related to three of the six scales for patient-reported experiences with hospitals. Large-scale hospital comparisons of patient-reported experiences should consider survey time as an adjustment factor if it is not standardized across hospitals. The generalizability of the survey-time effect to other topics and other modes is uncertain, but a negative association has been found in most of the other studies referenced, including patient populations in primary care and hospital in- and outpatient care. However, more high-quality research on this topic is needed, including studies with other population groups, other data collection modes and a longer time span.

Competing interests

The author declares that they have no competing interests.

Authors' contributions

O.A.B. planned the paper, carried out the statistical analysis, and drafted and finalized the paper.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2288/12/13/prepub
References (18 in total; first 10 shown)

1. Crow R, Gage H, Hampson S, Hart J, Kimber A, Storey L, Thomas H. The measurement of satisfaction with healthcare: implications for practice from a systematic review of the literature. Health Technol Assess. 2002.
2. Donabedian A. The quality of care. How can it be assessed? JAMA. 1988.
3. Jackson JL, Chamberlin J, Kroenke K. Predictors of patient satisfaction. Soc Sci Med. 2001.
4. Kinnersley P, Stott N, Peters T, Harvey I, Hackett P. A comparison of methods for measuring patient satisfaction with consultations in primary care. Fam Pract. 1996.
5. Brédart A, Razavi D, Robertson C, Brignone S, Fonzo D, Petit JY, de Haes JCJM. Timing of patient satisfaction assessment: effect on questionnaire acceptability, completeness of data, reliability and variability of scores. Patient Educ Couns. 2002.
6. Pettersen KI, Veenstra M, Guldvog B, Kolstad A. The Patient Experiences Questionnaire: development, validity and reliability. Int J Qual Health Care. 2004.
7. Savage R, Armstrong D. Effect of a general practitioner's consulting style on patients' satisfaction: a controlled study. BMJ. 1990.
8. Bowman MA, Herndon A, Sharp PC, Dignan MB. Assessment of the patient-doctor interaction scale for measuring patient satisfaction. Patient Educ Couns. 1992.
9. Lemos P, Pinto A, Morais G, Pereira J, Loureiro R, Teixeira S, Nunes CS. Patient satisfaction following day surgery. J Clin Anesth. 2009.
10. Saal D, Nuebling M, Husemann Y, Heidegger T. Effect of timing on the response to postal questionnaires concerning satisfaction with anaesthesia care. Br J Anaesth. 2004.