
What Are the Priming and Ceiling Effects of One Experience Measure on Another?

Aresh Al Salman1, Benjamin J Kopp1, Jacob E Thomas2, David Ring1, Amirreza Fatehi1.   

Abstract

Patient-reported experience measures have notable ceiling effects, which can hinder efforts to learn and improve. This study tested whether an iterative (Guttman-style) satisfaction questionnaire, combined with instructions intended to give people agency to critique us, primes responses on an ordinal scale and reduces ceiling effects. Among the 161 subjects randomly assigned to complete an iterative satisfaction questionnaire before or after an ordinal scale, there was no difference in mean satisfaction (no priming). The Guttman scale was more normally distributed and had a slightly smaller ceiling effect than the ordinal scale. Iterative satisfaction scales partially mitigate ceiling effects. The absence of priming suggests that attempts to encourage agency and reflection have limited ability to reduce ceiling effects, and alternative approaches should be tested.
© The Author(s) 2020.

Keywords:  ceiling effect; iterative questionnaire; patient satisfaction; priming

Year:  2020        PMID: 33457640      PMCID: PMC7786675          DOI: 10.1177/2374373520951670

Source DB:  PubMed          Journal:  J Patient Exp        ISSN: 2374-3735


Introduction

Patient satisfaction has become increasingly important to physicians in recent years due to the expansion of online reviews and, in some cases, ties to reimbursement (1). A qualitative study using interviews of orthopedic outpatients identified 7 themes related to satisfaction: trust, relatedness (the extent to which a patient feels connected to, respected, or understood by the clinician), expectations, wait time, visit duration, communication effectiveness, and empathy (1). An ordinal scale is commonly used to measure satisfaction, in part due to its simplicity and in part for its lack of overlap with the other themes that influence satisfaction. A large proportion of patients give top scores, perhaps from appreciation, respect, deference, social desirability bias, or generosity (2). Top scores are useful for marketing and reimbursement, but in research, when scores cluster at the top end of a scale, this is referred to as a "ceiling effect." Ceiling and floor effects indicate that a measure is missing important information. Think of it as losing all the variation in the top end of a Gaussian curve; statisticians refer to this as censoring. The ceiling effects of patient-reported experience measures (PREMs) such as satisfaction, perceived empathy, and communication effectiveness are so high that in research they tend to be dichotomized for analysis into "satisfied" or "unsatisfied" based on whether or not they are at the extreme end of the scale (3,4). Something similar is done in marketing with the so-called Net Promoter Score. Every clinician can improve and every visit can be better. To learn more about factors that contribute to patient satisfaction, we are testing methods to lessen the ceiling effect. To date, we have tried several methods (different questions, scale types, and anchors), and still, notable ceiling effects persist (5).
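The ceiling effect and the dichotomization it forces can be quantified directly. A minimal sketch, using illustrative scores rather than study data, assuming an 11-point (0-10) satisfaction scale:

```python
# Quantify a ceiling effect on an 11-point (0-10) satisfaction scale.
# Scores below are illustrative, not data from this study.
scores = [10, 10, 9, 10, 8, 10, 10, 9, 10, 7, 10, 10, 9, 10, 10]

# Ceiling effect: the proportion of responses at the scale maximum.
ceiling = scores.count(10) / len(scores)

# Common analytic workaround: dichotomize into "completely satisfied"
# (top score) vs everyone else, as described for PREMs above.
dichotomized = [1 if s == 10 else 0 for s in scores]

print(f"ceiling effect: {ceiling:.0%}")
print(f"satisfied (top score): {sum(dichotomized)}/{len(scores)}")
```

The information lost is visible immediately: once most responses sit at the maximum, the dichotomized variable carries nearly all the usable variance.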
Additional ideas include adding text to encourage people to give constructive feedback by providing an introduction with information regarding anonymity and the desire to collect feedback about the visit to help future patients (6) (agency, empowerment). Another idea is to use a Guttman-type scale (7): a measurement instrument in which a series of questions asks people to agree or disagree with progressively more extreme statements (Appendix 1). For instance, the first node of the Guttman scale can ask people whether they noticed even one small thing that could improve, which should nearly always be the case and might help add spread to the scores. The iterative nature of a Guttman scale might also lead to greater reflection about one's satisfaction with the visit and how things might be improved. An iterative scale combined with an attempt to give people agency might influence the scores on the ordinal scale via a priming effect. The iterative opportunities to reflect and the sense that one is helping the team improve might induce a less superlative score. Priming is a psychological phenomenon whereby the immediately preceding stimulus influences subsequent thoughts and behavior (8,9). Studies of priming effects measure responses among people exposed to various stimuli (10). The effect can be positive or negative, depending on the priming. The influence of priming is not limited to behavior (11). There is evidence that priming can influence thoughts, norms, and values (12) and even cognitive performance (eg, in a game of Trivial Pursuit) (13). The aim of this study was 3-fold: to look for priming of one satisfaction measure on another; to attempt to diminish the ceiling effect; and to determine personal factors associated with satisfaction.
Our primary null hypothesis is that there is no difference in patient satisfaction rated on a numerical rating scale (NRS) when given before or after completion of a Guttman-style (iterative) satisfaction rating introduced and worded to increase agency. We also addressed the secondary null hypotheses that there is no difference in ceiling or floor effects of patient satisfaction rating on an NRS compared to a Guttman-style questionnaire and that there are no sociodemographic or psychological factors associated with satisfaction ratings.

Material and Methods

This cross-sectional study was performed at several orthopedic offices in an urban area in the United States during a 2-month period. Using an IRB-approved protocol, all new and return patients seeking orthopedic care were invited to enroll by a researcher not involved in patient care. Inclusion criteria were English speaking and aged 18 to 89 years. At the end of the visit, patients were invited to complete several surveys on tablets using HIPAA-compliant REDCap electronic data capture tools. Accepting the invitation to enroll and answering the questionnaires implied consent. Patients were randomly assigned 1:1 by a computer to complete the NRS-satisfaction questionnaire before or after the Guttman-type satisfaction scale. Researchers assured them that all questionnaires were anonymous and participation was completely voluntary. Patients were allowed to stop at any moment if they felt uncomfortable. We enrolled 173 patients. Eleven patients provided incomplete data due to connectivity issues or misunderstandings. One patient found the questionnaires too personal and did not feel comfortable completing them. This left us with 161 records for analysis. Patients completed a demographic survey (sex, age, new or return patient, marital status, level of education, income, work status, insurance status, and language spoken). There were 16 treating clinicians; the 3 who had more than 20 ratings were analyzed individually, and the remaining 13 were grouped as "other clinicians." The clinician was asked to provide the diagnosis at the end of the visit. Patients also completed the 4-question version of the Pain Catastrophizing Scale (PCS-4) to measure worst-case thinking in response to nociception (14) and the 5-question version of the Short Health Anxiety Inventory (SHAI-5) (15) as a measure of heightened illness concerns (the sense that one has a serious illness in spite of reassurance to the contrary).
Participants answered an average of 23 questions, completed in about 15 minutes. Our main response variables were the NRS and Guttman satisfaction scales. The NRS satisfaction questionnaire is an 11-point ordinal rating of satisfaction with care ranging from 0 (completely dissatisfied, can't imagine a worse visit) to 10 (completely satisfied, can't imagine a better visit). A Guttman scale uses a series of questions seeking agreement or disagreement with progressively more extreme statements. Respondents, in theory, agree with every statement up to a certain point in the series. The point at which they stop agreeing is converted to a numeric rating ranging from 0 to 9. For the statistical comparisons, the Guttman scores were linearly rescaled to the same range as the NRS (0-10), so the two measures share upper and lower limits.
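The scoring and rescaling just described can be sketched as follows. This is a minimal illustration assuming a 10-item series scored 0-9 and a simple linear rescaling; the function names and the exact scoring rule are assumptions, not the study's published implementation:

```python
def guttman_score(agreements):
    """Score a Guttman-style series: the respondent agrees with each
    progressively more extreme statement until a first disagreement.
    `agreements` is an ordered list of booleans, one per item.
    Returns the index of the first disagreement (0-9 for 10 items)."""
    for i, agreed in enumerate(agreements):
        if not agreed:
            return i
    return len(agreements) - 1  # agreed with every statement

def rescale_to_nrs(score, old_max=9, new_max=10):
    """Linearly rescale a 0-9 Guttman score to the 0-10 NRS range
    so the two measures share upper and lower limits."""
    return score * new_max / old_max

# A respondent who agrees with the first 4 statements, then stops:
raw = guttman_score([True, True, True, True, False, False,
                     False, False, False, False])
print(raw, round(rescale_to_nrs(raw), 1))  # 4 on 0-9 -> 4.4 on 0-10
```

Any monotone mapping would preserve the ordering of respondents; the linear form is shown only because it keeps the endpoints of the two scales aligned.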

Patient Characteristics

One hundred sixty-one patients were included in this study, including 71 men (45%), with a mean age of 47 ± 17 years (range 18-89; Table 1).
Table 1.

Patient and Clinical Characteristics.a

Variables                          N = 161
Age in years                       47 ± 17 (18-89)
Women                              90 (55)
New patient                        81 (50)
Trauma                             79 (49)
Race/ethnicity
  White                            97 (60)
  Latino/Hispanic                  30 (18)
  African American                 18 (11)
  Asian                            10 (6)
  Other                            8 (5)
Marital status
  Married/unmarried couple         70 (43)
  Single                           65 (40)
  Divorced/separated/widowed       28 (17)
Level of education
  High school or less              32 (20)
  Some college                     39 (24)
  College graduate                 57 (35)
  Master's degree or more          35 (21)
Work status
  Working                          102 (63)
  Retired                          28 (17)
  Unemployed/disabled/student      33 (20)
Income
  <$25,000                         21 (13)
  $25,000-$50,000                  23 (14)
  $50,000-$75,000                  28 (18)
  >$75,000                         91 (55)
Insurance
  Private insurance                95 (58)
  Medicare                         29 (18)
  Other or no insurance            39 (24)

a Continuous variables as mean ± SD (range); discrete variables as number (percentage).

Statistical Analysis

A priori power calculation indicated that 151 patients would provide 90% power to detect a difference in medians of 0.6 with an SD of 1.5 (effect size of 0.4) in NRS satisfaction pre- or post-Guttman using a 2-tailed Mann-Whitney U test with α at 0.05. We conceptualized ceiling effects for our sample as data tightly clustered around the upper range of the NRS. Thus, a scale yielding greater variability and a lower central tendency would mitigate the ceiling effect, to some extent. We compared variability in the NRS and Guttman scales with the Levene test of homogeneity of variance and compared central tendency using the Mann-Whitney rank sum test. To further probe potential variance in Guttman and NRS scores by sociodemographic variables, psychological factors, and clinician, 2 multilevel mixed-effect regression models were fit to the data: model 1 with Guttman score as the outcome variable and model 2 with NRS score as the outcome variable. Because Guttman and NRS scores were not normally distributed, their log transformations were used. Both models contained age, sex, race/ethnicity, education, income, PCS-4, and SHAI-5 as level-1 predictors and clinician as a level-2 predictor.
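The two scale comparisons can be reproduced with standard tools. The sketch below runs the Levene and Mann-Whitney tests on simulated scores (not the study sample) whose means and SDs are chosen to resemble the reported summary statistics:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated scores (not study data): NRS clustered near the ceiling,
# Guttman-style scores more spread out, both clipped to the 0-10 range.
nrs = np.clip(rng.normal(9.3, 1.1, 161), 0, 10)
guttman = np.clip(rng.normal(8.2, 2.1, 161), 0, 10)

# Levene test of homogeneity of variance between the two scales.
lev_stat, lev_p = stats.levene(nrs, guttman)

# Mann-Whitney rank sum test comparing central tendency.
u_stat, mw_p = stats.mannwhitneyu(nrs, guttman, alternative="two-sided")

print(f"Levene: W={lev_stat:.1f}, p={lev_p:.4f}")
print(f"Mann-Whitney: U={u_stat:.0f}, p={mw_p:.4f}")
```

With spreads this different and n = 161 per group, both tests reject, mirroring the direction of the study's reported result; the exact statistics depend on the simulated draw.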

Results

There was no difference in NRS completed after a Guttman scale (mean 9.4, median 10, SD 1.1) or before (mean 9.3, median 10, SD 1.2; z = 0.52, P = .60). After scaling, the mean Guttman score was 8.2 (median = 8.8, SD = 2.1), and the mean NRS score was 9.3 (median = 10, SD = 1.1). The Levene test was significant (F[1,320] = 102.4, P < .01), rejecting the null hypothesis that the variances are equal. The Mann-Whitney rank sum test was also significant (z = 9.5, P < .01), rejecting the null hypothesis that the central tendencies are equal. Accounting for potential confounding using multilevel modeling, we found no variation in NRS or Guttman scores by sociodemographic variables, psychological factors, or clinician. None of the level-1 predictors were significant in either model.

Discussion

To improve experience, it would be helpful to have PREMs that do not overlap and that have a limited ceiling effect. To date, we have tried a variety of methods to lessen this ceiling effect in orthopedic specialty care. In this study, we administered a Guttman-style (iterative) questionnaire, with a lead-in that welcomes a constructive critique, to test the hypothesis that this approach can limit ceiling effects, in part through priming effects. We found no priming effect of one satisfaction measure on another, less ceiling effect with an iterative (Guttman-style) satisfaction questionnaire, and no psychological or sociodemographic factors associated with satisfaction. This study should be considered in light of its limitations. First, our patients were enrolled from a single large urban area, creating a homogenous group of mostly insured, employed, relatively high-income Americans. Furthermore, we only enrolled patients visiting hand or sports specialists, which might also limit generalizability to other regions and subspecialties. The ceiling effect seems just as strong in more diverse settings (2,3,16-18). Second, although the set of questionnaires was relatively limited, there may have been some questionnaire fatigue (8,19-20). Third, 3 of our surgeons were responsible for over 50% of the visits, which may limit the generalizability of the findings and may also limit the ability to determine the influence of the provider, since there were fewer opportunities for outlier performance. The finding that the iterative score and agency-oriented introduction did not affect ratings on the NRS suggests that there is no priming effect from these variations. There is some evidence of priming in a single specialty visit. For instance, mental health questionnaires framed in the positive have a positive influence on patient-reported outcome measures (21), and positive priming in the language of questionnaires also increased grip strength (10).
We did not observe a similar effect of our attempts to engage people in helping us improve. Our thinking is that our attempts had limited influence on social desirability bias: the generosity induced by the personal experience of having been rated by others. One study of desire to serve the public found variation by age group, religion, and social desirability bias (22). New ideas are needed for how to reduce social desirability bias when completing questionnaires intended to inform improvement. The finding that an iterative test has less of a ceiling effect and greater variation indicates that there are steps one can take to develop experience measures that are more informative. Prior attempts have had limited or no influence on ceiling effects, including shortening questionnaires to improve response rate (23), removing neutral responses (24,25), making the top statements more extreme (26), changing the format of the rating scale (27), and statements regarding anonymity and the need for feedback in order to help future patients (28). We do not accept a strong ceiling effect in other measures, so we should not accept it for experience measures. Using an iterative scale seems to work better than other methods, but there is still a substantial ceiling effect that we would like to reduce. Our sense is that we somehow need to redirect social desirability bias into generosity with feedback. That may only occur when people have more routine good experiences providing constructive feedback. One concept we are investigating is avoiding numerical or hierarchical ratings altogether and simply collecting people's verbatim instructions for how we can improve, analyzing them with natural language processing to try to quantify experience.
The observation that satisfaction measures do not vary according to sociodemographic variables, psychological factors, or clinician is consistent with prior work (17,29-33) demonstrating the complexity of patient experience measures and the difficulty of determining modifiable or non-modifiable factors associated with satisfaction. We believe that part of this difficulty is due to the ceiling effect. There must be some patient physical, mental, or social health factors, or personality, experience, or cultural factors that influence satisfaction, but they are likely masked by a ceiling effect. We know that clinician specialty matters, with orthopedic surgeons having some of the lowest communication effectiveness scores (34), but we do not know the degree to which this is due to surgeon characteristics versus the nature of the illnesses they are seeing. In conclusion, administering a Guttman scale questionnaire reduces ceiling effects somewhat, suggesting some promise for an iterative approach. Notable ceiling effects remain, and the lack of priming of one questionnaire by another suggests we are not influencing social desirability bias. Research to identify effective methods for helping people feel positively about giving constructive feedback seems important because changes to the structure of the scale are having limited impact.
Supplemental Material: Guttman_appendix for "What Are the Priming and Ceiling Effects of One Experience Measure on Another?" by Aresh Al Salman, Benjamin J Kopp, Jacob E Thomas, David Ring, and Amirreza Fatehi in Journal of Patient Experience.
References (25 in total; 10 listed)

1.  Influence of Priming on Patient-Reported Outcome Measures: A Randomized Controlled Trial.

Authors:  Femke M A P Claessen; Jos J Mellema; Nicky Stoop; Bart Lubberts; David Ring; Rudolf W Poolman
Journal:  Psychosomatics       Date:  2015-10-01       Impact factor: 2.386

2.  Do previsit expectations correlate with satisfaction of new patients presenting for evaluation with an orthopaedic surgical practice?

Authors:  Michiel G J S Hageman; Jan Paul Briët; Jeroen K Bossen; Robin D Blok; David C Ring; Ana-Maria Vranceanu
Journal:  Clin Orthop Relat Res       Date:  2014-10-01       Impact factor: 4.176

3.  Achieving and sustaining profound institutional change in healthcare: case study using neo-institutional theory.

Authors:  Fraser Macfarlane; Cathy Barton-Sweeney; Fran Woodard; Trisha Greenhalgh
Journal:  Soc Sci Med       Date:  2013-01-12       Impact factor: 4.634

4.  The relation between perception and behavior, or how to win a game of trivial pursuit.

Authors:  A Dijksterhuis; A van Knippenberg
Journal:  J Pers Soc Psychol       Date:  1998-04

5.  Affect, cognition, and awareness: affective priming with optimal and suboptimal stimulus exposures.

Authors:  S T Murphy; R B Zajonc
Journal:  J Pers Soc Psychol       Date:  1993-05

6.  Patient Satisfaction in an Outpatient Hand Surgery Office: A Comparison of English- and Spanish-Speaking Patients.

Authors:  Mariano E Menendez; Markus Loeffler; David Ring
Journal:  Qual Manag Health Care       Date:  2015 Oct-Dec       Impact factor: 0.926

7.  Determinants of patient satisfaction in a large, municipal ED: the role of demographic variables, visit characteristics, and patient perceptions.

Authors:  E D Boudreaux; R D Ary; C V Mandry; B McCabe
Journal:  Am J Emerg Med       Date:  2000-07       Impact factor: 2.469

8.  Defining and measuring patient satisfaction with medical care.

Authors:  J E Ware; M K Snyder; W R Wright; A R Davies
Journal:  Eval Program Plann       Date:  1983

9.  Identification of factors influencing patient satisfaction with orthopaedic outpatient clinic consultation: A qualitative study.

Authors:  Stuart Waters; Stephen J Edmondston; Piers J Yates; Daniel F Gucciardi
Journal:  Man Ther       Date:  2016-06-04

10.  Effect of questionnaire length, personalisation and reminder type on response rate to a complex postal survey: randomised controlled trial.

Authors:  Shannon Sahlqvist; Yena Song; Fiona Bull; Emma Adams; John Preston; David Ogilvie
Journal:  BMC Med Res Methodol       Date:  2011-05-06       Impact factor: 4.615

Cited by (3 in total)

1.  Charting a path to high-quality end-of-life care for children with cancer.

Authors:  Prasanna Ananth; Joanne Wolfe; Emily E Johnston
Journal:  Cancer       Date:  2022-08-25       Impact factor: 6.921

2.  CORR Insights®: Which Factors Are Associated With Satisfaction With Treatment Results in Patients With Hand and Wrist Conditions? A Large Cohort Analysis.

Authors:  David C Ring
Journal:  Clin Orthop Relat Res       Date:  2022-01-11       Impact factor: 4.755

3.  Do Unhelpful Thoughts or Confidence in Problem Solving Have Stronger Associations with Musculoskeletal Illness?

Authors:  Ayane Rossano; Aresh Al Salman; David Ring; J Mica Guzman; Amirreza Fatehi
Journal:  Clin Orthop Relat Res       Date:  2022-02-01       Impact factor: 4.755

