
The effect of item screeners on the quality of patient survey data: a randomized experiment of ambulatory care experience measures.

Hector P Rodriguez, Ted von Glahn, Angela Li, William H Rogers, Dana Gelb Safran.

Abstract

BACKGROUND: The use of item screeners is viewed as an essential feature of quality survey design because screeners direct only 'qualified' respondents to answer questions that apply to a subset of the sample. However, empirical evidence supporting this view is scant.
OBJECTIVE: This study compares data quality resulting from the administration of ambulatory care experience measures that use item screeners versus tailored 'not applicable' options in response scales.
METHODS: Patients from the practices of 367 primary care physicians in 65 medical groups were randomly assigned to receive one of two versions of a well-validated ambulatory care experience survey. Respondents (n = 2240) represent random samples of active established patients from participating physicians' panels. The 'screener' survey version included item screeners for five test items; the 'no screener' version included tailored 'not applicable' options in the response scales instead of screeners. The main outcome measures were indicators of data quality for the two item versions, including mean item scores, the level of missing values, the outgoing patient sample sizes needed to achieve adequate medical group-level reliability, and the relative ranking of medical groups.
RESULTS: Mean survey item scores generally did not differ by version. There were consistently fewer respondents to the 'screener' versions than to the 'no screener' versions. However, because the 'screener' versions improved measurement precision, four of the five items required smaller outgoing patient samples to achieve adequate medical group-level reliability than their 'no screener' counterparts. The relative ranking of medical groups did not differ by item version.
CONCLUSION: Screeners appear to reduce noise by ensuring that respondents who are not 'qualified' to answer a question are screened out rather than providing unreliable responses. The increased precision of the 'screener' versions appears to more than offset their higher item non-response rates compared with the 'no screener' versions.
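The sample-size finding rests on the link between an item's single-respondent reliability and its group-level reliability. Below is a minimal Python sketch of the Spearman-Brown calculation commonly used for this kind of planning; the abstract does not report the exact method or reliability values used in the study, so the function and the example numbers are illustrative assumptions only.

```python
import math

def required_n(target_reliability: float, single_respondent_reliability: float) -> int:
    """Solve the Spearman-Brown relation R_n = n*r / (1 + (n - 1)*r) for n,
    i.e. the number of respondents per medical group needed to reach a
    target group-level reliability R_n given single-respondent reliability r."""
    r = single_respondent_reliability
    R = target_reliability
    n = (R * (1 - r)) / (r * (1 - R))
    return math.ceil(n)

# Hypothetical illustration: a noisier item (r = 0.03) needs about 76
# respondents per group to reach group-level reliability of 0.70, while a
# more precise item (r = 0.05) needs only about 45.
print(required_n(0.70, 0.03))  # -> 76
print(required_n(0.70, 0.05))  # -> 45
```

In this framing, screening out 'unqualified' respondents raises the single-respondent reliability r, which can lower the required sample size even though fewer respondents answer each screened item.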


Year:  2009        PMID: 22273089     DOI: 10.2165/01312067-200902020-00009

Source DB:  PubMed          Journal:  Patient        ISSN: 1178-1653            Impact factor:   3.883


References (7 in total)

1.  Special issues addressed in the CAHPS survey of Medicare managed care beneficiaries. Consumer Assessment of Health Plans Study.

Authors:  J A Schnaier; S F Sweeny; V S Williams; B Kosiak; J S Lubalin; R D Hays; L D Harris-Kojetin
Journal:  Med Care       Date:  1999-03       Impact factor: 2.983

2.  The use of cognitive testing to develop and evaluate CAHPS 1.0 core survey items. Consumer Assessment of Health Plans Study.

Authors:  L D Harris-Kojetin; F J Fowler; J A Brown; J A Schnaier; S F Sweeny
Journal:  Med Care       Date:  1999-03       Impact factor: 2.983

3.  Case-mix adjustment of the National CAHPS benchmarking data 1.0: a violation of model assumptions?

Authors:  M N Elliott; R Swartz; J Adams; K L Spritzer; R D Hays
Journal:  Health Serv Res       Date:  2001-07       Impact factor: 3.402

4.  Paying for performance: implementing a statewide project in California.

Authors:  Cheryl L Damberg; Kristiana Raube; Tom Williams; Stephen M Shortell
Journal:  Qual Manag Health Care       Date:  2005 Apr-Jun       Impact factor: 0.926

5.  Evaluating patients' experiences with individual physicians: a randomized trial of mail, internet, and interactive voice response telephone administration of surveys.

Authors:  Hector P Rodriguez; Ted von Glahn; William H Rogers; Hong Chang; Gary Fanjiang; Dana Gelb Safran
Journal:  Med Care       Date:  2006-02       Impact factor: 2.983

6.  Measuring patients' experiences with individual primary care physicians. Results of a statewide demonstration project.

Authors:  Dana Gelb Safran; Melinda Karp; Kathryn Coltin; Hong Chang; Angela Li; John Ogren; William H Rogers
Journal:  J Gen Intern Med       Date:  2006-01       Impact factor: 5.128

7.  Patient samples for measuring primary care physician performance: who should be included?

Authors:  Hector P Rodriguez; Ted von Glahn; Hong Chang; William H Rogers; Dana Gelb Safran
Journal:  Med Care       Date:  2007-10       Impact factor: 2.983

Cited by (1 in total)

1.  Examining multiple sources of differential item functioning on the Clinician & Group CAHPS® survey.

Authors:  Hector P Rodriguez; Paul K Crane
Journal:  Health Serv Res       Date:  2011-08-11       Impact factor: 3.402

