Hector P Rodriguez1, Ted von Glahn, Angela Li, William H Rogers, Dana Gelb Safran

1 Department of Health Services, School of Public Health and Community Medicine, University of Washington, Seattle, Washington, USA; 2 Pacific Business Group on Health, San Francisco, California, USA; 3 Blue Cross Blue Shield of Massachusetts, Boston, Massachusetts, USA; 4 Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, Massachusetts, USA
Abstract
BACKGROUND: The use of item screeners is viewed as an essential feature of quality survey design because only respondents who are 'qualified' to answer questions that apply to a subset of the sample are directed to answer. However, empirical evidence supporting this view is scant. OBJECTIVE: This study compares the data quality resulting from the administration of ambulatory care experience measures that use item screeners versus tailored 'not applicable' options in response scales. METHODS: Patients from the practices of 367 primary care physicians in 65 medical groups were randomly assigned to receive one of two versions of a well validated ambulatory care experience survey. Respondents (n = 2240) represent random samples of active established patients from participating physicians' panels. The 'screener' survey version included item screeners for five test items, and the 'no screener' version included tailored 'not applicable' options in response scales instead of using screeners. The main outcome measures were the data quality resulting from the two item versions, including mean item scores, the level of missing values, the outgoing patient sample sizes needed to achieve adequate medical group-level reliability, and the relative ranking of medical groups. RESULTS: Mean survey item scores generally did not differ by version. There were consistently fewer respondents to the 'screener' versions than to the 'no screener' versions. However, because the 'screener' versions improved measurement precision, smaller outgoing patient samples were needed to achieve adequate medical group-level reliability for four of the five items than for the 'no screener' version. The relative ranking of medical groups did not differ by item version. CONCLUSION: Screeners appear to reduce noise by ensuring that respondents who are not 'qualified' to answer a question are screened out instead of providing unreliable responses.
The increased precision resulting from 'screener' versions appears to more than offset the higher item non-response rates compared with 'no screener' versions.
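The trade-off the abstract describes (fewer respondents per item, but higher per-respondent precision) is typically quantified with the Spearman-Brown prophecy formula, which relates the number of patients sampled per medical group to group-level reliability. The sketch below illustrates that relationship; the intraclass correlation (ICC) value and the 0.70 reliability target are illustrative assumptions, not figures from this study.

```python
import math

def patients_needed(icc, target_reliability=0.70):
    """Minimum patients per group to reach a target group-level
    reliability, via the Spearman-Brown prophecy formula:
        R_n = n * ICC / (1 + (n - 1) * ICC)
    solved for n. The ICC here is the share of score variance
    attributable to between-group differences."""
    n = target_reliability * (1 - icc) / (icc * (1 - target_reliability))
    return math.ceil(n)

# Illustrative only: a more precise item (higher ICC) needs a
# smaller outgoing sample to reach the same reliability target.
print(patients_needed(0.05))  # lower-precision item
print(patients_needed(0.08))  # higher-precision item
```

This is why, in the study's results, the screener versions could require smaller outgoing patient samples despite screening some respondents out: reduced noise raises the effective ICC, which lowers the n needed for a fixed reliability target.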