| Literature DB >> 30106973 |
Marc Höglinger, Ben Jann.
Abstract
Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm breaking behavior such as shoplifting, non-voting, or tax evasion may thus be subject to considerable misreporting. To mitigate such response bias, various indirect question techniques, such as the randomized response technique (RRT), have been proposed. We evaluate the viability of several popular variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents' self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study has been implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results from two validation designs indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT we do observe a reduction of false negatives. At the same time, however, there is a non-ignorable increase in false positives; a flaw that previous evaluation studies relying on comparative or aggregate-level validation could not detect. Overall, none of the evaluated indirect techniques outperformed conventional direct questioning. Furthermore, our study demonstrates the importance of identifying false negatives as well as false positives to avoid false conclusions about the validity of indirect sensitive question techniques.
Year: 2018 PMID: 30106973 PMCID: PMC6091935 DOI: 10.1371/journal.pone.0201770
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
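The crosswise-model RRT evaluated in the abstract asks respondents whether their answer to the sensitive question is the same as their answer to an innocuous question with known prevalence, so no individual answer is revealing. A minimal sketch of the standard crosswise estimator is below; this is an illustrative simulation, not the paper's implementation, and the prevalence values, seed, and function names are assumptions for demonstration.

```python
import random

def cm_estimate(responses, p):
    """Estimate sensitive-trait prevalence pi from crosswise-model responses.

    responses: list of 0/1, where 1 means "both answers are the same"
    p: known prevalence of the non-sensitive question (must differ from 0.5)

    Identity: lambda = pi*p + (1 - pi)*(1 - p)
    =>        pi = (lambda + p - 1) / (2*p - 1)
    """
    lam = sum(responses) / len(responses)
    return (lam + p - 1) / (2 * p - 1)

# Simulated survey: assumed true prevalence 0.30 of the sensitive trait,
# non-sensitive question (e.g. birthday in a given period) with p = 0.20.
random.seed(1)
pi_true, p = 0.30, 0.20
responses = []
for _ in range(100_000):
    sensitive = random.random() < pi_true       # holds the sensitive trait?
    nonsensitive = random.random() < p          # "yes" on innocuous question?
    responses.append(1 if sensitive == nonsensitive else 0)

print(round(cm_estimate(responses, p), 2))      # close to the true 0.30
```

Note that the estimator recovers the prevalence only in expectation and only if respondents comply with the instructions; the paper's individual-level validation is precisely about detecting the false positives and false negatives that this aggregate identity cannot reveal.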
Descriptive statistics of the sample.
| Variable | Category | Percent |
|---|---|---|
| Gender | male | 49.9 |
| | female | 50.1 |
| Age | 18–24 | 24.3 |
| | 25–29 | 27.0 |
| | 30–34 | 18.5 |
| | 35–39 | 10.7 |
| | 40–49 | 10.1 |
| | 50 or older | 9.3 |
| Education | college degree | 54.0 |
| | some college | 34.2 |
| | high school or other | 11.8 |
| Labor market status | employed | 54.1 |
| | self-employed | 12.7 |
| | unemployed | 11.3 |
| | student | 13.0 |
| | other | 8.9 |
| Prior MTurk studies | 0 | 6.8 |
| | 1–9 | 19.3 |
| | 10–99 | 32.9 |
| | 100–999 | 30.2 |
| | 1000 or more | 10.8 |
| Current location | at home | 85.4 |
| | at work | 9.9 |
| | other | 4.7 |
Labor market status recoded from multiple-response data (prioritizing categories in the order listed in the table); N = 6,152.
Sensitive questions.
| Item | Wording |
|---|---|
| Shoplifting | “Have you ever intentionally taken something from a store without paying for it?” |
| Tax evasion | “Have you ever provided misleading or incorrect information on your tax return?” |
| Non-voting | “Did you vote in the 2012 US presidential election?” |
| Cheating in dice game | Prediction game: “In the $2 dice task at the beginning of this survey: Did you honestly report whether your prediction of the dice roll was right?” |
| | Roll-a-six game: “In the $2 dice game at the beginning of this survey: Did you honestly report whether you actually rolled a 6?” |
* Reverse coded for the purpose of analysis.
Number of observations by dice game variant and sensitive question technique.
| Technique | Prediction game | Roll-a-six game |
|---|---|---|
| Direct questioning (DQ) | 387 | 382 |
| Crosswise-model RRT (CM) | 1168 | 1145 |
| Unrelated-question RRT (UQ) | 760 | 780 |
| Forced-response RRT (FR) | 759 | 771 |
Fig 1. Comparative validation of sensitive question techniques.
Point estimates and 95% confidence intervals in percent.
Fig 2. Aggregate-level validation of sensitive question techniques.
Point estimates and 95% confidence intervals in percent.
Fig 3. Individual-level validation of sensitive question techniques.
Point estimates and 95% confidence intervals in percent. Negative false positive rates were set to zero for the computation of the correct classification rate.