Craig Fowler¹, Jian Jiao², Margaret Pitts².
Abstract
Academics are increasingly turning to crowdsourcing platforms to recruit research participants. Their endeavors have benefited from a proliferation of studies attesting to the quality of crowdsourced data or offering guidance on managing specific challenges associated with doing crowdsourced research. Thus far, however, relatively little is known about what it is like to be a participant in crowdsourced research. Our analysis of almost 1400 free-text responses provides insight into the frustrations encountered by workers on one widely used crowdsourcing site: Amazon's MTurk. Some of these frustrations stem from inherent limitations of the MTurk platform and cannot easily be addressed by researchers. Many others, however, concern factors that are directly controllable by researchers and that may also be relevant for researchers using other crowdsourcing platforms such as Prolific or CrowdFlower. Based on participants' accounts of their experiences as crowdsource workers, we offer recommendations researchers might consider as they seek to design online studies that demonstrate consideration for respondents and respect for their time, effort, and dignity.
Keywords: Crowdsourcing; Digital methods; Ethics; Internet; Job satisfaction; Online research; Participants
Year: 2022 PMID: 36018485 PMCID: PMC9415248 DOI: 10.3758/s13428-022-01955-9
Source DB: PubMed Journal: Behav Res Methods ISSN: 1554-351X
(Sub)themes of Turking frustrations
| (Sub)theme | Number of references | Percentage |
|---|---|---|
| Difficulties with survey design and accessibility | | |
| Structural and visual issues | 384 | 16.62% |
| Did not have a progress or completion bar | 117 | 5.06% |
| Should be shorter | 108 | 4.68% |
| Should allow more time to complete | 85 | 3.68% |
| Should be proofread | 55 | 2.38% |
| Survey accessibility | 74 | 3.20% |
| Should make it more interesting and engaging | 51 | 2.21% |
| Frustrations with question design | | |
| Repetition of questions | 218 | 9.44% |
| Question quality | 132 | 5.71% |
| Providing written responses | 111 | 4.81% |
| Store answers to common questions in profile | 57 | 2.47% |
| Some questions should not be asked | 40 | 1.73% |
| Fair pay for fair work | | |
| Did not pay well | 277 | 11.99% |
| Did not pay for qualification questions or failed attention checks | 25 | 1.08% |
| Should indicate how long payment will take and pay quicker | 5 | 0.22% |
| Frustrations due to qualification checks, attention checks, and confirmation codes | | |
| Troubles with or about confirmation codes | 111 | 4.81% |
| Troubles with or about qualification checks | 83 | 3.59% |
| Annoying attention checks | 60 | 2.60% |
| Should be more careful when rejecting work | 15 | 0.65% |
| Desire for clear, accurate, and convenient communication between workers and researchers | | |
| Clarity and accuracy of the HIT | 232 | 10.04% |
| Should enable more convenient communication between requestors and workers | 17 | 0.74% |
| No frustrations | | |
| No frustrations | 53 | 2.29% |
The bolded entries reflect the percentages for the overarching category.