Peter Washington, Haik Kalantarian, Qandeel Tariq, Jessey Schwartz, Kaitlyn Dunlap, Brianna Chrisman, Maya Varma, Michael Ning, Aaron Kline, Nathaniel Stockham, Kelley Paskov, Catalin Voss, Nick Haber, Dennis Paul Wall.
Abstract
BACKGROUND: Obtaining a diagnosis of neuropsychiatric disorders such as autism can require waiting times that exceed a year and can be prohibitively expensive. Crowdsourcing approaches may provide a scalable alternative that accelerates general access to care and permits underserved populations to obtain an accurate diagnosis.
Keywords: autism; biomedical data science; citizen healthcare; crowdsourcing; diagnosis; diagnostics; digital health; human-computer interaction; mechanical turk; mobile health; neuropsychiatric conditions; pediatrics
Year: 2019 PMID: 31124463 PMCID: PMC6552453 DOI: 10.2196/13668
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Summary of the videos used in all three studies.
| Studies | Video length, mean (range) | Child age (years), mean (range) | Female, % | Children with autism, % |
| 1, 2, 3 | 3 min 2 s (49 s to 6 min 39 s) | 3.2 (2-5) | 50 | 50 |
| 2 | 2 min 9 s (1 min 7 s to 4 min 40 s) | 2.9 (2-5) | 50 | 50 |
Figure 1An example question set on the paid crowdsourcing Mechanical Turk Study 1 task. Workers answered the same set of questions for 10 separate videos.
Figure 2Two questions on the paid crowdsourcing Amazon Mechanical Turk Study 2 multiple-choice tasks. Workers were asked to answer 31 multiple-choice questions for a single video per task. There were 10 available identical tasks with different videos.
Figure 3(A) The primary interface for the "citizen healthcare" public crowdsourcing study. Citizen healthcare providers watch a short video and then classify the video as "Autism" or "Not Autism." (B) After rating each video in the "citizen healthcare" public crowdsourcing study, users are asked a single demographic question about themselves. This allows us to collect demographic information without overwhelming the user, which would otherwise lead to lower participant retention rates. (C) At the end of the "citizen healthcare" public crowdsourcing study, users are informed of their score and the time they spent rating. They then have the option to play the game again and share their result on Facebook or Twitter.
Summary demographics of the crowd workers in Study 1 (N=54).
| Demographic | Value |
| Age, mean (SD) | 36.4 (9.0) |
| With autism, n (%) | 3 (5.6) |
| Is a parent, n (%) | 25 (46.3) |
| Female, n (%) | 20 (37.0) |
| Number of known affected children, mean (SD) | 0.7 (0.9) |
| Number of affected families, mean (SD) | 0.4 (0.7) |
| Number of affected friends, mean (SD) | 1.3 (1.2) |
| Number of total known affected people, mean (SD) | 2.3 (3.3) |
Ratings labeled as “Autism” across all 54 paid crowd workers in Study 1.
| Video number | Ratings labeled as “Autism”, % | True rating |
| 1 | 87 | Autism |
| 2 | 6 | Not autism |
| 3 | 2 | Not autism |
| 4 | 44 | Autism |
| 5 | 81 | Autism |
| 6 | 2 | Not autism |
| 7 | 39 | Autism |
| 8 | 49 | Not autism |
| 9 | 70 | Autism |
| 10 | 2 | Not autism |
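One way to read the table above is as a majority-vote classifier: call a video "Autism" when at least half of the 54 workers labeled it so, and compare against the true rating. The sketch below uses the percentages and true labels from the table; the 50% voting threshold is an illustrative assumption, not a method from the study.

```python
# Majority-vote reading of the Study 1 ratings table (videos 1-10).
# Percentages and true labels are copied from the table; the >=50%
# decision threshold is an illustrative assumption.
percent_autism = [87, 6, 2, 44, 81, 2, 39, 49, 70, 2]
true_labels = ["A", "N", "N", "A", "A", "N", "A", "N", "A", "N"]

predictions = ["A" if p >= 50 else "N" for p in percent_autism]
accuracy = sum(p == t for p, t in zip(predictions, true_labels)) / len(true_labels)
print(f"majority-vote accuracy: {accuracy:.0%}")
```

Under this reading, the crowd majority agrees with the true rating on 8 of the 10 videos; the misses are videos 4 and 7, where fewer than half of the workers labeled a true "Autism" video as such.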
Comparison of summary demographics of the crowd workers who performed well (≥8/10 videos correctly diagnosed) and poorly (<8/10) in Study 1 (n=27 per group).
| Demographic | Performed well (score ≥8/10) | Performed poorly (score <8/10) | P value |
| Age, mean (SD) | 34.7 (6.5) | 38.1 (10.8) | .17 |
| With autism, n (%) | 2 (7.4) | 1 (3.7) | .56 |
| Is a parent, n (%) | 12 (44.4) | 13 (48.1) | .79 |
| Female, n (%) | 12 (44.4) | 8 (29.6) | .27 |
| Number of known affected children, mean (SD) | 0.5 (0.7) | 1.0 (1.0) | .048 |
| Number of affected families, mean (SD) | 0.2 (0.4) | 0.5 (0.9) | .09 |
| Number of affected friends, mean (SD) | 1.1 (1.3) | 1.5 (1.2) | .23 |
| Number of total known affected people, mean (SD) | 2.3 (3.9) | 2.3 (2.6) | .97 |
Ratings labeled as “Autism” across all 22 paid crowd workers in the task with a different set of 10 videos.
| Video number | Ratings labeled as “Autism”, % | True rating |
| 11 | 100 | Autism |
| 12 | 0 | Not autism |
| 13 | 43 | Autism |
| 14 | 0 | Not autism |
| 15 | 90 | Autism |
| 16 | 76 | Autism |
| 17 | 90 | Autism |
| 18 | 10 | Not autism |
| 19 | 24 | Not autism |
| 20 | 0 | Not autism |
Figure 4A histogram of the AMT worker deviation from the gold standard ratings for all questions and all videos. The maximum possible deviation is 3.0. Most video ratings have a deviation below 1.0, which is an acceptable error. However, several worker responses deviated greatly from the gold standard. AMT: Amazon Mechanical Turk.
Questions where the average worker answer was >1.5/3.0 answer choices away from the gold standard rating for multiple videos.
| Question number | Question text | Number of deviating videos (of 10) |
| 13 | Does the child get upset, angry or irritated by particular sounds, tastes, smells, sights or textures? | 4 |
| 16 | Does the child stare at objects for long periods of time or focus on particular sounds, smells or textures, or like to sniff things? | 5 |
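The flagging rule behind this table can be sketched as follows: for each question and each video, compute the mean absolute deviation of worker answers from the gold-standard answer on the 0-3 multiple-choice scale, and flag the videos whose mean deviation exceeds 1.5. The answer data below are made up purely for illustration; only the 1.5 threshold and the 0-3 scale come from the text.

```python
# Hypothetical worker answers for one question: maps video id ->
# (gold-standard answer, list of worker answers), all on a 0-3
# multiple-choice scale. Values are illustrative, not study data.
answers = {
    1: (0, [0, 1, 3, 3]),
    2: (3, [0, 1, 1, 2]),
    3: (2, [2, 2, 1, 3]),
}

THRESHOLD = 1.5  # mean deviation above this flags the video (from the text)

def deviating_videos(answers):
    """Return video ids whose mean |worker - gold| deviation exceeds 1.5."""
    flagged = []
    for video, (gold, worker) in answers.items():
        mean_dev = sum(abs(w - gold) for w in worker) / len(worker)
        if mean_dev > THRESHOLD:
            flagged.append(video)
    return flagged

print(deviating_videos(answers))
```

Counting how many of a question's 10 videos end up flagged this way yields the "Number of deviating videos" column; questions 13 and 16 crossed the threshold on 4 and 5 videos, respectively.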