Citizen Science as a New Tool in Dog Cognition Research
Laughlin Stewart, Evan L. MacLean, David Ivy, Vanessa Woods, Eliot Cohen, Kerri Rodriguez, Matthew McIntyre, Sayan Mukherjee, Josep Call, Juliane Kaminski, Ádám Miklósi, Richard W. Wrangham, Brian Hare
Abstract
Family dogs and dog owners offer a potentially powerful way to conduct citizen science to answer questions about animal behavior that are difficult to answer with more conventional approaches. Here we evaluate the quality of the first data on dog cognition collected by citizen scientists using the Dognition.com website. We conducted analyses to determine whether data generated by over 500 citizen scientists replicate internally and against previously published findings. Half of the participants participated for free, while the other half paid for access. The website provided each participant with a temperament questionnaire and instructions on how to conduct a series of ten cognitive tests. Participation required internet access, a dog, and some common household items. Participants could record their responses on any PC, tablet, or smartphone from anywhere in the world, and data were retained on servers. Results from citizen scientists and their dogs replicated a number of previously described phenomena from conventional lab-based research. There was little evidence that citizen scientists manipulated their results. To illustrate the potential uses of relatively large samples of citizen science data, we then used factor analysis to examine individual differences across the cognitive tasks. The data were best explained by multiple factors, supporting the hypothesis that nonhumans, including dogs, can evolve multiple cognitive domains that vary independently. This analysis suggests that in the future, citizen scientists will generate useful datasets that test hypotheses and answer questions as a complement to conventional laboratory techniques used to study dog psychology.
Year: 2015 PMID: 26376443 PMCID: PMC4574109 DOI: 10.1371/journal.pone.0135176
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Age and sex of subjects in Beta, Live and combined datasets.
| Dataset | Age | Females | Males | Total |
|---|---|---|---|---|
| Beta | 0 to 1 years | 17 | 24 | 41 |
| Beta | 2 to 6 years | 76 | 63 | 139 |
| Beta | 7 or more years | 33 | 32 | 65 |
| Beta | All ages | 126 | 119 | 245 |
| Live | 0 to 1 years | 22 | 23 | 45 |
| Live | 2 to 6 years | 82 | 90 | 172 |
| Live | 7 or more years | 24 | 36 | 60 |
| Live | All ages | 128 | 149 | 277 |
| Combined | 0 to 1 years | 39 | 47 | 86 |
| Combined | 2 to 6 years | 158 | 153 | 311 |
| Combined | 7 or more years | 57 | 68 | 125 |
| Combined | All ages | 254 | 268 | 522 |
The number of trials, order of presentation, task names, and general methods for the tasks presented to all participants.
For more details, see the supplemental methods.
| # of Trials | Order | Task | General Method |
|---|---|---|---|
| 1 | 1 | Yawn control | Participant says “yellow” every 5s for 30s. |
| 1 | 2 | Yawn Exp. | Participant yawns every 5s for 30s. |
| 3 | 3 | Eye Contact Warm Up | Participant holds food to their face for 10s, then gives the treat to the dog. |
| 3 | 4 | Eye Contact | Participant holds food to their face and records when and whether the dog breaks eye contact during a 90s countdown. |
| 6 | 5 | Treat Warm Up | Participant introduces the two locations to find food and allows dog to retrieve. |
| 6 | 6 | Arm Pointing | Participant extends their arm and index finger toward one of two food pieces placed on the floor and allows the dog to retrieve. |
| 6 | 7 | Foot Pointing | Participant extends their leg and foot toward one of two food pieces placed on floor and allows dog to retrieve. |
| 2 | 8,11 | Watching | Participant faces the dog, verbally forbids it from taking food, and records when and whether it retrieves the food during a 90s countdown. |
| 2 | 9 | Back Turned | Same as Watching except the participant turns their back after placing the food. |
| 2 | 10 | Eyes Covered | Same as Watching except the participant covers their eyes with their hands after placing the food. |
| 4 | 12, 13 | 1 & 2-Cup Warm Up | a) Participant shows the dog food being hidden under one cup. b) Participant places two cups and shows the dog food being hidden under one of the two cups. |
| 6 | 14 | Memory vs. Pointing | Same as 2-cup warm up plus participant points to empty cup. |
| 4 | 15 | Memory vs. Smell | Same as 2-cup warm up, plus the participant occludes the dog's view as the treat is moved to the empty cup. |
| 4 | 16 | Delayed Memory | Same as 2-cup warm up with increasing delays each trial (60, 90, 120 and 180 seconds) until dog is released to search. |
| 6 | 17 | Inferential Reasoning Warm Up | Participant hides food in one of two cups. Picks up cups showing dog which cup contains food. Places cups back in position. Picks up empty cup, puts it back and releases dog. |
| 4 | 18 | Inferential Reasoning Task | Same as Warm Up except dog is not shown in which cup food is hidden. |
| 4 | 19 | Physical Reasoning Warm Up | Participant props a piece of paper up with a food treat and allows dog to retrieve. |
| 4 | 20 | Physical Causality Task | Same as Warm Up, plus a second piece of paper placed flat on the ground. |
Fig 1. Experimental setup for all experiments requiring dogs to choose between two hiding locations.
Participants were instructed to place treats or cups (represented by cylinders) 1.2 meters apart while standing 1.8 meters from their dog. Three Post-it notes (represented by grey squares) were placed on the ground to aid live coding of the dog's choice.
Means, standard errors, degrees of freedom, test statistics, and p-values from the quantitative comparisons between laboratory data and citizen science data collected through Dognition.com.
Welch's independent-samples t-tests were used for all comparisons except Memory vs. Smell and Memory vs. Pointing, for which Wilcoxon rank-sum tests (continuity corrected) were used, and Delayed Memory, for which a proportions test was used.
| Test | Comparison publication | Lab mean | Lab SE | Citizen scientist mean (N = 522) | Citizen scientist SE | df | Statistic | p-value |
|---|---|---|---|---|---|---|---|---|
| Arm Pointing | Gácsi et al., 2009 (N = 180) | 67.97% | 1.18% | 66.28% | 0.90% | 403.39 | t = 1.14 | 0.25 |
| Foot Pointing | Lakatos et al., 2009 (N = 15) | 65.83% | 5.51% | 64.55% | 0.95% | 14.84 | t = −0.23 | 0.82 |
| Other's visual cues: Watching | Call et al., 2003 (N = 14) | 34.92 s | 8.81 s | 46.99 s | 1.55 s | 10.63 | t = 1.34 | 0.21 |
| Other's visual cues: Eyes Closed | Call et al., 2003 (N = 14) | 25.50 s | 8.43 s | 46.06 s | 1.66 s | 11.87 | t = 2.39 | 0.03 |
| Other's visual cues: Back Turned | Call et al., 2003 (N = 14) | 24.63 s | 8.23 s | 47.54 s | 1.63 s | 11.88 | t = 2.73 | 0.02 |
| Memory vs. Pointing | Szetei et al., 2003 (N = 10) | 52% | 6.11% | 68% | 1.40% | — | W = 3573.5 (Wilcoxon rank sum) | 0.04 |
| Memory vs. Smell | Szetei et al., 2003 (N = 10) | 12% | 4.66% | 26.20% | 1.25% | — | W = 3295.5 (Wilcoxon rank sum) | 0.13 |
| Delayed Memory | MacLean et al., unpub. (N = 49) | 71% | 6.52% | 81% | 1.70% | 1 | χ² = 2.36 | 0.12 |
| Physical Reasoning | Bräuer et al., 2006 (N = 24) | 66.67% | 4.60% | 62.60% | 1.14% | 25.94 | t = −0.86 | 0.40 |
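Comparisons of this kind can be checked from the published summary statistics alone. The sketch below (Python with scipy; recovering standard deviations from standard errors via SE = SD/√N is an assumption about how the reported SEs were derived) recomputes Welch's t-test for the Arm Pointing row.

```python
import math
from scipy import stats

# Reported summary statistics for the Arm Pointing comparison:
# lab sample (Gácsi et al., 2009, N = 180) vs. citizen scientists (N = 522).
lab_mean, lab_se, lab_n = 67.97, 1.18, 180
cs_mean, cs_se, cs_n = 66.28, 0.90, 522

# Recover standard deviations from standard errors (SE = SD / sqrt(N)).
lab_sd = lab_se * math.sqrt(lab_n)
cs_sd = cs_se * math.sqrt(cs_n)

# Welch's t-test (unequal variances) computed from summary statistics.
t, p = stats.ttest_ind_from_stats(lab_mean, lab_sd, lab_n,
                                  cs_mean, cs_sd, cs_n,
                                  equal_var=False)
print(t, p)  # close to the reported t = 1.14, p = 0.25
```

The recomputed statistic agrees with the tabled value to rounding error, which is the kind of internal consistency check this record supports.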
Descriptive and test statistics comparing experimental (E) and control (C) conditions in each task of the Beta (N = 245) and Live (N = 277) datasets.
All tests were one-sample t-tests, except the visual-cues tasks (repeated-measures ANOVA) and yawning (McNemar's test).
| Exercise | Trials | Beta mean | Beta SE | Beta stat | Beta df | Beta p | Live mean | Live SE | Live stat | Live df | Live p |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Yawning | — | — | — | χ² = 5.513 | 1 | 0.018 | — | — | χ² = 0 | 1 | 1 |
| Yawning: Control | 1 | C: 44/245 | — | — | — | — | C: 65/277 | — | — | — | — |
| Yawning: Experimental | 1 | E: 66/245 | — | — | — | — | E: 64/277 | — | — | — | — |
| Eye Contact | 3 | 40.46 s | 1.5 | NA | NA | — | 46.38 s | 1.41 | NA | NA | — |
| Arm Pointing | 6 | 3.96 | 0.08 | 11.851 | 244 | — | 3.99 | 0.07 | 13.557 | 276 | — |
| Foot Pointing | 6 | 3.86 | 0.09 | 9.519 | 244 | — | 3.88 | 0.07 | 12.192 | 276 | — |
| Other's Visual Cues | — | — | — | F = 0.733 | 2,488 | 0.481 | — | — | F = 6.115 | 2,552 | — |
| Watching condition | 2 | 42.57 s | 2.24 | — | — | — | 50.92 s | 2.12 | — | — | — |
| Back Turned condition | 2 | 42.79 s | 2.43 | — | — | — | 51.75 s | 2.18 | — | — | — |
| Eyes Covered condition | 2 | 44.18 s | 2.47 | — | — | — | 47.74 s | 2.23 | — | — | — |
| Memory vs. Pointing | 6 | 4.42 | 0.12 | 12.342 | 244 | — | 3.8 | 0.12 | 6.7 | 276 | — |
| Memory vs. Smell | 4 | 1.05 | 0.07 | 12.884 | 244 | — | 1.05 | 0.07 | 13.819 | 276 | — |
| Delayed Memory | 4 | 3 | 0.07 | 13.591 | 244 | — | 3.14 | 0.06 | 18.567 | 276 | — |
| Inference Reasoning | 4 | 1.96 | 0.08 | 0.597 | 244 | 0.551 | 1.87 | 0.06 | 2.084 | 276 | — |
| Physical Reasoning | 4 | 2.51 | 0.07 | 7.394 | 244 | — | 2.5 | 0.06 | 8.126 | 276 | — |

— = not available in this record.
Fig 2. Mean performance (± SEM) on each task in which dogs faced a two-way choice, where chance is 50%.
In the memory exercises (Mem vs. Point and Mem vs. Smell), success was scored when a dog made a choice consistent with relying on its memory (i.e., not on a gesture or olfactory cue). Light grey bars represent the Beta dataset and dark grey bars the Live dataset. * p < 0.05, binomial probability.
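The binomial comparison against 50% chance referenced in the caption can be illustrated as follows. This Python sketch is hypothetical: the trial counts and the pooling of trials across dogs are illustrative choices, not the paper's exact analysis.

```python
from scipy.stats import binomtest

# A single dog's six two-way trials are too few to reach significance:
# even 5/6 correct gives a two-sided p of 14/64 against chance (p = 0.5).
single = binomtest(5, n=6, p=0.5)
print(single.pvalue)  # 0.21875

# Pooling trials across many dogs gives the test its power. Counts are
# illustrative: roughly 66% correct over 522 dogs x 6 pointing trials.
pooled = binomtest(2076, n=522 * 6, p=0.5)
print(pooled.pvalue < 0.05)  # True
```

This is why group-level performance can differ reliably from chance even when no individual dog's trial count is large enough to show it.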
Fig 3. Factor loadings from exploratory factor analyses of the (A) Beta and (B) Live datasets.
Both datasets were best described by four-factor models consisting of factors related to gesture comprehension, memory, and cunning, with a fourth factor that varied between the Beta and Live datasets. The order of factors varied between datasets, but the following factors resemble one another across analyses (Live PA1 & Beta PA2; Live PA2 & Beta PA3; Live PA3 & Beta PA4). The remaining factors (Live PA4 & Beta PA1) had no clear analog in the other dataset.
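The kind of exploratory factor analysis described above (the PA labels suggest principal axis factoring) can be sketched with a simpler principal-component extraction of loadings from a correlation matrix. Everything below is synthetic and illustrative: the two-factor structure, loadings, and sample size are invented to show the mechanics, not taken from the Dognition data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic "dogs"

# Two invented latent factors driving performance on six synthetic tasks.
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)

# Three tasks load on factor 1, three on factor 2, each with added noise.
tasks = np.column_stack(
    [0.8 * f1 + 0.6 * rng.normal(size=n) for _ in range(3)]
    + [0.8 * f2 + 0.6 * rng.normal(size=n) for _ in range(3)]
)

# Eigendecomposition of the correlation matrix; factors are retained
# when their eigenvalue exceeds 1 (the Kaiser criterion).
corr = np.corrcoef(tasks, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
retained = eigvals > 1

# Loadings: eigenvectors scaled by the square root of their eigenvalues.
loadings = eigvecs[:, retained] * np.sqrt(eigvals[retained])
print(retained.sum())  # the two planted factors are recovered
```

With real task scores in place of the synthetic matrix, inspecting which tasks load on which retained factors is what supports statements like "a factor related to gesture comprehension" in the caption above.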