Anna Gergely, Eszter Petró, Katalin Oláh, József Topál.
Abstract
We tested whether dogs and 14–16-month-old infants are able to integrate intersensory information when presented with conspecific and heterospecific faces and vocalisations. The looking behaviour of dogs and infants was recorded with a non-invasive eye-tracking technique while they were concurrently presented with a dog portrait and a female human portrait, accompanied by acoustic stimuli of female human speech and a dog's bark. Dogs showed evidence of both con- and heterospecific intermodal matching, whereas infants' looking preferences indicated effective auditory–visual matching only when they were presented with the audio and visual stimuli of the non-conspecifics. The results of the present study provide further evidence that domestic dogs and human infants have similar socio-cognitive skills and highlight the importance of comparative examinations of intermodal perception.
Keywords: cross-modal matching; dog; infant; intermodal cognition
Year: 2019 PMID: 30621092 PMCID: PMC6357027 DOI: 10.3390/ani9010017
Source DB: PubMed Journal: Animals (Basel) ISSN: 2076-2615 Impact factor: 2.752
Figure 1. Experimental stimuli. S1, S2, S3 = silence; V1, V2 = vocalisation (i.e., dog bark/human speech); the grey line shows the separation of the two areas of interest (AOI; dog = AOI-D, human = AOI-H).
Figure 2. Dogs' and human infants' visual preferences as measured by first look/fixation at dog or human images during dog bark or human speech. * p < 0.05.
Figure 3. Results of looking duration in dogs (A) and infants (B). * p < 0.05. $ p = 0.09. CI = confidence interval.