Benjamin J. Pitcher, Elodie F. Briefer, Luigi Baciadonna, Alan G. McElligott.
Abstract
When identifying other individuals, animals may match current cues with stored information about that individual from the same sensory modality. Animals may also be able to combine current information with previously acquired information from other sensory modalities, indicating that they possess complex cognitive templates of individuals that are independent of modality. We investigated whether goats (Capra hircus) possess cross-modal representations (auditory-visual) of conspecifics. We presented subjects with recorded conspecific calls broadcast equidistant between two individuals, one of which was the caller. We found that, when presented with a stablemate and another herd member, goats looked towards the caller sooner and for longer than the non-caller, regardless of caller identity. By contrast, when choosing between two herd members, other than their stablemate, goats did not show a preference to look towards the caller. Goats show cross-modal recognition of close social partners, but not of less familiar herd members. Goats may employ inferential reasoning when identifying conspecifics, potentially facilitating individual identification based on incomplete information. Understanding the prevalence of cross-modal recognition and the degree to which different sensory modalities are integrated provides insight into how animals learn about other individuals, and the evolution of animal communication.
Keywords: individual recognition; mammals; multimodal communication; ungulates; visual recognition; vocal communication
Year: 2017 PMID: 28386412 PMCID: PMC5367292 DOI: 10.1098/rsos.160346
Source DB: PubMed Journal: R Soc Open Sci ISSN: 2054-5703 Impact factor: 2.963
Figure 1. (a) Presentation arena schematic. The presentation arena was separated from the field by a solid metal fence (solid line). Within the arena, enclosures consisted of portable metal fencing with bars approximately 10 cm apart (dotted lines). The subject (1) was placed in the central enclosure after two stimulus goats (2) had been placed in the triangular enclosures. A camera and speaker (3) were located equidistant between the stimulus goats, facing the subject. The arena was located against a timber fence (dashed line) with vegetation behind (4) to prevent other animals from moving behind the stimulus goats and to minimize visual distractions to the subject. (b) A photo of the presentation arena.
Figure 2. Latency to look. Latency to look at the congruent (C) or incongruent (I) stimulus goat during the first series of playbacks (stablemate versus herd member; in white) and during the second series of playbacks (herd member versus herd member; in grey) (box plot: the horizontal line shows the median, the box extends from the lower to the upper quartile and the whiskers to 1.5 times the interquartile range above the upper quartile or below the lower quartile; the black circles indicate the means; n = 10 goats; linear mixed effects models: **p < 0.01, n.s., non-significant).
Figure 3. Duration of looks. Duration of time spent looking at the congruent (C) or incongruent (I) stimulus goat during the first series of playbacks (stablemate versus herd member; in white) and during the second series of playbacks (herd member versus herd member; in grey) (box plot: the horizontal line shows the median, the box extends from the lower to the upper quartile and the whiskers to 1.5 times the interquartile range above the upper quartile or below the lower quartile; empty circles indicate outliers; the black circles indicate the means; n = 10 goats; linear mixed effects models: ***p < 0.001, n.s., non-significant).