| Literature DB >> 28135575 |
Andrea Ravignani, Ruth Sonnweber.
Abstract
Evolution has shaped animal brains to detect sensory regularities in environmental stimuli. In addition, many species map one-dimensional quantities across sensory modalities, such as conspecific faces to voices, or high-pitched sounds to bright light. If basic patterns like repetitions and identities are frequently perceived in different sensory modalities, it could be advantageous to detect cross-modal isomorphisms, i.e. develop modality-independent representations of structural features, exploitable in visual, tactile, and auditory processing. While cross-modal mappings are common in the animal kingdom, the ability to map similar (isomorphic) structures across domains has been demonstrated in humans but no other animals. We tested cross-modal isomorphisms in two chimpanzees (Pan troglodytes). Individuals were previously trained to choose structurally 'symmetric' image sequences (two identical geometrical shapes separated by a different shape) presented beside 'edge' sequences (two identical shapes preceded or followed by a different one). Here, with no additional training, the choice between symmetric and edge visual sequences was preceded by playback of three concatenated sounds, which could be symmetric (mimicking the symmetric structure of reinforced images) or edge. The chimpanzees spontaneously detected a visual-auditory isomorphism. Response latencies in choosing symmetric sequences were shorter when presented with (structurally isomorphic) symmetric, rather than edge, sound triplets: The auditory stimuli interfered, based on their structural properties, with processing of the learnt visual rule. Crucially, the animals had neither been exposed to the acoustic sequences before the experiment, nor were they trained to associate sounds to images. Our result provides the first evidence of structure processing across modalities in a non-human species. 
It suggests that basic cross-modal abstraction capacities transcend linguistic abilities and might involve evolutionarily ancient neural mechanisms.
Keywords: Analogy; Audio-visual; Cross-modal; Matching; Pattern perception; Touchscreen
Year: 2017 PMID: 28135575 PMCID: PMC5348109 DOI: 10.1016/j.cognition.2017.01.005
Source DB: PubMed Journal: Cognition ISSN: 0010-0277
Fig. 1. Types of cross-modal correspondences. Cross-modal mappings can be discrete (A), continuous (B), or isomorphic, involving whole structures mapped across domains (C), crucially with no reliance on previous specific associations between constituent elements (the diagonal symbol is successfully associated with both the high and the low note).
Fig. 2. (A, left) Schematic representation of one trial. Trials always started with the presentation of a red circle: once the chimpanzee touched it, the sound triplet was played, the two visual sequences were shown, and the chimpanzee's latency to respond was recorded. Boxplots of FK's (B) and KL's (C) latencies in providing the correct response. Median latencies across trials were significantly shorter (see main text and Table 1) in the isomorphic than in the non-isomorphic condition, namely 5.68 vs. 8.25 s (ape FK) and 8.52 vs. 14.27 s (KL).
Median latency for each combination of presented audio stimulus (rows) and the chimpanzees' choice of visual stimulus (columns). In parentheses, Spearman's rank correlation rho between latency and success in the pre-trial, with its significance level (*p < 0.05; **p < 0.01).
| | KL: Visual symmetric | KL: Visual edge | FK: Visual symmetric | FK: Visual edge |
|---|---|---|---|---|
| Audio symmetric | 8.52 (.64*) | 17.59 (.72*) | 5.68 (.61**) | 7.79 (.15) |
| Audio edge | 14.27 (.61) | 13.33 (.81**) | 8.25 (.67**) | 8.01 (.29) |
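The statistics reported in Table 1 (per-condition median latencies and Spearman's rank correlations) can be reproduced from raw trial data with a short script. The sketch below is illustrative only, not the authors' analysis code: all latency and pre-trial values are hypothetical placeholders, and `spearman_rho` is a from-scratch helper (valid only for untied data) standing in for a standard statistics routine.

```python
# Illustrative sketch of the Table 1 statistics; all data are hypothetical.
from statistics import median

def spearman_rho(x, y):
    """Spearman's rank correlation (assumes no tied values)."""
    n = len(x)
    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0.0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-trial latencies (s) for one subject choosing the
# symmetric visual sequence, split by the preceding audio condition.
isomorphic = [5.1, 5.7, 6.0, 5.4, 6.3]      # symmetric sound triplet
non_isomorphic = [8.0, 8.4, 7.9, 8.6, 8.3]  # edge sound triplet

print(median(isomorphic), median(non_isomorphic))  # -> 5.7 8.3

# Hypothetical continuous pre-trial success scores paired with the
# isomorphic-condition latencies (a perfectly monotone example).
pre_trial = [0.9, 0.7, 0.6, 0.8, 0.5]
print(spearman_rho(isomorphic, pre_trial))  # -> -1.0
```

The median comparison mirrors the isomorphic vs. non-isomorphic contrast in the Visual-symmetric columns; in practice one would use a library routine (e.g. a SciPy Spearman implementation) that also handles ties and returns a p-value.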