| Literature DB >> 30861023 |
Gabriella Eördegh1, Attila Őze2, Balázs Bodosi2, András Puszta2, Ákos Pertich2, Anett Rosu3, György Godó4, Attila Nagy2.
Abstract
Associative learning is a basic cognitive function by which discrete and often different percepts are linked together. The Rutgers Acquired Equivalence Test investigates a specific kind of associative learning, visually guided equivalence learning. The test consists of an acquisition (pair learning) phase and a test (rule transfer) phase, which are associated primarily with the function of the basal ganglia and the hippocampi, respectively. Earlier studies have described that both brain structures fundamentally involved in visual associative learning, the basal ganglia and the hippocampi, receive not only visual but also multisensory information. However, no study has investigated whether multisensory guided equivalence learning has an advantage over unimodal learning, so there were no data on whether equivalence learning is modality-dependent or modality-independent. In the present study, we therefore introduced auditory- and multisensory (audiovisual)-guided equivalence learning paradigms and investigated the performance of 151 healthy volunteers in the visual as well as the auditory and multisensory paradigms. Our results indicate that visual, auditory and multisensory guided associative learning are similarly effective in healthy humans, which suggests that the acquisition phase is fairly independent of the modality of the stimuli. On the other hand, in the test phase, where participants were presented with the associations learned earlier as well as associations that had not yet been seen or heard but were predictable, the multisensory stimuli elicited the best performance. The test phase, especially its generalization part, seems to be the harder cognitive task, in which multisensory information processing could improve the performance of the participants.
Year: 2019 PMID: 30861023 PMCID: PMC6413907 DOI: 10.1371/journal.pone.0213094
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1The schematic drawing of the applied visual, auditory and multisensory guided associative learning paradigms.
See details in text.
Fig 2Performances in the sensory guided equivalence learning paradigms.
(A) denotes the number of necessary trials in the acquisition phase of the paradigm. (B) shows the error ratios in the acquisition phase. (C) and (D) denote the error ratios in the retrieval and generalization parts of the test phase, respectively. In each panel, the first column (light grey) shows the results in the visual paradigm, the second column (white) the results in the auditory paradigm, and the third column (grey-white striped) the results in the multisensory (audiovisual) paradigm. Mean ± SEM values are presented in each column. Black stars denote significant differences: the single star in part C marks a significant difference (p<0.05), and the two stars in part D mark highly significant differences (p<0.001).
Fig 3Response latencies in the sensory guided equivalence learning paradigms.
(A) shows the response latencies in the acquisition phase of the paradigm, while (B) and (C) denote the response latencies in the retrieval and generalization parts of the test phase, respectively. The ordinates show the latencies in milliseconds (ms). Other conventions are the same as in Fig 2.