| Literature DB >> 25174663 |
Nora Hennies, Penelope A. Lewis, Simon J. Durrant, James N. Cousins, Matthew A. Lambon Ralph.
Abstract
Conceptual knowledge about objects comprises a diverse set of multi-modal and generalisable information, which allows us to bring meaning to the stimuli in our environment. The formation of conceptual representations poses two key computational challenges: integrating information from different sensory modalities and abstracting statistical regularities across exemplars. Although these processes are thought to be facilitated by offline memory consolidation, investigations into how cross-modal concepts evolve offline, over time, rather than with continuous category exposure have been lacking. Here, we aimed to mimic the formation of new conceptual representations by reducing this process to its two key computational challenges and exploring its evolution over an offline retention period. Participants learned to distinguish between members of two abstract categories based on a simple one-dimensional visual rule. Underlying the task was a more complex hidden indicator of category structure, which required the integration of information across two sensory modalities. In two experiments we investigated the impact of time- and sleep-dependent consolidation on category learning. Our results show that offline memory consolidation facilitated cross-modal category learning. Surprisingly, consolidation across wake, but not across sleep, showed this beneficial effect. By demonstrating the importance of offline consolidation, the current study provides further insight into the processes that underlie the formation of conceptual representations.
Keywords: Abstraction; Category learning; Cross-modal object representations; Memory consolidation; Sleep
Year: 2014 PMID: 25174663 PMCID: PMC4410790 DOI: 10.1016/j.neuropsychologia.2014.08.021
Source DB: PubMed Journal: Neuropsychologia ISSN: 0028-3932 Impact factor: 3.139
Parameters of the bivariate normal distributions used for stimulus generation.
| Parameter | Category 1 | Category 2 |
|---|---|---|
| μ (dimension 1) | −0.8 | 0.8 |
| μ (dimension 2) | 0.8 | −0.8 |
| σ² (dimension 1) | 2 | 2 |
| σ² (dimension 2) | 2 | 2 |
| Covariance | 1.92 | 1.92 |
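As a minimal sketch of how stimuli could be drawn from the distributions in the table above (the mapping of the two dimensions to space and pitch is an assumption for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared covariance matrix from the table: variance 2 on each dimension,
# covariance 1.92 (correlation 0.96), which elongates both category
# distributions along the diagonal (an information-integration structure).
COV = np.array([[2.0, 1.92],
                [1.92, 2.0]])

def sample_category(mean, n=100):
    """Draw n two-dimensional (e.g. space, pitch) stimuli for one category."""
    return rng.multivariate_normal(mean, COV, size=n)

cat1 = sample_category([-0.8, 0.8])   # category 1 centroid from the table
cat2 = sample_category([0.8, -0.8])   # category 2 centroid from the table
```

With these parameters the two categories are separated along one diagonal while each distribution is stretched along the other, so no single dimension suffices and the optimal boundary requires integrating both dimensions.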
Fig. 1 Visualisation of the category structure (A) and the CMCL-task trial structure (B). (A) Asterisks denote stimuli from category 1; stimuli from category 2 are indicated by open circles. The abscissa corresponds to the location along the horizontal screen axis, the spatial dimension of a stimulus. The ordinate corresponds to the pitch (frequency in log2(Hz)), the auditory dimension of a stimulus. In this two-dimensional space the category structure is an information-integration structure (Ashby & Gott, 1988). Each two-dimensional stimulus is paired with an image of an orthographic character. Stimuli of category 1 are paired with characters that have an enclosed space (coloured grey for visualisation purposes); stimuli of category 2 are paired with open-shaped characters. Category membership can be detected either by a simple rule regarding the image (‘open’, ‘closed’) or by integrating information on location and tone. (B) Every trial started with the simultaneous presentation of only the auditory and spatial dimensions of a stimulus for 500 ms, before the orthographic character appeared. The three-dimensional stimulus was presented for 1200 ms.
Fig. 2 Schematic illustration of the experimental procedures of Experiments A and B. Both experiments consisted of two experimental sessions, separated by a consolidation interval. Consolidation interval characteristics differed between conditions as indicated above the arrows. Each session comprised several blocks of the cross-modal category learning (CMCL) task, with the final block of each session serving as the test block of interest, plus the control task and two additional memory tasks (three in Experiment B).
Fig. 3 Reaction time results for Experiment A (24 h and 15 min groups) and Experiment B (12 h day and 12 h night groups). Average response times are shown for all blocks of the CMCL-task and the control (C) in each experimental session. Standard error bars are included. In each session the final block of the CMCL-task was considered the test block (grey box), and the response time difference between this block and the control (white box) served as a measure of category learning. (A) Session 2: the 24 h group performed significantly faster in the final CMCL-task block than in the corresponding control, indicating the use of integrated auditory and spatial information. This difference was not significant for the 15 min group. (B) Session 2: the 12 h day group showed a significant reaction time decrease in the CMCL-task block compared to the corresponding control. This difference was not significant for the 12 h night group. The data points plotted in light grey correspond to the response times of Experiment A. *p<0.05, **p<0.01.
Results of the explicit memory tasks of Experiments A and B.
| Task | 15 min Group, S1 | 15 min Group, S2 | 24 h Group, S1 | 24 h Group, S2 | 12 h day Group, S1 | 12 h day Group, S2 | 12 h night Group, S1 | 12 h night Group, S2 |
|---|---|---|---|---|---|---|---|---|
| Recognition task | 1.05±0.4 | 1.16±0.6 | 0.71±0.5 | 1.01±0.4 | 1.11±0.5 | 1.16±0.5 | 0.92±0.5 | 0.90±0.5 |
| Association task | – | – | – | – | 0.11±0.5 | 0.35±0.4 | 0.23±0.4 | 0.34±0.4 |
| Categorisation task | 26.9±5.1 | 27.2±4.6 | 25.9±4.1 | 29.3±6.2 | 27.4±6.6 | 29.5±5.7 | 25.4±6.0 | 26.7±4.6 |
Data for the recognition and association tasks are d′±SD. Data for the categorisation task are mean correct trials±SD, out of a total of 48 per session.
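The recognition and association scores above are reported as d′, the standard signal-detection sensitivity index. A minimal sketch of its computation, assuming hit and false-alarm rates have already been corrected away from 0 and 1 (the paper does not state which correction, if any, was applied):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate).

    Rates must lie strictly between 0 and 1; extreme rates are
    typically nudged inward (e.g. a log-linear correction) first.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, `d_prime(0.8, 0.3)` gives approximately 1.37; a d′ near 0, as in the association task at Session 1, indicates near-chance discrimination.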