Kristina Karlsson, Sverker Sikström, Johan Willander.
Abstract
The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.
Year: 2013 PMID: 24204561 PMCID: PMC3810467 DOI: 10.1371/journal.pone.0073378
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
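The abstract describes quantifying retrieved event narratives as semantic representations, but this record does not include the method details. A minimal sketch of one common approach of that kind (an LSA-style term-document matrix reduced with SVD, with a condition represented by the centroid of its narratives' vectors) could look like the following; the narratives, condition names, and dimensionality here are invented for illustration, not taken from the study:

```python
import numpy as np

# Toy narratives from two hypothetical cue conditions (invented examples,
# not data from the study).
narratives = {
    "visual":    ["bright summer beach", "red house garden", "bright garden"],
    "olfactory": ["grandma kitchen bread", "bread smell kitchen", "summer kitchen"],
}

# Term-by-document count matrix over all narratives.
docs = [d for texts in narratives.values() for d in texts]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# LSA-style reduction: SVD, keeping a few latent dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
doc_vecs = (np.diag(s) @ Vt).T[:, :2]   # one 2-D vector per narrative

# A condition's semantic representation: centroid of its narratives.
n_vis = len(narratives["visual"])
vis_centroid = doc_vecs[:n_vis].mean(axis=0)
olf_centroid = doc_vecs[n_vis:].mean(axis=0)
```

Comparing such centroids (or projections of individual narratives onto scales derived from them) is one way conditions could yield measurably different semantic representations.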
P-values showing differences between conditions (columns) on the four semantic scales (rows).
| Semantic scale | All other | Multi modal | Visual | Auditory | Olfactory |
| Multimodal | .921 | – | .936 | .955 | .863 |
| Visual | | .079 | – | | |
| Auditory | | | .261 | – | .070 |
| Olfactory | | | | .062 | – |
P-values ≤.05 are highlighted in boldface. Note. The rows represent data from the four semantic scales (multimodal, visual, auditory, olfactory), respectively contrasted to the three other conditions. See the text for details of how to calculate the semantic scales. Each cell represents the p-value as calculated from a t-test. The first column compares one condition with the three other conditions; the last four columns are pairwise comparisons between conditions. P-values were not corrected for multiple comparisons. Notice that the results are not symmetrical because each row represents a different scale; thus, the olfactory value on the visual scale (row 2, column 6) differs from the visual value on the olfactory scale (row 4, column 4).
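The comparisons in the table (each condition against the pooled others, plus all uncorrected pairwise t-tests) can be sketched as follows; the per-participant scores here are synthetic stand-ins, since the study's data are not reproduced in this record:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-participant scores on one semantic scale, by
# retrieval-cue condition (synthetic numbers, not the study's data).
scores = {
    "multimodal": rng.normal(0.10, 0.30, 30),
    "visual":     rng.normal(0.05, 0.30, 30),
    "auditory":   rng.normal(0.00, 0.30, 30),
    "olfactory":  rng.normal(-0.10, 0.30, 30),
}
names = list(scores)

# "All other" column: each condition vs. the pooled remaining conditions.
all_other = {
    a: stats.ttest_ind(
        scores[a],
        np.concatenate([scores[b] for b in names if b != a]),
    ).pvalue
    for a in names
}

# Pairwise columns: uncorrected two-sample t-tests, as noted above.
pairwise = {
    (a, b): stats.ttest_ind(scores[a], scores[b]).pvalue
    for i, a in enumerate(names) for b in names[i + 1:]
}
```

With four conditions this yields four "all other" tests and six pairwise tests, matching the table's layout.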
Effect sizes (Cohen's d) for pairwise differences.
| Semantic scale | All others | Multi modal | Visual | Auditory | Olfactory |
| Multimodal | −.368 | – | −.491 | −.549 | −.352 |
| Visual | .806 | .457 | – | 1.012 | .869 |
| Auditory | .425 | .205 | .679 | – | .476 |
| Olfactory | .905 | .866 | 1.357 | .497 | – |
Note. See note in Table 1.
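The effect sizes above are Cohen's d values. A minimal implementation of the classic pooled-standard-deviation form of Cohen's d (the record does not specify which variant the authors used, so this is an assumption):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Illustrative values only, not the study's data.
d = cohens_d([2, 4, 6, 8], [1, 3, 5, 7])  # ≈ 0.387
```

Note that d is signed, which is why the multimodal row above can carry negative values: the first sample's mean is below the comparison group's.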
Figure 1. Data-points plotted for all pairwise combinations of the four semantic scales.
Note. The axes represent the semantic scales (multimodal, visual, auditory, or olfactory). Panels A–F represent all possible pairwise combinations of these scales; e.g., panel A has the multimodal scale on the x-axis and the visual semantic scale on the y-axis. The markers represent the participants' aggregated narratives on the semantic scales, shown as red crosses (visual condition), blue crosses (auditory condition), green crosses (olfactory condition), or yellow crosses (multimodal condition). See the methods section for details of how the semantic scales were computed. The four circles represent the mean values for the four conditions.
Figure 2. Means and standard deviations of each condition plotted for the pairwise unimodal combinations of the four semantic scales.
The axes represent the semantic scales (multimodal, visual, auditory, or olfactory). Panels A–C represent all possible pairwise combinations of these scales; e.g., panel A has the auditory scale on the x-axis and the visual semantic scale on the y-axis. Crosses represent mean values for each condition (red for the visual, blue for the auditory, green for the olfactory, and yellow for the multimodal condition, respectively), and the circles represent a 95% confidence interval.