| Literature DB >> 34264550 |
Annika Hultén, Marijn van Vliet, Sasa Kivisaari, Lotta Lammi, Tiina Lindh-Knuutila, Ali Faisal, Riitta Salmelin.
Abstract
In order to describe how humans represent meaning in the brain, one must be able to account for not just concrete words but, critically, also abstract words, which lack a physical referent. Hebbian formalism and optimization are basic principles of brain function, and they provide an appealing approach for modeling word meanings based on word co-occurrences. We provide proof of concept that a statistical model of the semantic space can account for neural representations of both concrete and abstract words, using MEG. Here, we built a statistical model using word embeddings extracted from a text corpus. This statistical model was used to train a machine learning algorithm to successfully decode the MEG signals evoked by written words. In the model, word abstractness emerged from the statistical regularities of the language environment. Representational similarity analysis further showed that this salient property of the model co-varies, at 280-420 ms after visual word presentation, with activity in regions that have been previously linked with processing of abstract words, namely the left-hemisphere frontal, anterior temporal and superior parietal cortex. In light of these results, we propose that the neural encoding of word meanings can arise through statistical regularities, that is, through grounding in language itself.
Entities:
Keywords: MEG; RSA; abstract concepts; concrete words; decoding; machine learning; semantics; word processing
Mesh:
Year: 2021 PMID: 34264550 PMCID: PMC8449102 DOI: 10.1002/hbm.25593
Source DB: PubMed Journal: Hum Brain Mapp ISSN: 1065-9471 Impact factor: 5.038
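The abstract outlines the core decoding setup: word2vec-based semantic vectors serve as targets for a machine learning algorithm trained on MEG responses to written words. The record does not include the authors' code; below is a minimal sketch of one common way to implement such item-level decoding (a linear map from MEG features to embedding dimensions, scored pairwise), with all data shapes, names, and parameters hypothetical.

```python
# Minimal sketch (not the authors' exact pipeline): learn a linear map from
# MEG responses to word2vec vectors, then score held-out items pairwise by
# matching predicted vectors to the true embeddings. Shapes are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
from scipy.spatial.distance import cdist

n_words, n_features, n_dims = 118, 2040, 300     # hypothetical sizes
X = np.random.randn(n_words, n_features)         # MEG responses (words x sensors*time)
Y = np.random.randn(n_words, n_dims)             # word2vec vectors for the same words

correct = total = 0
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    Y_pred = model.predict(X[test])
    # Pairwise ("leave-two-out") scoring: a test pair counts as correct when
    # each predicted vector is closer to its own embedding than to the other's.
    d = cdist(Y_pred, Y[test], metric='cosine')
    for i in range(len(test)):
        for j in range(i + 1, len(test)):
            correct += (d[i, i] + d[j, j]) < (d[i, j] + d[j, i])
            total += 1
print(f"pairwise decoding accuracy: {correct / total:.2f}")
```

The pairwise score mirrors the idea behind the item-level accuracies reported in Figure 2, but the actual feature extraction, regularization, and cross-validation scheme used in the paper may differ.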
FIGURE 1. Visualization of the semantic space created by the Statistical model, obtained by projecting the word2vec vectors onto a two‐dimensional sheet using Uniform Manifold Approximation and Projection for Dimension Reduction (UMAP). An interactive version of the figure is available at https://projector.tensorflow.org/?config=https://users.aalto.fi/~vanvlm1/redness1/projector_config.json
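Figure 1 is produced by projecting the word2vec vectors to two dimensions with UMAP. A minimal sketch of such a projection using the `umap-learn` package is given below; the vectors and word labels are random placeholders rather than the study's embeddings.

```python
# Minimal sketch: project word vectors to 2-D with UMAP for visualization.
# The vectors and vocabulary are placeholders, not the study's materials.
import numpy as np
import umap                      # pip install umap-learn
import matplotlib.pyplot as plt

words = [f"word_{i}" for i in range(40)]         # placeholder vocabulary
vectors = np.random.randn(len(words), 300)       # stand-in for word2vec vectors

coords = umap.UMAP(n_neighbors=10, min_dist=0.1, random_state=0).fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1])
for (x, y), w in zip(coords, words):
    plt.annotate(w, (x, y))
plt.show()
```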
FIGURE 2. (a) Group‐level item‐level decoding accuracy as a function of time. (b) Overall item‐level decoding results. The box plot on the left shows the quartiles and the variation in the group performance (percentage of successfully decoded stimulus‐item pair permutations). On the right are the individual scores of each participant. Accuracy scores above 59% and 60%, respectively, for the time‐resolved and the overall decoding results were deemed statistically significantly above the chance level based on a permutation test. CI, confidence interval
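The 59% and 60% significance thresholds mentioned in the caption come from a permutation test. One standard way to obtain such a threshold, sketched here under the assumption of a label-shuffling null (not necessarily the authors' exact procedure), is to recompute the decoding accuracy many times with the word-to-response pairing broken and take a high percentile of the resulting null distribution.

```python
# Sketch of a label-permutation null distribution for decoding accuracy.
# `pairwise_accuracy(X, Y)` stands for the decoding evaluation sketched above;
# all names and the number of permutations are hypothetical.
import numpy as np

def permutation_threshold(X, Y, pairwise_accuracy, n_perm=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for p in range(n_perm):
        Y_shuffled = Y[rng.permutation(len(Y))]   # break the word-to-response pairing
        null[p] = pairwise_accuracy(X, Y_shuffled)
    return np.quantile(null, 1 - alpha)           # accuracies above this count as significant
```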
FIGURE 3. Comparison of the Statistical model and Abstractness model. (a) Representational similarity analysis (RSA) between the Statistical model and the MEG data (red) on the left and between the Abstractness model and the MEG data (purple) on the right. The overlap between the two RSAs is plotted in yellow. The results show all regions and time windows with statistically significant findings. For visualization purposes, the data was averaged over 60‐ms time windows. (b) Dissimilarity matrices of the Statistical model and the Abstractness model
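Figure 3's representational similarity analysis amounts to correlating the dissimilarity structure of a model (here the Statistical or Abstractness model) with that of the MEG data in a given region and time window. A minimal RSA sketch follows; the distance metrics, data, and dimensions are placeholders and not necessarily those used in the paper.

```python
# Minimal RSA sketch: build model and MEG dissimilarity matrices and correlate
# their upper triangles. Metrics and data here are placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_words = 60
model_vectors = np.random.randn(n_words, 300)      # e.g., word2vec vectors
meg_patterns = np.random.randn(n_words, 500)       # e.g., source activity in one region/time window

model_rdm = pdist(model_vectors, metric='cosine')      # condensed dissimilarity matrix (upper triangle)
meg_rdm = pdist(meg_patterns, metric='correlation')

rho, pval = spearmanr(model_rdm, meg_rdm)              # RSA score for this region/time window
print(f"RSA (Spearman rho) = {rho:.3f}, p = {pval:.3g}")
```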