
Composition in distributional models of semantics.

Jeff Mitchell, Mirella Lapata.

Abstract

Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in sentential priming) suggesting that semantic similarity is more complex than simply a relation between isolated words. This article proposes a framework for representing the meaning of word combinations in vector space. Central to our approach is vector composition, which we operationalize in terms of additive and multiplicative functions. Under this framework, we introduce a wide range of composition models that we evaluate empirically on a phrase similarity task.
Copyright © 2010 Cognitive Science Society, Inc.
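The two composition functions named in the abstract can be sketched as elementwise operations over word vectors. The toy vectors below are hypothetical stand-ins for corpus-derived co-occurrence counts, not values from the paper:

```python
# Hypothetical 4-dimensional distributional vectors for two words
# in a phrase (e.g. an adjective and a noun).
u = [2, 0, 1, 3]
v = [1, 4, 0, 2]

# Additive model: the phrase vector is the elementwise sum u + v.
additive = [ui + vi for ui, vi in zip(u, v)]

# Multiplicative model: the phrase vector is the elementwise product
# u * v, which keeps mass only on dimensions shared by both words.
multiplicative = [ui * vi for ui, vi in zip(u, v)]

print(additive)        # [3, 4, 1, 5]
print(multiplicative)  # [2, 0, 0, 6]
```

In a phrase similarity task, the resulting phrase vectors would then be compared (e.g. by cosine similarity) against vectors composed for other phrases.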

Year:  2010        PMID: 21564253     DOI: 10.1111/j.1551-6709.2010.01106.x

Source DB:  PubMed          Journal:  Cogn Sci        ISSN: 0364-0213


Citing articles:  23 in total

1.  Grounding compositional symbols: no composition without discrimination.

Authors:  Alberto Greco; Elena Carrea
Journal:  Cogn Process       Date:  2011-11-16

2.  Liberal Entity Extraction: Rapid Construction of Fine-Grained Entity Typing Systems.

Authors:  Lifu Huang; Jonathan May; Xiaoman Pan; Heng Ji; Xiang Ren; Jiawei Han; Lin Zhao; James A Hendler
Journal:  Big Data       Date:  2017-03       Impact factor: 2.128

3.  Neural representations of the concepts in simple sentences: Concept activation prediction and context effects.

Authors:  Marcel Adam Just; Jing Wang; Vladimir L Cherkassky
Journal:  Neuroimage       Date:  2017-06-17       Impact factor: 6.556

4.  Modelling meaning composition from formalism to mechanism.

Authors:  Andrea E Martin; Giosuè Baggio
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2019-12-16       Impact factor: 6.237

5.  An Integrated Neural Decoder of Linguistic and Experiential Meaning.

Authors:  Andrew James Anderson; Jeffrey R Binder; Leonardo Fernandino; Colin J Humphries; Lisa L Conant; Rajeev D S Raizada; Feng Lin; Edmund C Lalor
Journal:  J Neurosci       Date:  2019-09-30       Impact factor: 6.167

6.  How the brain composes morphemes into meaning.

Authors:  Laura Gwilliams
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2019-12-16       Impact factor: 6.237

7.  Feature Uncertainty Predicts Behavioral and Neural Responses to Combined Concepts.

Authors:  Sarah H Solomon; Sharon L Thompson-Schill
Journal:  J Neurosci       Date:  2020-05-13       Impact factor: 6.167

8.  [Review] Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension.

Authors:  Uri Hasson; Giovanna Egidi; Marco Marelli; Roel M Willems
Journal:  Cognition       Date:  2018-07-24

9.  Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning.

Authors:  Andrew James Anderson; Douwe Kiela; Jeffrey R Binder; Leonardo Fernandino; Colin J Humphries; Lisa L Conant; Rajeev D S Raizada; Scott Grimm; Edmund C Lalor
Journal:  J Neurosci       Date:  2021-03-22       Impact factor: 6.167

10.  Encoding sequential information in semantic space models: comparing holographic reduced representation and random permutation.

Authors:  Gabriel Recchia; Magnus Sahlgren; Pentti Kanerva; Michael N Jones
Journal:  Comput Intell Neurosci       Date:  2015-04-07
