Rethinking eliminative connectionism.

G F Marcus

Abstract

Humans routinely generalize universal relationships to unfamiliar instances. If we are told "if glork then frum," and "glork," we can infer "frum"; any name that serves as the subject of a sentence can appear as the object of a sentence. These universals are pervasive in language and reasoning. One account of how they are generalized holds that humans possess mechanisms that manipulate symbols and variables; an alternative account holds that symbol-manipulation can be eliminated from scientific theories in favor of descriptions couched in terms of networks of interconnected nodes. Can these "eliminative" connectionist models offer a genuine alternative? This article shows that eliminative connectionist models cannot account for how we extend universals to arbitrary items. The argument runs as follows. First, if these models, as currently conceived, were to extend universals to arbitrary instances, they would have to generalize outside the space of training examples. Next, it is shown that the class of eliminative connectionist models that is currently popular cannot learn to extend universals outside the training space. This limitation might be avoided through the use of an architecture that implements symbol manipulation. Copyright 1998 Academic Press.
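The abstract's core claim, that popular eliminative connectionist models fail to extend a universal outside their training space, can be illustrated with a minimal sketch. The example below is not from the paper itself, though it mirrors the style of identity-function demonstrations Marcus uses: a one-layer sigmoid network is trained to copy 4-bit binary inputs, but only on even numbers (rightmost bit always 0). The network masters identity within the training space, yet on odd test numbers the output unit for the rightmost bit, whose weights were never pushed toward a target of 1, keeps answering 0. All names and hyperparameters here are illustrative choices, not anything prescribed by the article.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N = 4  # bits per number
random.seed(0)
# w[i][j]: weight from input bit j to output bit i, plus one bias per output
w = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
b = [0.0] * N

def forward(x):
    return [sigmoid(sum(w[i][j] * x[j] for j in range(N)) + b[i])
            for i in range(N)]

def bits(n):
    return [(n >> k) & 1 for k in range(N)]  # bits(6) -> [0, 1, 1, 0]

train = [bits(n) for n in range(0, 16, 2)]  # even numbers: bit 0 always 0
test  = [bits(n) for n in range(1, 16, 2)]  # odd numbers:  bit 0 always 1

lr = 0.5
for _ in range(2000):
    for x in train:
        y = forward(x)
        for i in range(N):
            # gradient of squared error through the sigmoid
            g = (y[i] - x[i]) * y[i] * (1.0 - y[i])
            for j in range(N):
                w[i][j] -= lr * g * x[j]
            b[i] -= lr * g

# Inside the training space: identity is learned for every even number.
learned_evens = all(round(forward(x)[i]) == x[i]
                    for x in train for i in range(N))

# Outside it: the output unit for bit 0 never saw a target of 1,
# so every odd test item gets bit 0 wrong (it is mapped to an even number).
fails_on_odds = all(round(forward(x)[0]) == 0 for x in test)

print(learned_evens, fails_on_odds)
```

The failure is structural, not a matter of insufficient training: the weight from input bit 0 is only ever updated when that bit is active, which never happens on the training set, so no amount of training on even numbers can prepare that unit for odd ones. This is the sense in which such models "cannot learn to extend universals outside the training space."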


Year:  1998        PMID: 9892549     DOI: 10.1006/cogp.1998.0694

Source DB:  PubMed          Journal:  Cogn Psychol        ISSN: 0010-0285            Impact factor:   3.468


Citing articles: 19 in total

1.  Can connectionist models of phonology assembly account for phonology?

Authors:  I Berent
Journal:  Psychon Bull Rev       Date:  2001-12

2.  Statistical learning in infants.

Authors:  Gerry T M Altmann
Journal:  Proc Natl Acad Sci U S A       Date:  2002-11-18       Impact factor: 11.205

3.  A depictive neural model for the representation of motion verbs.

Authors:  Sunil Rao; Igor Aleksander
Journal:  Cogn Process       Date:  2011-04-06

4.  Linguistic generalization and compositionality in modern artificial neural networks.

Authors:  Marco Baroni
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2019-12-16       Impact factor: 6.237

5.  Training neural networks to encode symbols enables combinatorial generalization.

Authors:  Ivan I Vankov; Jeffrey S Bowers
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2019-12-16       Impact factor: 6.237

6.  On the role of variables in phonology: Remarks on Hayes and Wilson (2008).

Authors:  Iris Berent; Colin Wilson; Gary Marcus; Doug Bemis
Journal:  Linguist Inq       Date:  2012

7.  How does the mind work? Insights from biology. (Review)

Authors:  Gary Marcus
Journal:  Top Cogn Sci       Date:  2009-01

8.  Indirection and symbol-like processing in the prefrontal cortex and basal ganglia.

Authors:  Trenton Kriete; David C Noelle; Jonathan D Cohen; Randall C O'Reilly
Journal:  Proc Natl Acad Sci U S A       Date:  2013-09-23       Impact factor: 11.205

9.  The nature of regularity and irregularity: evidence from Hebrew nominal inflection.

Authors:  Iris Berent; Steven Pinker; Joseph Shimron
Journal:  J Psycholinguist Res       Date:  2002-09

10.  Categorial compositionality: a category theory explanation for the systematicity of human cognition.

Authors:  Steven Phillips; William H Wilson
Journal:  PLoS Comput Biol       Date:  2010-07-22       Impact factor: 4.475


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.