Edmund T Rolls, Gustavo Deco.
Abstract
Neural principles that provide a foundation for memory, perception, and decision-making include place coding with sparse distributed representations, associative synaptic modification, and attractor networks whose storage capacity is on the order of the number of associatively modifiable recurrent synapses on any one neuron. Based on these and further principles of cortical computation, hypotheses are explored in which syntax is encoded in the cortex using sparse distributed place coding. Each cortical module, 2-3 mm in diameter, is proposed to be formed of a local attractor neuronal network with a capacity on the order of 10,000 words (e.g. subjects, verbs, or objects, depending on the module). Such a system may form a deep language-of-thought layer. For the information to be communicated to other people, both the modules in which neurons are firing (which encode the syntactic role) and which neurons within each module are firing (which specify the words) must be communicated. It is proposed that one solution to this (used in English) is temporal order encoding, for example subject-verb-object. It is shown with integrate-and-fire simulations that this order encoding could be implemented by weakly forward-coupled subject-verb-object modules. A related system can decode a temporal sequence. This approach, based on known principles of cortical computation, needs to be extended to investigate further whether it could form a biological foundation for the implementation of language in the brain. This article is part of a Special Issue entitled SI: Brain and Memory.
Keywords: Attractor network; Language; Semantics; Short-term memory; Stochastic neurodynamics; Syntax
Year: 2014 PMID: 25239476 DOI: 10.1016/j.brainres.2014.09.021
Source DB: PubMed Journal: Brain Res ISSN: 0006-8993 Impact factor: 3.252
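The sequential-ignition mechanism summarized in the abstract (weakly forward-coupled subject-verb-object attractor modules producing temporal order) can be illustrated with a minimal rate-model sketch. This is not the paper's integrate-and-fire simulation: the three-unit reduction, the saturating transfer function, and all parameter values here are invented for illustration, and in this noiseless version the forward coupling must be just strong enough to ignite the next module, whereas the paper's spiking networks can ignite through genuinely weak coupling with the help of stochastic firing.

```python
import numpy as np

def simulate(T=1000, dt=1.0, tau_r=20.0, tau_a=500.0,
             w_rec=3.0, w_fwd=1.2, w_adapt=2.0, theta=1.0):
    """Three attractor modules standing in for subject, verb, object.

    Each module has strong recurrent self-excitation (so it behaves as
    an attractor once ignited), a slow adaptation current, and forward
    coupling to the next module in the chain S -> V -> O. A brief cue
    delivered only to the subject module should ignite the modules in
    temporal order. Returns the first time each module's rate crosses
    0.5 (None if it never ignites). Illustrative parameters only.
    """
    r = np.zeros(3)               # population firing rates
    a = np.zeros(3)               # slow adaptation variables
    ignite = [None, None, None]   # first threshold-crossing times
    for t in range(T):
        cue = np.array([2.0, 0.0, 0.0]) if t < 100 else np.zeros(3)
        drive = w_rec * r + cue - w_adapt * a - theta
        drive[1:] += w_fwd * r[:-1]            # forward coupling S->V->O
        f = np.tanh(np.maximum(drive, 0.0))    # saturating transfer
        r += dt / tau_r * (-r + f)
        a += dt / tau_a * (-a + r)
        for i in range(3):
            if ignite[i] is None and r[i] > 0.5:
                ignite[i] = t
    return ignite

t_s, t_v, t_o = simulate()
print("ignition times (S, V, O):", t_s, t_v, t_o)
```

With these parameters, cueing only the subject module yields ignition times ordered S before V before O, i.e. the temporal sequence emerges from the coupling architecture rather than from any external clock.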