Learning by statistical cooperation of self-interested neuron-like computing elements.

A G Barto.

Abstract

Since the usual approaches to cooperative computation in networks of neuron-like computing elements do not assume that network components have any "preferences", they do not make substantive contact with game theoretic concepts, despite their use of some of the same terminology. In the approach presented here, however, each network component, or adaptive element, is a self-interested agent that prefers some inputs over others and "works" toward obtaining the most highly preferred inputs. Here we describe an adaptive element that is robust enough to learn to cooperate with other elements like itself in order to further its self-interests. It is argued that some of the longstanding problems concerning adaptation and learning by networks might be solvable by this form of cooperativity, and computer simulation experiments are described that show how networks of self-interested components that are sufficiently robust can solve rather difficult learning problems. We then place the approach in its proper historical and theoretical perspective through comparison with a number of related algorithms. A secondary aim of this article is to suggest that beyond what is explicitly illustrated here, there is a wealth of ideas from game theory and allied disciplines such as mathematical economics that can be of use in thinking about cooperative computation in both nervous systems and man-made systems.

Year:  1985        PMID: 3915497

Source DB:  PubMed          Journal:  Hum Neurobiol        ISSN: 0721-9075


  11 in total

1.  Connectionist models of conditioning: A tutorial.

Authors:  E J Kehoe
Journal:  J Exp Anal Behav       Date:  1989-11       Impact factor: 2.468

2.  Neural networks for perceptual processing: from simulation tools to theories.

Authors:  Kevin Gurney
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2007-03-29       Impact factor: 6.237

3.  A more biologically plausible learning rule for neural networks.

Authors:  P Mazzoni; R A Andersen; M I Jordan
Journal:  Proc Natl Acad Sci U S A       Date:  1991-05-15       Impact factor: 11.205

4.  Connectionistic models of Boolean category representation.

Authors:  D J Volper; S E Hampson
Journal:  Biol Cybern       Date:  1986       Impact factor: 2.086

5.  Disjunctive models of Boolean category learning.

Authors:  S E Hampson; D J Volper
Journal:  Biol Cybern       Date:  1987       Impact factor: 2.086

6.  Spike-based reinforcement learning in continuous state and action space: when policy gradient methods fail.

Authors:  Eleni Vasilaki; Nicolas Frémaux; Robert Urbanczik; Walter Senn; Wulfram Gerstner
Journal:  PLoS Comput Biol       Date:  2009-12-04       Impact factor: 4.475

7.  Democratic population decisions result in robust policy-gradient learning: a parametric study with GPU simulations.

Authors:  Paul Richmond; Lars Buesing; Michele Giugliano; Eleni Vasilaki
Journal:  PLoS One       Date:  2011-05-04       Impact factor: 3.240

8.  (Review) Emergent structured transition from variation to repetition in a biologically-plausible model of learning in basal ganglia.

Authors:  Ashvin Shah; Kevin N Gurney
Journal:  Front Psychol       Date:  2014-02-11

9.  Reinforcement Learning of Linking and Tracing Contours in Recurrent Neural Networks.

Authors:  Tobias Brosch; Heiko Neumann; Pieter R Roelfsema
Journal:  PLoS Comput Biol       Date:  2015-10-23       Impact factor: 4.475

10.  (Review) Eligibility Traces and Plasticity on Behavioral Time Scales: Experimental Support of NeoHebbian Three-Factor Learning Rules.

Authors:  Wulfram Gerstner; Marco Lehmann; Vasiliki Liakoni; Dane Corneil; Johanni Brea
Journal:  Front Neural Circuits       Date:  2018-07-31       Impact factor: 3.492
