Self-Optimization in Continuous-Time Recurrent Neural Networks.

Mario Zarco; Tom Froese.

Abstract

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors, even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
Copyright © 2018 Zarco and Froese.
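
For readers unfamiliar with the technique described in the abstract, below is a minimal sketch of the self-optimization loop in its original discrete Hopfield form (as in Watson et al., reference 5 below): the network repeatedly relaxes from a random initial state to an attractor, and a Hebbian rule imprints each visited attractor onto a second, learned weight matrix that is added to the original dynamics. All parameter values (network size, learning rate, number of resets) are illustrative assumptions, not taken from the paper; the paper's actual contribution is extending this loop to a continuous-time recurrent neural network with asymmetric weights.

```python
# Minimal sketch of self-optimization (self-modeling) in a discrete
# Hopfield network. Parameters are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
N = 50

# Fixed "problem" weights: symmetric, so the dynamics have fixed-point attractors.
W = rng.choice([-1.0, 1.0], size=(N, N))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

L = np.zeros((N, N))   # learned associative-memory weights (start empty)
alpha = 0.0005         # Hebbian learning rate (assumed value)

def relax(s, weights, steps=10 * N):
    """Asynchronous updates until the state is (approximately) an attractor."""
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1.0 if weights[i] @ s >= 0.0 else -1.0
    return s

for _ in range(1000):                    # repeated relaxations from random states
    s = rng.choice([-1.0, 1.0], size=N)  # random initial condition
    s = relax(s, W + L)                  # converge on the combined dynamics
    L += alpha * np.outer(s, s)          # imprint the visited attractor
    np.fill_diagonal(L, 0.0)

# After learning, a fresh random state should converge on a lower-energy
# (more optimal) attractor of the original weights W than before learning.
s = relax(rng.choice([-1.0, 1.0], size=N), W + L)
print("final energy:", -0.5 * s @ W @ s)
```

Roughly speaking, the continuous-time version studied in the paper keeps the same outer loop but replaces the discrete relaxation step with numerical integration of CTRNN dynamics.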

Keywords:  Hebbian learning; Hopfield neural network; fixed-point attractors; modeling; optimization

Year:  2018        PMID: 33500975      PMCID: PMC7805835          DOI: 10.3389/frobt.2018.00096

Source DB:  PubMed          Journal:  Front Robot AI        ISSN: 2296-9144


References: 13 in total

1.  Resilient machines through continuous self-modeling.

Authors:  Josh Bongard; Victor Zykov; Hod Lipson
Journal:  Science       Date:  2006-11-17       Impact factor: 47.728

2.  A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks.

Authors:  Benoît Siri; Hugues Berry; Bruno Cessac; Bruno Delord; Mathias Quoy
Journal:  Neural Comput       Date:  2008-12       Impact factor: 2.026

3.  Optimization by simulated annealing.

Authors:  S Kirkpatrick; C D Gelatt; M P Vecchi
Journal:  Science       Date:  1983-05-13       Impact factor: 47.728

4.  Using Effect Size-or Why the P Value Is Not Enough.

Authors:  Gail M Sullivan; Richard Feinn
Journal:  J Grad Med Educ       Date:  2012-09

5.  Global adaptation in networks of selfish components: emergent associative memory at the system scale.

Authors:  Richard A Watson; Rob Mills; C L Buckley
Journal:  Artif Life       Date:  2011-05-09       Impact factor: 0.667

6.  Neural coordination can be enhanced by occasional interruption of normal firing patterns: a self-optimizing spiking neural network model.

Authors:  Alexander Woodward; Tom Froese; Takashi Ikegami
Journal:  Neural Netw       Date:  2014-09-16

7.  "Neural" computation of decisions in optimization problems.

Authors:  J J Hopfield; D W Tank
Journal:  Biol Cybern       Date:  1985       Impact factor: 2.086

8.  'Unlearning' has a stabilizing effect in collective memories.

Authors:  J J Hopfield; D I Feinstein; R G Palmer
Journal:  Nature       Date:  1983-07-14       Impact factor: 49.962

9.  Neurons with graded response have collective computational properties like those of two-state neurons.

Authors:  J J Hopfield
Journal:  Proc Natl Acad Sci U S A       Date:  1984-05       Impact factor: 11.205

10.  A dynamical systems account of sensorimotor contingencies.

Authors:  Thomas Buhrmann; Ezequiel Alejandro Di Paolo; Xabier Barandiaran
Journal:  Front Psychol       Date:  2013-05-27
