
An analog VLSI recurrent neural network learning a continuous-time trajectory.

G Cauwenberghs.

Abstract

Real-time algorithms for gradient-descent supervised learning in recurrent dynamical neural networks fail to support scalable VLSI implementation, because their computational complexity grows sharply with the network dimension. We present an alternative implementation in analog VLSI, which employs a stochastic perturbation algorithm to observe the gradient of the error index directly on the network in random directions of the parameter space, thereby avoiding the tedious task of deriving the gradient from an explicit model of the network dynamics. The network contains six fully recurrent neurons with continuous-time dynamics, providing 42 free parameters (6 × 6 connection strengths plus 6 thresholds). The chip implementing the network includes local provisions for both learning and storage of the parameters, integrated in a scalable architecture that can readily be expanded to learning recurrent dynamical networks of larger dimensionality. We describe and characterize the functional elements of the implemented recurrent network and integrated learning system, and include experimental results obtained from training the network to represent a quadrature-phase oscillator.
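To make the learning scheme concrete, the following is a minimal software sketch of random-direction stochastic perturbation applied to a small continuous-time recurrent network trained toward a quadrature-phase (sin/cos) target, in the spirit of the experiment described above. It is not the chip's circuitry or the paper's exact algorithm: the leaky-integrator tanh neuron model, the two-sided error difference, and all constants (step sizes, perturbation size, oscillation frequency) are illustrative assumptions.

```python
import numpy as np

N = 6                                 # fully recurrent neurons, as in the chip
rng = np.random.default_rng(0)

def run_network(theta, T=200, dt=0.05):
    """Integrate the assumed neuron model x' = -x + W tanh(x) + b and
    return the trajectory of the first two units as the network output."""
    W = theta[:N * N].reshape(N, N)   # 36 connection strengths
    b = theta[N * N:]                 # 6 thresholds -> 42 free parameters
    x = np.zeros(N)
    traj = np.empty((T, 2))
    for t in range(T):
        x += dt * (-x + W @ np.tanh(x) + b)
        traj[t] = x[:2]
    return traj

def error_index(theta, target):
    """Mean-squared error between network output and target trajectory."""
    return np.mean((run_network(theta) - target) ** 2)

# Target: a quadrature-phase oscillator, i.e. a sine/cosine pair.
t = np.arange(200) * 0.05
target = np.stack([np.sin(2 * np.pi * 0.2 * t),
                   np.cos(2 * np.pi * 0.2 * t)], axis=1)

theta = rng.normal(scale=0.1, size=N * N + N)   # 42 parameters
sigma, eta = 0.01, 0.05                          # perturbation size, step size

for epoch in range(500):
    pi = rng.choice([-1.0, 1.0], size=theta.size)   # random bipolar direction
    # The two-sided difference of the observed error index approximates the
    # directional derivative along pi; since each component of pi is +/-1,
    # multiplying by pi recovers a per-parameter gradient estimate without
    # any explicit model of the network dynamics.
    e_plus = error_index(theta + sigma * pi, target)
    e_minus = error_index(theta - sigma * pi, target)
    theta -= eta * (e_plus - e_minus) / (2 * sigma) * pi
```

Consistent with the abstract, the appeal of this scheme for VLSI is that each parameter cell only needs its own perturbation bit and the globally broadcast error difference to form its update locally, so the architecture scales with the number of parameters rather than with the cost of model-based gradient computation.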

Year:  1996        PMID: 18255589     DOI: 10.1109/72.485671

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


2 in total

1.  Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks.

Authors:  Hesham Mostafa; Bruno Pedroni; Sadique Sheik; Gert Cauwenberghs
Journal:  Front Neurosci       Date:  2017-09-06       Impact factor: 4.677

2.  Deep Supervised Learning Using Local Errors.

Authors:  Hesham Mostafa; Vishwajith Ramesh; Gert Cauwenberghs
Journal:  Front Neurosci       Date:  2018-08-31       Impact factor: 4.677

