
Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks.

M. Jabri, B. Flower.

Abstract

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation is more economical for parallel analog implementations. It is shown that this technique (which is called 'weight perturbation') is suitable for multilayer recurrent networks as well. A discrete-level analog implementation showing the training of an XOR network as an example is presented.
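The abstract's core idea, approximating the gradient by perturbing each weight and measuring the change in error rather than back-propagating, can be sketched in software. The following is a minimal illustration (not the authors' circuit or code) of weight perturbation training a small 2-2-1 network on XOR, using a forward-difference estimate of each weight's gradient; network size, learning rate, and perturbation magnitude are illustrative assumptions.

```python
import numpy as np

# Sketch of weight perturbation on a 2-2-1 feedforward network for XOR.
# For each weight: nudge it by `pert`, observe the change in total error,
# and use (E_perturbed - E_base) / pert as a gradient estimate.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(params, x):
    W1, b1, W2, b2 = params
    h = sigmoid(x @ W1 + b1)     # hidden layer
    return sigmoid(h @ W2 + b2)  # output layer

def error(params):
    return float(np.sum((forward(params, X) - y) ** 2))

params = [rng.normal(0.0, 1.0, (2, 2)), np.zeros(2),
          rng.normal(0.0, 1.0, (2, 1)), np.zeros(1)]

pert, lr = 1e-3, 0.5            # perturbation size and learning rate (assumed)
err0 = error(params)

for epoch in range(3000):
    for p in params:
        flat = p.ravel()        # view: edits write through to the parameter
        for i in range(flat.size):
            base = error(params)
            flat[i] += pert                       # perturb one weight
            grad = (error(params) - base) / pert  # forward-difference estimate
            flat[i] -= pert                       # restore the weight
            flat[i] -= lr * grad                  # gradient-descent step

print(err0, error(params))
```

Note the contrast the paper draws: this per-weight measure-and-update loop needs only a forward pass and an error measurement, which maps naturally onto parallel analog hardware, whereas back-propagation requires additional circuitry to propagate error signals backward through each layer.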


Year:  1992        PMID: 18276417     DOI: 10.1109/72.105429

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


Related articles: 5 in total

1.  Role of synaptic dynamics and heterogeneity in neuronal learning of temporal code.

Authors:  Ziv Rotman; Vitaly A Klyachko
Journal:  J Neurophysiol       Date:  2013-08-07       Impact factor: 2.714

2.  A reward-modulated hebbian learning rule can explain experimentally observed network reorganization in a brain control task.

Authors:  Robert Legenstein; Steven M Chase; Andrew B Schwartz; Wolfgang Maass
Journal:  J Neurosci       Date:  2010-06-23       Impact factor: 6.167

3.  Digitally programmable analogue circuits for sensor conditioning systems.

Authors:  Guillermo Zatorre; Nicolás Medrano; María Teresa Sanz; Concepción Aldea; Belén Calvo; Santiago Celma
Journal:  Sensors (Basel)       Date:  2009-05-14       Impact factor: 3.576

4.  Back-propagation operation for analog neural network hardware with synapse components having hysteresis characteristics.

Authors:  Michihito Ueda; Yu Nishitani; Yukihiro Kaneko; Atsushi Omote
Journal:  PLoS One       Date:  2014-11-13       Impact factor: 3.240

5.  Learning probabilistic neural representations with randomly connected circuits.

Authors:  Ori Maoz; Gašper Tkačik; Mohamad Saleh Esteki; Roozbeh Kiani; Elad Schneidman
Journal:  Proc Natl Acad Sci U S A       Date:  2020-09-18       Impact factor: 11.205

