| Literature DB >> 18276417 |
Abstract
Previous work on analog VLSI implementations of multilayer perceptrons with on-chip learning has mainly targeted algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that gradient descent with direct approximation of the gradient, instead of back-propagation, is more economical for parallel analog implementations. It is also shown that this technique (called 'weight perturbation') is suitable for multilayer recurrent networks. A discrete-level analog implementation showing the training of an XOR network as an example is presented.
Entities:
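The abstract names the technique but does not spell it out; the sketch below illustrates the general weight-perturbation idea: estimate each weight's gradient by perturbing that weight, measuring the resulting change in network error, and taking a forward difference, with no back-propagation pass. The network size (2-3-1), learning rate, perturbation size, and step count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def forward(w, x):
    """2-3-1 sigmoid MLP; w is a flat vector of 13 weights (incl. biases)."""
    W1 = w[:9].reshape(3, 3)     # hidden layer: 3 units x (2 inputs + bias)
    W2 = w[9:13].reshape(1, 4)   # output layer: 1 unit x (3 hidden + bias)
    h = 1.0 / (1.0 + np.exp(-W1 @ np.append(x, 1.0)))
    y = 1.0 / (1.0 + np.exp(-W2 @ np.append(h, 1.0)))
    return y[0]

def error(w, X, T):
    """Sum-of-squares error over the training set."""
    return sum((forward(w, x) - t) ** 2 for x, t in zip(X, T))

def train_weight_perturbation(X, T, steps=5000, lr=0.5, pert=1e-3, seed=1):
    rng = np.random.default_rng(seed)
    w = rng.uniform(-1.0, 1.0, size=13)
    for _ in range(steps):
        base = error(w, X, T)
        grad = np.zeros_like(w)
        for i in range(len(w)):
            w[i] += pert                               # perturb one weight
            grad[i] = (error(w, X, T) - base) / pert   # measured error change
            w[i] -= pert                               # restore the weight
        w -= lr * grad                                 # plain gradient-descent step
    return w

# XOR training set, as in the paper's demonstration
X = [np.array(p, dtype=float) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0.0, 1.0, 1.0, 0.0]
w0 = np.random.default_rng(1).uniform(-1.0, 1.0, size=13)  # same init as training
w = train_weight_perturbation(X, T)
print("error before:", error(w0, X, T), "after:", error(w, X, T))
```

The hardware appeal noted in the abstract is visible here: the update needs only forward evaluations of the network and a subtraction per weight, rather than the dedicated backward-pass circuitry that back-propagation would require.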
Year: 1992 PMID: 18276417 DOI: 10.1109/72.105429
Source DB: PubMed Journal: IEEE Trans Neural Netw ISSN: 1045-9227