
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.

Peter Auer, Harald Burgsteiner, Wolfgang Maass.

Abstract

One may argue that the simplest type of neural network beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that, in contrast to the familiar model of a "multi-layer perceptron", the parallel perceptron considered here has only binary values as the outputs of the gates on its hidden layer. For a long time it was thought that no competitive learning algorithm exists for these extremely simple neural networks, which also came to be known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer by sigmoidal gates (or RBF gates), and that one has to tune the weights on at least two successive layers, in order to achieve satisfactory learning results for any class of neural networks that yields universal approximators. We show that this assumption is not true by exhibiting a simple learning algorithm for parallel perceptrons: the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. These features make the p-delta rule attractive both as a biologically more realistic alternative to backprop in neural circuits and for implementations in special-purpose hardware. We show that the p-delta rule also implements gradient descent, with regard to a suitable error measure, although it does not require the computation of derivatives. Furthermore, experiments on common real-world benchmark datasets show that its performance is competitive with that of other learning approaches from neural networks and machine learning. It has recently been shown [Anthony, M. (2007). On the generalization error of fixed combinations of classifiers. Journal of Computer and System Sciences, 73(5), 725-734; Anthony, M. (2004). On learning a function of perceptrons. In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks (Vol. 2, pp. 967-972)] that one can also prove quite satisfactory bounds for the generalization error of this new learning rule.
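
The abstract describes a parallel perceptron (a single layer of threshold gates whose majority vote or output fraction forms the circuit output) and a derivative-free update that touches only that one layer of weights. Below is a minimal illustrative sketch of that idea, assuming NumPy; the class name, the hyperparameters (eta, eps, gamma, mu), and the exact form of the margin-stabilization term are reconstructions from the abstract's description, not the authors' reference implementation.

```python
import numpy as np

# Minimal sketch of a parallel perceptron trained with a p-delta-style
# update, reconstructed from the abstract. Names, hyperparameters, and
# the margin term are assumptions for illustration; see the paper for
# the actual p-delta rule.

class ParallelPerceptron:
    def __init__(self, n_inputs, n_units=11, eta=0.01, eps=0.05,
                 gamma=0.1, mu=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # one weight vector per hidden threshold gate
        self.W = rng.normal(size=(n_units, n_inputs))
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
        self.eta, self.eps, self.gamma, self.mu = eta, eps, gamma, mu

    def predict(self, x):
        # binary gate outputs in {-1, +1}; their mean is the analog output
        s = np.where(self.W @ x >= 0, 1.0, -1.0)
        return s.mean()

    def update(self, x, target):
        # derivative-free update: only this single layer of weights changes
        a = self.W @ x                      # gate activations w_i . x
        out = np.where(a >= 0, 1.0, -1.0).mean()
        dW = np.zeros_like(self.W)
        if out > target + self.eps:         # output too high: flip some +1 gates
            dW[a >= 0] -= x
        elif out < target - self.eps:       # output too low: flip some -1 gates
            dW[a < 0] += x
        # margin stabilization: push activations near zero away from the
        # decision boundary so the learned circuit is robust
        dW[(a >= 0) & (a < self.gamma)] += self.mu * x
        dW[(a < 0) & (a > -self.gamma)] -= self.mu * x
        self.W += self.eta * dW
        # renormalize so the margin parameter gamma keeps its meaning
        self.W /= np.linalg.norm(self.W, axis=1, keepdims=True)
```

For example, repeatedly calling update(x, y) over a dataset with targets y scaled to [-1, 1] drives the mean gate output toward y within tolerance eps; note that no derivatives are computed anywhere in the update, matching the abstract's claim.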

Year:  2007        PMID: 18249524     DOI: 10.1016/j.neunet.2007.12.036

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


Related articles: 11 in total

1.  Detailed mutational analysis of Vga(A) interdomain linker: implication for antibiotic resistance specificity and mechanism.

Authors:  Jakub Lenart; Vladimir Vimberg; Ludmila Vesela; Jiri Janata; Gabriela Balikova Novotna
Journal:  Antimicrob Agents Chemother       Date:  2014-12-15       Impact factor: 5.191

2.  Predicting Thermal Behavior of Secondary Organic Aerosols.

Authors:  John H Offenberg; Michael Lewandowski; Tadeusz E Kleindienst; Kenneth S Docherty; Mohammed Jaoui; Jonathan Krug; Theran P Riedel; David A Olson
Journal:  Environ Sci Technol       Date:  2017-08-10       Impact factor: 9.028

3.  Functional identification of biological neural networks using reservoir adaptation for point processes.

Authors:  Tayfun Gürel; Stefan Rotter; Ulrich Egert
Journal:  J Comput Neurosci       Date:  2009-07-29       Impact factor: 1.621

4.  A Complex-Valued Oscillatory Neural Network for Storage and Retrieval of Multidimensional Aperiodic Signals.

Authors:  Dipayan Biswas; Sooryakiran Pallikkulath; V Srinivasa Chakravarthy
Journal:  Front Comput Neurosci       Date:  2021-05-24       Impact factor: 2.380

5.  Application of neurocomputing for data approximation and classification in wireless sensor networks.

Authors:  Amir Jabbari; Reiner Jedermann; Ramanan Muthuraman; Walter Lang
Journal:  Sensors (Basel)       Date:  2009-04-24       Impact factor: 3.576

6.  Morphological Neuron Classification Using Machine Learning.

Authors:  Xavier Vasques; Laurent Vanel; Guillaume Villette; Laura Cif
Journal:  Front Neuroanat       Date:  2016-11-01       Impact factor: 3.856

7.  Cascade recurring deep networks for audible range prediction.

Authors:  Yonghyun Nam; Oak-Sung Choo; Yu-Ri Lee; Yun-Hoon Choung; Hyunjung Shin
Journal:  BMC Med Inform Decis Mak       Date:  2017-05-18       Impact factor: 2.796

8.  SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.

Authors:  Friedemann Zenke; Surya Ganguli
Journal:  Neural Comput       Date:  2018-04-13       Impact factor: 2.026

9.  An Oscillatory Neural Autoencoder Based on Frequency Modulation and Multiplexing.

Authors:  Karthik Soman; Vignesh Muralidharan; V Srinivasa Chakravarthy
Journal:  Front Comput Neurosci       Date:  2018-07-10       Impact factor: 2.380

10.  Bio-Inspired Evolutionary Model of Spiking Neural Networks in Ionic Liquid Space.

Authors:  Ensieh Iranmehr; Saeed Bagheri Shouraki; Mohammad Mahdi Faraji; Nasim Bagheri; Bernabe Linares-Barranco
Journal:  Front Neurosci       Date:  2019-11-08       Impact factor: 4.677

