
Stochastic competitive learning.

B. Kosko

Abstract

Competitive learning systems are examined as stochastic dynamical systems. This includes continuous and discrete formulations of unsupervised, supervised, and differential competitive learning systems. These systems estimate an unknown probability density function from random pattern samples and behave as adaptive vector quantizers. In feedforward competitive neural networks, synaptic vectors quantize the pattern space and converge to pattern class centroids or local probability maxima. A stochastic Lyapunov argument shows that competitive synaptic vectors converge to centroids exponentially quickly and reduces competitive learning to stochastic gradient descent. Convergence does not depend on a specific dynamical model of how neuronal activations change. These results extend to competitive estimation of local covariances and higher-order statistics.
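The abstract's central claim can be illustrated with a minimal sketch of the unsupervised winner-take-all case: each pattern sample updates only the nearest (winning) synaptic vector, and a decaying gain makes each synaptic vector the running mean of the samples it has won, so it converges to its pattern class centroid. This is an illustrative stochastic-approximation sketch, not the paper's formulation; the function and parameter names are assumptions.

```python
import numpy as np

def competitive_learning(samples, num_units):
    """Winner-take-all competitive learning sketch (illustrative, not from the paper).

    Each synaptic vector becomes the running mean (centroid) of the
    pattern samples it wins, behaving as an adaptive vector quantizer.
    """
    # Initialize synaptic vectors at the first few pattern samples.
    w = samples[:num_units].astype(float).copy()
    wins = np.zeros(num_units)
    for x in samples:
        # The winning unit is the synaptic vector nearest the sample.
        j = int(np.argmin(np.linalg.norm(w - x, axis=1)))
        wins[j] += 1.0
        # Decaying gain 1/wins[j] makes w[j] the exact running mean of
        # the samples it has won -- a stochastic-approximation step
        # toward the class centroid.
        w[j] += (x - w[j]) / wins[j]
    return w
```

With two well-separated sample clusters and two units, the two synaptic vectors settle near the two cluster centroids, matching the abstract's description of synaptic vectors quantizing the pattern space.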

Year: 1991    PMID: 18282865    DOI: 10.1109/72.134289

Source DB: PubMed    Journal: IEEE Trans Neural Netw    ISSN: 1045-9227


  1 in total

1.  Derivation of a novel efficient supervised learning algorithm from cortical-subcortical loops.

Authors:  Ashok Chandrashekar; Richard Granger
Journal:  Front Comput Neurosci       Date:  2012-01-10       Impact factor: 2.380

