Richard J. Hathaway, James C. Bezdek, Nikhil R. Pal.
Abstract
Several recent papers have described sequential competitive learning algorithms that are curious hybrids of algorithms used to optimize the fuzzy c-means (FCM) and learning vector quantization (LVQ) models. First, we show that these hybrids do not optimize the FCM functional. Then we show that the gradient descent conditions they use are not necessary conditions for optimization of a sequential version of the FCM functional. We give a numerical example that demonstrates some weaknesses of the sequential scheme proposed by Chung and Lee. Finally, we explain why these algorithms may work at times, by exhibiting the stochastic approximation problem that they unknowingly attempt to solve. Copyright 1996, published by Elsevier Science Ltd.
Year: 1996 PMID: 12662563 DOI: 10.1016/0893-6080(95)00094-1
Source DB: PubMed Journal: Neural Netw ISSN: 0893-6080