
Complex-valued autoencoders.

Pierre Baldi, Zhiqin Lu.

Abstract

Autoencoders are unsupervised machine learning circuits, with typically one hidden layer, whose learning goal is to minimize an average distortion measure between inputs and outputs. Linear autoencoders correspond to the special case where only linear transformations between visible and hidden variables are used. While linear autoencoders can be defined over any field, only real-valued linear autoencoders have been studied so far. Here we study complex-valued linear autoencoders where the components of the training vectors and adjustable matrices are defined over the complex field with the L2 norm. We provide simpler and more general proofs that unify the real-valued and complex-valued cases, showing that in both cases the landscape of the error function is invariant under certain groups of transformations. The landscape has no local minima, a family of global minima associated with Principal Component Analysis, and many families of saddle points associated with orthogonal projections onto subspaces spanned by suboptimal subsets of eigenvectors of the covariance matrix. The theory yields several iterative, convergent, learning algorithms, a clear understanding of the generalization properties of the trained autoencoders, and can equally be applied to the hetero-associative case when external targets are provided. Partial results on deep architectures as well as the differential geometry of autoencoders are also presented. The general framework described here is useful to classify autoencoders and identify general properties that ought to be investigated for each class, illuminating some of the connections between autoencoders, unsupervised learning, clustering, Hebbian learning, and information theory.
Copyright © 2012 Elsevier Ltd. All rights reserved.
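The abstract's central result, that the global minima of the linear autoencoder error landscape coincide with Principal Component Analysis, can be illustrated numerically in the complex-valued setting. The following sketch (variable names and dimensions are illustrative assumptions, not from the paper) compares the reconstruction error of the unitary projection onto the top-p eigenvectors of the Hermitian covariance matrix with the best rank-p reconstruction given by the truncated SVD (Eckart-Young); the two errors agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed): n complex training vectors of dimension d,
# compressed through a hidden layer of size p.
n, d, p = 200, 8, 3
X = rng.standard_normal((n, d)) + 1j * rng.standard_normal((n, d))

# Sample covariance (Hermitian) and its top-p eigenvectors.
C = X.conj().T @ X / n
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
U = eigvecs[:, -p:]                    # top-p eigenvectors

# Unitary projection onto the top-p eigenspace:
# the PCA solution the theory identifies as the global minimum.
X_pca = X @ U @ U.conj().T
err_pca = np.mean(np.abs(X - X_pca) ** 2)

# Best rank-p approximation of the data matrix via truncated SVD.
Us, s, Vh = np.linalg.svd(X, full_matrices=False)
X_svd = Us[:, :p] @ np.diag(s[:p]) @ Vh[:p, :]
err_svd = np.mean(np.abs(X - X_svd) ** 2)

print(np.isclose(err_pca, err_svd))
```

Because the top-p eigenvectors of the covariance are (up to phase) the top-p right singular vectors of the data matrix, the projection reproduces the truncated-SVD reconstruction, so no rank-p linear map can do better.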


Year:  2012        PMID: 22622264      PMCID: PMC3399055          DOI: 10.1016/j.neunet.2012.04.011

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  References: 5 in total

1.  A fast learning algorithm for deep belief nets.

Authors:  Geoffrey E Hinton; Simon Osindero; Yee-Whye Teh
Journal:  Neural Comput       Date:  2006-07       Impact factor: 2.026

2.  Reducing the dimensionality of data with neural networks.

Authors:  G E Hinton; R R Salakhutdinov
Journal:  Science       Date:  2006-07-28       Impact factor: 47.728

3.  A connection between score matching and denoising autoencoders.

Authors:  Pascal Vincent
Journal:  Neural Comput       Date:  2011-04-14       Impact factor: 2.026

4.  Auto-association by multilayer perceptrons and singular value decomposition.

Authors:  H Bourlard; Y Kamp
Journal:  Biol Cybern       Date:  1988       Impact factor: 2.086

5.  A simplified neuron model as a principal component analyzer.

Authors:  E Oja
Journal:  J Math Biol       Date:  1982       Impact factor: 2.259

  Cited by: 2 in total

1.  Learning in the Machine: Random Backpropagation and the Deep Learning Channel.

Authors:  Pierre Baldi; Peter Sadowski; Zhiqin Lu
Journal:  Artif Intell       Date:  2018-04-03       Impact factor: 9.088

2.  Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network.

Authors:  Holger Finger; Peter König
Journal:  Front Comput Neurosci       Date:  2014-01-27       Impact factor: 2.380

