
Slow feature analysis: unsupervised learning of invariances.

Laurenz Wiskott, Terrence J. Sejnowski

Abstract

Invariant features of temporally varying signals are useful for analysis and classification. Slow feature analysis (SFA) is a new method for learning invariant or slowly varying features from a vectorial input signal. It is based on a nonlinear expansion of the input signal and application of principal component analysis to this expanded signal and its time derivative. It is guaranteed to find the optimal solution within a family of functions directly and can learn to extract a large number of decorrelated features, which are ordered by their degree of invariance. SFA can be applied hierarchically to process high-dimensional input signals and extract complex features. SFA is applied first to complex cell tuning properties based on simple cell output, including disparity and motion. Then more complicated input-output functions are learned by repeated application of SFA. Finally, a hierarchical network of SFA modules is presented as a simple model of the visual system. The same unstructured network can learn translation, size, rotation, contrast, or, to a lesser degree, illumination invariance for one-dimensional objects, depending on only the training stimulus. Surprisingly, only a few training objects suffice to achieve good generalization to new objects. The generated representation is suitable for object recognition. Performance degrades if the network is trained to learn multiple invariances simultaneously.
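The core procedure described in the abstract — nonlinear expansion of the input, whitening, then principal component analysis on the time derivative of the expanded signal, keeping the components of smallest derivative variance — can be sketched in a few lines of NumPy. This is an illustrative reimplementation under stated assumptions (quadratic expansion, finite-difference derivative), not the authors' code; the function name `sfa` and its parameters are hypothetical.

```python
import numpy as np

def sfa(x, n_features=2):
    """Minimal slow feature analysis sketch.

    x: array of shape (T, d), a time series of d-dimensional inputs.
    Returns the n_features slowest output signals, shape (T, n_features).
    """
    T, d = x.shape
    # 1. Nonlinear expansion: input plus all quadratic monomials x_i * x_j.
    iu = np.triu_indices(d)
    z = np.hstack([x, (x[:, :, None] * x[:, None, :])[:, iu[0], iu[1]]])
    # 2. Center and whiten the expanded signal (first PCA).
    z -= z.mean(axis=0)
    cov = z.T @ z / T
    evals, evecs = np.linalg.eigh(cov)        # ascending eigenvalues
    keep = evals > 1e-10                      # drop near-singular directions
    W = evecs[:, keep] / np.sqrt(evals[keep])
    zw = z @ W
    # 3. PCA on the time derivative (finite differences); the eigenvectors
    #    with the SMALLEST eigenvalues give the slowest-varying features.
    dz = np.diff(zw, axis=0)
    dvals, dvecs = np.linalg.eigh(dz.T @ dz / len(dz))
    return zw @ dvecs[:, :n_features]
```

On the standard two-dimensional toy signal x1(t) = sin(t) + cos(11t)^2, x2(t) = cos(11t), the slowest extracted feature recovers the hidden slow source sin(t) up to sign, because x1 - x2^2 = sin(t) lies in the quadratic expansion.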


Year:  2002        PMID: 11936959     DOI: 10.1162/089976602317318938

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


Related articles: 86 in total

1.  Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

Authors:  Edmund T Rolls
Journal:  Front Comput Neurosci       Date:  2012-06-19       Impact factor: 2.380

2.  Continuous transformation learning of translation invariant representations.

Authors:  G Perry; E T Rolls; S M Stringer
Journal:  Exp Brain Res       Date:  2010-06-11       Impact factor: 1.972

3.  View-invariance learning in object recognition by pigeons depends on error-driven associative learning processes.

Authors:  Fabian A Soto; Jeffrey Y M Siow; Edward A Wasserman
Journal:  Vision Res       Date:  2012-04-17       Impact factor: 1.886

4.  Robustness of neural codes and its implication on natural image processing.

Authors:  Sheng Li; Si Wu
Journal:  Cogn Neurodyn       Date:  2007-07-12       Impact factor: 5.082

5.  Category learning induces position invariance of pattern recognition across the visual field.

Authors:  Martin Jüttner; Ingo Rentschler
Journal:  Proc Biol Sci       Date:  2008-02-22       Impact factor: 5.349

6.  Empirical intrinsic geometry for nonlinear modeling and time series filtering.

Authors:  Ronen Talmon; Ronald R Coifman
Journal:  Proc Natl Acad Sci U S A       Date:  2013-07-11       Impact factor: 11.205

7.  A place for time: the spatiotemporal structure of neural dynamics during natural audition.

Authors:  Greg J Stephens; Christopher J Honey; Uri Hasson
Journal:  J Neurophysiol       Date:  2013-08-07       Impact factor: 2.714

8.  Unsupervised natural experience rapidly alters invariant object representation in visual cortex.

Authors:  Nuo Li; James J DiCarlo
Journal:  Science       Date:  2008-09-12       Impact factor: 47.728

9.  Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation.

Authors:  Marino Pagan; Eero P Simoncelli; Nicole C Rust
Journal:  Neural Comput       Date:  2016-09-14       Impact factor: 2.026

10.  Does learned shape selectivity in inferior temporal cortex automatically generalize across retinal position?

Authors:  David D Cox; James J DiCarlo
Journal:  J Neurosci       Date:  2008-10-01       Impact factor: 6.167

