| Literature DB >> 12959670 |
Abstract
Temporal slowness is a learning principle that allows learning of invariant representations by extracting slowly varying features from quickly varying input signals. Slow feature analysis (SFA) is an efficient algorithm based on this principle and has been applied to the learning of translation, scale, and other invariances in a simple model of the visual system. Here, a theoretical analysis of the optimization problem solved by SFA is presented, which provides a deeper understanding of the simulation results obtained in previous studies.
Entities:
Mesh:
Year: 2003 PMID: 12959670 DOI: 10.1162/089976603322297331
Source DB: PubMed Journal: Neural Comput ISSN: 0899-7667 Impact factor: 2.026
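The abstract describes SFA only in prose. A minimal linear-SFA sketch in NumPy may help illustrate the principle: whiten the input, then find the whitened directions whose temporal derivatives have the smallest variance. This is an illustrative reconstruction under the standard linear-SFA formulation, not the paper's own code; all function and variable names here are assumptions.

```python
import numpy as np

def sfa(x, n_features=1):
    """Minimal linear slow feature analysis sketch (illustrative, not the
    paper's implementation).

    x: array of shape (T, D), a time series of D-dimensional inputs.
    Returns the n_features slowest output signals, zero-mean,
    unit-variance, decorrelated, ordered from slowest to fastest.
    """
    # Center the data.
    x = x - x.mean(axis=0)
    # Whiten: project onto principal components scaled to unit variance.
    cov = x.T @ x / (len(x) - 1)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10                      # drop degenerate directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = x @ W
    # Temporal derivative via finite differences of the whitened signal.
    dz = np.diff(z, axis=0)
    dcov = dz.T @ dz / (len(dz) - 1)
    # Slowest features = eigenvectors of the derivative covariance with
    # the smallest eigenvalues (eigh returns them in ascending order).
    _, dvec = np.linalg.eigh(dcov)
    return z @ dvec[:, :n_features]

# Example: recover a slow sine hidden in a fast-varying mixture.
t = np.linspace(0, 2 * np.pi, 1000)
slow = np.sin(t)
fast = np.sin(20 * t)
x = np.stack([slow + fast, slow - fast], axis=1)
y = sfa(x, n_features=1)[:, 0]
# y approximates the slow sine up to sign and scaling.
```

The extracted signal matches the slow component up to sign because the slowness objective, like any variance-based eigenproblem, is invariant to sign flips of the output.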