| Literature DB >> 11674849 |
Abstract
An expectation-maximization algorithm for learning sparse and overcomplete data representations is presented. The proposed algorithm exploits a variational approximation to a range of heavy-tailed distributions whose limit is the Laplacian. A rigorous lower bound on the sparse prior distribution is derived, which enables the analytic marginalization of a lower bound on the data likelihood. This lower bound enables the development of an expectation-maximization algorithm for learning the overcomplete basis vectors and inferring the most probable basis coefficients.
Year: 2001 PMID: 11674849 DOI: 10.1162/089976601753196003
Source DB: PubMed Journal: Neural Comput ISSN: 0899-7667 Impact factor: 2.026
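The scheme the abstract describes can be illustrated with a minimal sketch. The following is a hypothetical simplified implementation, not the paper's exact updates: it uses the standard Gaussian lower bound on the Laplacian prior, exp(-|s|) >= exp(-s^2/(2*xi) - xi/2), so the coefficient posterior becomes Gaussian, the E-step tightens the variational parameters xi, and the M-step re-estimates the overcomplete basis A. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def variational_em_sparse_coding(X, m, n_iter=50, sigma2=0.1, seed=0):
    """Sketch: variational EM for x ~ N(A s, sigma2 I) with a Laplacian
    prior on the coefficients s, bounded below by a Gaussian.

    X : (d, N) data matrix; m : number of basis vectors (m > d gives an
    overcomplete basis). Returns the learned basis A and the posterior
    mean coefficients Mu (the most probable coefficients under the bound).
    """
    d, N = X.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d, m))
    Xi = np.ones((m, N))          # variational parameters, one per coefficient
    for _ in range(n_iter):
        Mu = np.zeros((m, N))
        S_sum = np.zeros((m, m))  # accumulates E[s s^T] over data points
        for n in range(N):
            # E-step: under the Gaussian bound the posterior over s is Gaussian
            prec = A.T @ A / sigma2 + np.diag(1.0 / Xi[:, n])
            Sigma = np.linalg.inv(prec)
            mu = Sigma @ (A.T @ X[:, n]) / sigma2
            Mu[:, n] = mu
            Ess = Sigma + np.outer(mu, mu)   # second moment E[s s^T]
            S_sum += Ess
            # tighten the bound: xi_i = sqrt(E[s_i^2])
            Xi[:, n] = np.sqrt(np.diag(Ess)) + 1e-12
        # M-step: closed-form update of the basis vectors
        A = X @ Mu.T @ np.linalg.inv(S_sum + 1e-9 * np.eye(m))
    return A, Mu
```

Because the bound renders the marginal likelihood bound analytic, both steps are closed-form; sparsity emerges as the xi for unneeded coefficients shrink, concentrating their posteriors near zero.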