Odelia Schwartz, Terrence J. Sejnowski, Peter Dayan.
Abstract
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing.
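The generative process described in the abstract can be illustrated with a minimal sketch: a multivariate gaussian (modeling local filter structure) is multiplied by a mixer drawn from a small pool, with the assignment chosen probabilistically per input. All dimensions, the Rayleigh mixer prior, and the uniform assignment rule below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the paper.
n_filters = 4       # local (gaussian) filter responses per input
n_mixers = 2        # size of the pool of candidate mixer variables
n_samples = 20000

# Covariance of the gaussian component (models local filter structure):
# unit variance with uniform 0.3 correlation (positive definite).
cov = np.full((n_filters, n_filters), 0.3) + 0.7 * np.eye(n_filters)

# Each candidate mixer is a positive random scale; a Rayleigh prior
# is used here as an illustrative stand-in.
mixer_values = rng.rayleigh(scale=1.0, size=(n_samples, n_mixers))

# Probabilistic assignment: each input picks one mixer from the pool
# (uniformly here; the paper infers this assignment from data).
assign = rng.integers(0, n_mixers, size=n_samples)
v = mixer_values[np.arange(n_samples), assign]

# Gaussian component, then multiplication by the assigned mixer.
g = rng.multivariate_normal(np.zeros(n_filters), cov, size=n_samples)
x = g * v[:, None]

# The product is heavy-tailed: its fourth-moment ratio exceeds the
# gaussian value of 3, mimicking the statistics of filter responses
# to natural images.
print(np.mean(x**4) / np.mean(x**2) ** 2)
```

Dividing `x` by an estimate of its mixer (as in inference under the model) would recover roughly gaussian components, which is the link to divisive normalization noted in the abstract.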
Year: 2006 PMID: 16999575 PMCID: PMC2915771 DOI: 10.1162/neco.2006.18.11.2680
Source DB: PubMed Journal: Neural Comput ISSN: 0899-7667 Impact factor: 2.026