
Simplifying mixture models through function approximation.

James T Kwok, Kai Zhang.

Abstract

The finite mixture model is widely used in various statistical learning problems. However, the model obtained may contain a large number of components, making it inefficient in practical applications. In this paper, we propose to simplify the mixture model by minimizing an upper bound of the approximation error between the original and the simplified model, using the L2 distance measure. This is achieved by first grouping similar components together and then performing local fitting through function approximation. The simplified model can then be used as a replacement for the original model to speed up various algorithms involving mixture models during training (e.g., Bayesian filtering, belief propagation) and testing [e.g., kernel density estimation, support vector machine (SVM) testing]. Encouraging results are observed in experiments on density estimation, clustering-based image segmentation, and simplification of SVM decision functions.
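The group-then-fit idea in the abstract can be illustrated with a minimal sketch. The code below is NOT the paper's algorithm: it groups 1-D Gaussian components by sorted mean (a crude stand-in for the paper's similarity-based grouping) and merges each group by moment matching rather than by minimizing the L2-distance upper bound. All function and variable names are hypothetical.

```python
import numpy as np

def merge_components(weights, means, variances, n_groups):
    """Simplify a 1-D Gaussian mixture by grouping components with
    nearby means and replacing each group with one moment-matched
    Gaussian. Illustrative sketch only; the paper instead minimizes
    an upper bound on the L2 approximation error."""
    order = np.argsort(means)
    # Crude grouping: contiguous blocks of components in mean order.
    groups = np.array_split(order, n_groups)
    w_new, mu_new, var_new = [], [], []
    for g in groups:
        w = weights[g]
        W = w.sum()
        mu = np.dot(w, means[g]) / W
        # Merged variance = within-component variance + spread of means.
        var = np.dot(w, variances[g] + (means[g] - mu) ** 2) / W
        w_new.append(W)
        mu_new.append(mu)
        var_new.append(var)
    return np.array(w_new), np.array(mu_new), np.array(var_new)
```

Moment matching preserves the overall mixture mean and total weight exactly, so the simplified model can stand in for the original in downstream evaluations at a fraction of the cost.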

Mesh:

Year:  2010        PMID: 20181542     DOI: 10.1109/TNN.2010.2040835

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  2 in total

1.  On a Variational Definition for the Jensen-Shannon Symmetrization of Distances Based on the Information Radius.

Authors:  Frank Nielsen
Journal:  Entropy (Basel)       Date:  2021-04-14       Impact factor: 2.524

2.  The Fisher-Rao Distance between Multivariate Normal Distributions: Special Cases, Bounds and Applications.

Authors:  Julianna Pinele; João E Strapasson; Sueli I R Costa
Journal:  Entropy (Basel)       Date:  2020-04-01       Impact factor: 2.524

