
The Bayesian evidence scheme for regularizing probability-density estimating neural networks.

D Husmeier

Abstract

Training probability-density estimating neural networks with the expectation-maximization (EM) algorithm aims to maximize the likelihood of the training set and therefore leads to overfitting for sparse data. In this article, a regularization method for mixture models with generalized linear kernel centers is proposed, which adopts the Bayesian evidence approach and optimizes the hyperparameters of the prior by type II maximum likelihood. This includes a marginalization over the parameters, which is done by Laplace approximation and requires the derivation of the Hessian of the log-likelihood function. The incorporation of this approach into the standard training scheme leads to a modified form of the EM algorithm, which includes a regularization term and adapts the hyperparameters on-line after each EM cycle. The article presents applications of this scheme to classification problems, the prediction of stochastic time series, and latent space models.
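The regularized EM scheme the abstract describes can be loosely sketched for the simplest case, a one-dimensional Gaussian mixture. This is a minimal illustration, not the paper's algorithm: the function name `em_gmm_regularized`, the inverse-gamma-style variance prior with hyperparameters `a0` and `b`, and the crude hyperparameter update are all illustrative assumptions standing in for the paper's Laplace-approximated evidence update over generalized linear kernel centers.

```python
import numpy as np

def em_gmm_regularized(x, K=2, n_iter=50, a0=1.0, b0=1.0):
    """EM for a 1-D Gaussian mixture with an inverse-gamma-style prior on
    the component variances (MAP M-step). The prior scale b is crudely
    re-estimated after each EM cycle, loosely mimicking the on-line
    hyperparameter adaptation described in the abstract (not the paper's
    evidence/Laplace update)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    pi = np.full(K, 1.0 / K)                          # mixing weights
    mu = np.quantile(x, np.linspace(0.25, 0.75, K))   # spread-out initial means
    var = np.full(K, x.var())
    b = b0
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k)
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)       # stabilize before exponentiating
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        Nk = r.sum(axis=0)
        # M-step: the variance update is the MAP mode under InvGamma(a0, b);
        # the +2b term in the scatter is the regularization term
        pi = Nk / n
        mu = (r * x[:, None]).sum(axis=0) / Nk
        S = (r * (x[:, None] - mu) ** 2).sum(axis=0)
        var = (S + 2.0 * b) / (Nk + 2.0 * (a0 + 1.0))
        # adapt the hyperparameter after each EM cycle (illustrative update)
        b = a0 * var.mean()
    return pi, mu, var
```

The regularizing effect is visible in the variance update: even if a component collapses onto a single data point (`Nk` small, `S` near zero), the `2*b` term keeps its variance bounded away from zero, which is the kind of overfitting the Bayesian prior is meant to prevent.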

Year:  2000        PMID: 11110132     DOI: 10.1162/089976600300014890

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


  1 in total

1.  Estimating Simultaneous Equation Models through an Entropy-Based Incremental Variational Bayes Learning Algorithm.

Authors:  Rocío Hernández-Sanjaime; Martín González; Antonio Peñalver; Jose J López-Espín
Journal:  Entropy (Basel)       Date:  2021-03-24       Impact factor: 2.524


北京卡尤迪生物科技股份有限公司 (Beijing Coyote Bioscience Co., Ltd.) © 2022-2023.