
Similarities of error regularization, sigmoid gain scaling, target smoothing, and training with jitter.

R Reed, R J Marks, S Oh.

Abstract

The generalization performance of feedforward layered perceptrons can, in many cases, be improved by smoothing the target via convolution, regularizing the training error with a smoothing constraint, decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or adding noise (i.e., jitter) to the input training data. In certain important cases, these procedures yield highly similar results, although at different computational costs. Training with jitter, for example, requires significantly more computation than sigmoid scaling.
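Two of the techniques named in the abstract can be sketched in a few lines: adding jitter means perturbing each training input with zero-mean noise, while sigmoid gain scaling means reducing the slope of the unit nonlinearity, which smooths the unit's response much as jitter does on average. The following is a minimal NumPy illustration; the function names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    """Logistic sigmoid with an adjustable gain (slope) parameter."""
    return 1.0 / (1.0 + np.exp(-gain * x))

def jittered_batch(X, sigma, rng):
    """Training with jitter: add zero-mean Gaussian noise to each input."""
    return X + rng.normal(0.0, sigma, size=X.shape)

rng = np.random.default_rng(0)
X = np.array([[0.5, -1.0], [2.0, 0.25]])

# Jittered copy of the batch, same shape as the original inputs.
Xj = jittered_batch(X, sigma=0.1, rng=rng)

# Lower gain flattens the sigmoid, pulling outputs toward 0.5;
# higher gain steepens it, pushing outputs toward 0 or 1.
low_gain = sigmoid(X, gain=0.5)
high_gain = sigmoid(X, gain=2.0)
```

In this sketch, every entry of `low_gain` lies closer to 0.5 than the corresponding entry of `high_gain`, showing the smoothing effect of gain reduction that the abstract relates to jitter; actual training would apply `jittered_batch` to each minibatch during gradient descent.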

Year:  1995        PMID: 18263340     DOI: 10.1109/72.377960

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  1 in total

1.  Noise-injected neural networks show promise for use on small-sample expression data.

Authors:  Jianping Hua; James Lowey; Zixiang Xiong; Edward R Dougherty
Journal:  BMC Bioinformatics       Date:  2006-05-31       Impact factor: 3.169

