Abstract
The generalization performance of feedforward layered perceptrons can, in many cases, be improved either by smoothing the target via convolution, regularizing the training error with a smoothing constraint, decreasing the gain (i.e., slope) of the sigmoid nonlinearities, or adding noise (i.e., jitter) to the input training data. In certain important cases, these procedures yield highly similar results, although at different costs. Training with jitter, for example, requires significantly more computation than sigmoid scaling.
Year: 1995 PMID: 18263340 DOI: 10.1109/72.377960
Source DB: PubMed Journal: IEEE Trans Neural Netw ISSN: 1045-9227
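
Illustrative sketch (not from the paper): the code below contrasts two of the procedures the abstract compares, sigmoid gain scaling and training with input jitter, on a single hidden layer. All names (gain, sigma_jitter, the weight shapes) are assumptions chosen for illustration, and the jitter loop shows why that procedure costs more: each input must be re-propagated many times with fresh noise, while gain scaling changes only one multiplication.

    # A minimal sketch, assuming a one-layer sigmoidal network.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z, gain=1.0):
        # Decreasing `gain` flattens the slope of the nonlinearity
        # (the "sigmoid scaling" procedure in the abstract).
        return 1.0 / (1.0 + np.exp(-gain * z))

    def forward(x, W, b, gain=1.0):
        return sigmoid(x @ W + b, gain=gain)

    x = rng.normal(size=(4, 3))   # a small batch of inputs (hypothetical)
    W = rng.normal(size=(3, 2))
    b = np.zeros(2)

    # Procedure 1: sigmoid gain scaling -- one forward pass, gain < 1.
    y_scaled = forward(x, W, b, gain=0.5)

    # Procedure 2: training with jitter -- add input noise on each
    # presentation and average; many passes per example, hence costlier.
    sigma_jitter = 0.1            # noise level; a tunable assumption
    y_jittered = np.mean(
        [forward(x + sigma_jitter * rng.normal(size=x.shape), W, b)
         for _ in range(100)],
        axis=0,
    )

Note that sigmoid(gain * (x @ W)) equals sigmoid(x @ (gain * W)), so lowering the gain is equivalent to shrinking the weights, which smooths the network's mapping, much as averaging over jittered inputs does.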