| Literature DB >> 8697228 |
K. R. Müller, M. Finke, N. Murata, K. Schulten, S. Amari.
Abstract
The universal asymptotic scaling laws proposed by Amari et al. are studied in large-scale simulations on a CM-5. Small stochastic multilayer feedforward networks trained with backpropagation are investigated. For a large number of training patterns t, the asymptotic generalization error scales as 1/t, as predicted. For a medium range of t, a faster 1/t² scaling is observed; this effect is explained by higher-order corrections of the likelihood expansion. It is shown that for small t the scaling law changes drastically when the network undergoes a transition from strong overfitting to effective learning.
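As a rough illustration of how such scaling exponents can be estimated from measured learning curves, the sketch below fits gen_err ≈ c·t^(−α) to synthetic data by linear regression in log-log coordinates. This is a minimal sketch, not the paper's procedure; the data values and variable names are hypothetical.

```python
import numpy as np

# Hypothetical learning-curve data: generalization error measured at
# increasing numbers of training patterns t. In the paper such values come
# from CM-5 simulations; here they are synthetic, following a 1/t law.
t = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
rng = np.random.default_rng(0)
gen_err = 0.5 / t * (1.0 + rng.normal(0.0, 0.02, t.size))

# Fit gen_err = c * t**(-alpha) by least squares in log-log coordinates.
# alpha near 1 matches the asymptotic 1/t law; alpha near 2 would indicate
# the faster medium-range regime reported in the abstract.
slope, log_c = np.polyfit(np.log(t), np.log(gen_err), 1)
print(f"estimated scaling exponent alpha = {-slope:.2f}")
```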
Year: 1996 PMID: 8697228 DOI: 10.1162/neco.1996.8.5.1085
Source DB: PubMed Journal: Neural Comput ISSN: 0899-7667 Impact factor: 2.026