
Improving generalization performance using double backpropagation.

H. Drucker, Y. Le Cun.

Abstract

In order to generalize from a training set to a test set, it is desirable that small changes in the input space of a pattern do not change the output components. This can be done by forcing this behavior as part of the training algorithm. This is done in double backpropagation by forming an energy function that is the sum of the normal energy term found in backpropagation and an additional term that is a function of the Jacobian. Significant improvement is shown with different architectures and different test sets, especially with architectures that had previously been shown to have very good performance when trained using backpropagation. It is shown that double backpropagation, as compared to backpropagation, creates weights that are smaller, thereby causing the output of the neurons to spend more time in the linear region.
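The idea in the abstract can be sketched in a few lines: the training energy is the usual backpropagation error plus a term penalizing the squared gradient of that error with respect to the inputs, so that small input perturbations change the output as little as possible. Below is a minimal illustrative sketch for a single sigmoid neuron; the names, the penalty weight `lam`, and the use of finite differences for the weight update (instead of a true second backward pass) are assumptions for illustration, not the paper's implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def double_backprop_loss(w, x, t, lam=0.1):
    """Combined energy: standard backprop error plus lam times the
    squared norm of the error's gradient w.r.t. the input (the
    Jacobian-based term described in the abstract). lam is an
    illustrative hyperparameter, not a value from the paper."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(z)
    err = 0.5 * (y - t) ** 2                 # normal backprop energy
    dy_dz = y * (1.0 - y)                    # sigmoid derivative
    # dE/dx_i = (y - t) * sigma'(z) * w_i : sensitivity of the error
    # to a small change in input component x_i
    grad_x = [(y - t) * dy_dz * wi for wi in w]
    penalty = sum(g * g for g in grad_x)     # input-gradient penalty
    return err + lam * penalty

def train_step(w, x, t, lr=0.5, lam=0.1, eps=1e-6):
    """One gradient-descent step on the combined energy. For brevity
    this uses central finite differences; double backpropagation
    proper differentiates the penalty analytically via a second
    backward pass."""
    grad = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g = (double_backprop_loss(wp, x, t, lam) -
             double_backprop_loss(wm, x, t, lam)) / (2.0 * eps)
        grad.append(g)
    return [wi - lr * gi for wi, gi in zip(w, grad)]

# Toy training run on a single pattern (illustrative values).
w = [0.8, -0.4]
x, t = [1.0, 0.5], 1.0
for _ in range(200):
    w = train_step(w, x, t)
```

Because the penalty term shrinks the input gradient, minimizing the combined energy tends to drive the weights toward smaller magnitudes, consistent with the abstract's observation that the neurons spend more time in their linear region.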

Year:  1992        PMID: 18276495     DOI: 10.1109/72.165600

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  2 in total

1.  A System-Driven Taxonomy of Attacks and Defenses in Adversarial Machine Learning.

Authors:  Koosha Sadeghi; Ayan Banerjee; Sandeep K S Gupta
Journal:  IEEE Trans Emerg Top Comput Intell       Date:  2020-05-25

2.  BioGD: Bio-inspired robust gradient descent.

Authors:  Ilona Kulikovskikh; Sergej Prokhorov; Tomislav Lipić; Tarzan Legović; Tomislav Šmuc
Journal:  PLoS One       Date:  2019-07-05       Impact factor: 3.240

