
Convergence of gradient method with momentum for two-layer feedforward neural networks.

Naimin Zhang, Wei Wu, Gaofeng Zheng.   

Abstract

A gradient method with momentum for training two-layer feedforward neural networks is considered. The learning rate is a constant and the momentum factor is an adaptive variable. Both weak and strong convergence results are proved, as well as convergence rates for the error function and for the weights. Compared with existing convergence results, ours are more general in that we do not require the error function to be quadratic.
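The update scheme described in the abstract can be sketched in numpy. This is a hedged illustration only: the network is a two-layer feedforward net (tanh hidden layer, linear output) with a constant learning rate `eta` and a momentum factor `tau` that varies per step. The specific adaptive rule for `tau` below — scaling a base factor by the current gradient norm — is a heuristic stand-in, not the rule analyzed in the paper.

```python
import numpy as np

# Toy regression data (assumed for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # 50 samples, 3 inputs
y = np.tanh(X @ np.array([1.0, -2.0, 0.5]))      # target function

# Two-layer network: tanh hidden layer, linear output.
W1 = rng.normal(scale=0.5, size=(3, 5))
W2 = rng.normal(scale=0.5, size=(5,))

def forward(W1, W2, X):
    H = np.tanh(X @ W1)
    return H, H @ W2

def error(W1, W2):
    _, out = forward(W1, W2, X)
    return 0.5 * np.mean((out - y) ** 2)

eta = 0.1              # constant learning rate
tau0 = 0.5             # base momentum factor
dW1 = np.zeros_like(W1)
dW2 = np.zeros_like(W2)

e0 = error(W1, W2)     # error before training

for k in range(500):
    H, out = forward(W1, W2, X)
    r = (out - y) / len(y)               # residual of mean-squared error
    g2 = H.T @ r                          # gradient w.r.t. W2
    gH = np.outer(r, W2) * (1 - H ** 2)   # backprop through tanh
    g1 = X.T @ gH                         # gradient w.r.t. W1
    gnorm = np.sqrt((g1 ** 2).sum() + (g2 ** 2).sum())
    # Adaptive momentum factor (heuristic, not the paper's exact rule).
    tau = tau0 * min(1.0, gnorm)
    # Momentum update: new increment = -eta * grad + tau * old increment.
    dW1 = -eta * g1 + tau * dW1
    dW2 = -eta * g2 + tau * dW2
    W1 += dW1
    W2 += dW2

e1 = error(W1, W2)     # error after training; should have decreased
```

The weak-convergence claim corresponds to the gradient norm tending to zero along the iterates; the sketch only checks the simpler fact that the error function decreases over training.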


Year:  2006        PMID: 16566479     DOI: 10.1109/TNN.2005.863460

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  2 in total

1.  Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.

Authors:  Huisheng Zhang; Ying Zhang; Dongpo Xu; Xiaodong Liu
Journal:  Cogn Neurodyn       Date:  2015-01-01       Impact factor: 5.082

2.  Convergence of batch gradient learning with smoothing regularization and adaptive momentum for neural networks.

Authors:  Qinwei Fan; Wei Wu; Jacek M Zurada
Journal:  Springerplus       Date:  2016-03-08
