
Parallel nonlinear optimization techniques for training neural networks.

P H Phua, Daohua Ming.

Abstract

In this paper, we propose the use of parallel quasi-Newton (QN) optimization techniques to improve the rate of convergence of the training process for neural networks. The parallel algorithms are developed using self-scaling quasi-Newton (SSQN) methods. At the beginning of each iteration, a set of parallel search directions is generated. Each of these directions is selectively chosen from a representative class of QN methods. Inexact line searches are then carried out to estimate the minimum point along each search direction. The proposed parallel algorithms are tested over a set of nine benchmark problems. Computational results show that the proposed algorithms outperform other existing methods evaluated over the same set of test problems.
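
To make the abstract's procedure concrete, the sketch below (plain Python/NumPy, not the authors' code) illustrates one iteration of a parallel self-scaling quasi-Newton step under stated assumptions: several candidate directions are formed from differently scaled inverse-Hessian approximations, an inexact (Armijo backtracking) line search is run along each, and the best resulting point is kept before a standard BFGS update. The toy quadratic objective, the fixed scaling factors, and all function names are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of one "parallel self-scaling quasi-Newton" iteration.
# The objective stands in for a network's training loss; the scaling factors
# and helper names are assumptions made for illustration only.
import numpy as np

def objective(w):
    # Toy quadratic loss standing in for a neural-network training loss.
    A = np.diag([1.0, 10.0, 100.0])
    return 0.5 * w @ A @ w

def gradient(w):
    A = np.diag([1.0, 10.0, 100.0])
    return A @ w

def armijo_line_search(w, d, f, g, c1=1e-4, shrink=0.5, max_iter=30):
    # Inexact line search: backtrack until the Armijo condition holds.
    alpha, f0, slope = 1.0, f(w), g(w) @ d
    for _ in range(max_iter):
        if f(w + alpha * d) <= f0 + c1 * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha

def ssqn_parallel_step(w, H, scalings=(0.5, 1.0, 2.0)):
    # One iteration: try several self-scaled inverse-Hessian approximations
    # "in parallel" (sequentially here, for clarity) and keep the best point.
    g = gradient(w)
    candidates = []
    for tau in scalings:
        d = -(tau * H) @ g                      # scaled quasi-Newton direction
        alpha = armijo_line_search(w, d, objective, gradient)
        w_new = w + alpha * d
        candidates.append((objective(w_new), w_new, d, alpha))
    f_best, w_best, d_best, alpha_best = min(candidates, key=lambda c: c[0])

    # Standard BFGS update of the inverse-Hessian approximation at the winner.
    s = alpha_best * d_best
    y = gradient(w_best) - g
    sy = s @ y
    if sy > 1e-12:
        rho = 1.0 / sy
        I = np.eye(len(w))
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
    return w_best, H

if __name__ == "__main__":
    w, H = np.array([1.0, 1.0, 1.0]), np.eye(3)
    for _ in range(20):
        w, H = ssqn_parallel_step(w, H)
    print("final loss:", objective(w))

In the actual SSQN methods, the scaling factor is typically computed adaptively from curvature information rather than taken from a fixed set as above; the fixed set here is only meant to show how multiple candidate directions can be evaluated side by side.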

Year:  2003        PMID: 18244591     DOI: 10.1109/TNN.2003.820670

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  2 in total

1.  A Computational Method for Optimizing Experimental Environments for Phellinus igniarius via Genetic Algorithm and BP Neural Network.

Authors:  Zhongwei Li; Beibei Sun; Yuezhen Xin; Xun Wang; Hu Zhu
Journal:  Biomed Res Int       Date:  2016-08-09       Impact factor: 3.411

2.  Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory.

Authors:  Lei Si; Zhongbin Wang; Xinhua Liu; Chao Tan; Jing Xu; Kehong Zheng
Journal:  Sensors (Basel)       Date:  2015-11-13       Impact factor: 3.576

