
A Maximally Split and Relaxed ADMM for Regularized Extreme Learning Machines.

Xiaoping Lai, Jiuwen Cao, Xiaofeng Huang, Tianlei Wang, Zhiping Lin.   

Abstract

One of the salient features of the extreme learning machine (ELM) is its fast learning speed. In a big data environment, however, the ELM still suffers from an overly heavy computational load due to the high dimensionality and the large amount of data. Using the alternating direction method of multipliers (ADMM), a convex model-fitting problem can be split into a set of concurrently executable subproblems, each involving only a subset of the model coefficients. By maximally splitting across the coefficients and incorporating a novel relaxation technique, a maximally split and relaxed ADMM (MS-RADMM), along with a scalarwise implementation, is developed for the regularized ELM (RELM). The convergence conditions and convergence rate of the MS-RADMM are established: the algorithm converges linearly, with a smaller convergence ratio than the unrelaxed maximally split ADMM. The optimal parameter values of the MS-RADMM are obtained, and a fast parameter-selection scheme is provided. Experiments on ten benchmark classification data sets demonstrate the fast convergence and parallelism of the MS-RADMM. For performance evaluation, complexity comparisons with the matrix-inversion-based method are provided in terms of the numbers of multiplication and addition operations, the computation time, and the number of memory cells.
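The RELM training problem underlying the abstract is a ridge-regularized least-squares fit of the hidden-layer output matrix to the targets. As background, the following is a minimal sketch of the standard two-block ADMM for that problem, not the paper's maximally split and relaxed variant; the names `H` (hidden-layer outputs), `t` (targets), `lam`, and `rho` are illustrative assumptions.

```python
import numpy as np

def admm_ridge(H, t, lam=0.5, rho=10.0, iters=2000):
    """Two-block ADMM for min_b ||H b - t||^2 + lam * ||b||^2.

    Splits the objective as f(x) = ||H x - t||^2, g(z) = lam * ||z||^2
    with the consensus constraint x = z (u is the scaled dual variable).
    """
    n = H.shape[1]
    # The x-update matrix is constant, so it could be factored once.
    A = 2.0 * H.T @ H + rho * np.eye(n)
    Ht2 = 2.0 * H.T @ t
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(A, Ht2 + rho * (z - u))   # least-squares block
        z = rho * (x + u) / (2.0 * lam + rho)         # ridge shrinkage block
        u = u + x - z                                 # scaled dual ascent
    return z

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 8))
t = rng.standard_normal(50)
beta = admm_ridge(H, t, lam=0.5)

# Closed-form RELM solution for comparison: (H^T H + lam I)^{-1} H^T t
beta_star = np.linalg.solve(H.T @ H + 0.5 * np.eye(8), H.T @ t)
```

At a fixed point the x- and z-updates combine to the ridge normal equations, so the iterates converge linearly to `beta_star`; the paper's MS-RADMM instead splits across individual coefficients (one scalar subproblem each) and adds a relaxation step to shrink the convergence ratio.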


Year:  2019        PMID: 31398134     DOI: 10.1109/TNNLS.2019.2927385

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


  1 in total

1.  Tensor based stacked fuzzy neural network for efficient data regression.

Authors:  Jie Li; Jiale Hu; Guoliang Zhao; Sharina Huang; Yang Liu
Journal:  Soft comput       Date:  2022-08-17       Impact factor: 3.732

