
Sparse LSSVM in Primal Using Cholesky Factorization for Large-Scale Problems.

Shuisheng Zhou.   

Abstract

For support vector machine (SVM) learning, the least squares SVM (LSSVM) derived in the dual space (D-LSSVM) is a widely used model because it has an explicit solution. One obvious limitation of this model is that its solution lacks sparseness, which prevents it from training large-scale problems efficiently. In this paper, we derive an equivalent LSSVM model in the primal space (P-LSSVM) via the representer theorem and prove that P-LSSVM can be solved exactly at some sparse solutions for problems with low-rank kernel matrices. Two algorithms are proposed for finding the sparse (approximate) solution of P-LSSVM by Cholesky factorization. One is based on decomposing the kernel matrix K as PPᵀ, where the best low-rank matrix P is obtained approximately by pivoted Cholesky factorization. The other solves P-LSSVM by approximating the Cholesky factorization of the Hessian matrix with a rank-one update scheme. For linear learning problems, theoretical analysis and experimental results support that P-LSSVM gives the sparsest solutions among all SVM learners. Experimental results on some large-scale nonlinear training problems show that our algorithms based on P-LSSVM converge to acceptable test accuracies at very sparse solutions, with sparsity levels below 1% and even as low as 0.01%. Hence, our algorithms are a better choice for large-scale training problems.
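The low-rank step the abstract describes, approximating the kernel matrix K by PPᵀ via pivoted Cholesky, can be sketched as follows. This is a minimal illustration of greedy pivoted Cholesky on a precomputed kernel matrix, not the authors' implementation (which would typically work from kernel evaluations on demand); the RBF kernel and all variable names are assumptions for the demo.

```python
import numpy as np

def pivoted_cholesky(K, max_rank, tol=1e-8):
    """Greedy pivoted Cholesky: return P (n x r, r <= max_rank) with K ~= P @ P.T.

    At each step the pivot is the index with the largest residual
    diagonal, so the most "unexplained" point is factored out first.
    """
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()      # residual diagonal of K - P P^T
    P = np.zeros((n, max_rank))
    for r in range(max_rank):
        i = int(np.argmax(d))                # pivot index
        if d[i] <= tol:                      # residual negligible: stop early
            return P[:, :r]
        # Residual column i, then normalize so that P[i, r] = sqrt(d[i])
        col = K[:, i] - P[:, :r] @ P[i, :r]
        P[:, r] = col / np.sqrt(d[i])
        d -= P[:, r] ** 2                    # update residual diagonal
    return P

# Demo: an RBF kernel on 1-D points is numerically low rank
rng = np.random.default_rng(0)
x = rng.normal(size=200)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
P = pivoted_cholesky(K, max_rank=20)
err = np.linalg.norm(K - P @ P.T) / np.linalg.norm(K)
```

Because the Gaussian kernel's eigenvalues decay rapidly, a rank far below n already reproduces K to small relative error, which is what makes the sparse P-LSSVM solution in the paper possible.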


Year:  2015        PMID: 25966482     DOI: 10.1109/TNNLS.2015.2424684

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


Cited by: 1 in total

1.  Efficient Multiple Kernel Learning Algorithms Using Low-Rank Representation.

Authors:  Wenjia Niu; Kewen Xia; Baokai Zu; Jianchuan Bai
Journal:  Comput Intell Neurosci       Date:  2017-08-22
