
Learning bounds for kernel regression using effective data dimensionality.

Tong Zhang

Abstract

Kernel methods can embed finite-dimensional data into infinite-dimensional feature spaces. In spite of the large underlying feature dimensionality, kernel methods can achieve good generalization ability. This observation is often wrongly interpreted, and it has been used to argue that kernel learning can magically avoid the "curse-of-dimensionality" phenomenon encountered in statistical estimation problems. This letter shows that although, using a kernel representation, one can embed data into an infinite-dimensional feature space, the effective dimensionality of this embedding, which determines the learning complexity of the underlying kernel machine, is usually small. In particular, we introduce an algebraic definition of a scale-sensitive effective dimension associated with a kernel representation. Based on this quantity, we derive upper bounds on the generalization performance of some kernel regression methods. Moreover, we show that the resulting convergence rates are optimal under various circumstances.
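The abstract's central quantity, a scale-sensitive effective dimension, can be illustrated numerically. The sketch below uses the standard trace-based definition d(λ) = Σᵢ μᵢ/(μᵢ + λ), where the μᵢ are eigenvalues of the (normalized) kernel Gram matrix; this is a common formulation of effective dimension in the kernel-regression literature and is assumed here for illustration, not taken verbatim from the paper. The point it demonstrates matches the abstract's claim: even though the Gaussian kernel's feature space is infinite-dimensional, d(λ) is small at moderate regularization scales λ.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) Gram matrix from pairwise squared distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * d2)

def effective_dimension(K, lam):
    # d(lam) = sum_i mu_i / (mu_i + lam), with mu_i the eigenvalues of K/n.
    n = K.shape[0]
    mu = np.linalg.eigvalsh(K / n)
    mu = np.clip(mu, 0.0, None)  # guard against tiny negative round-off
    return float(np.sum(mu / (mu + lam)))

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))   # 200 points in R^3
K = rbf_kernel(X)

for lam in (1e-1, 1e-2, 1e-3):
    # d(lam) stays far below n = 200 (and below the infinite feature
    # dimension) despite the infinite-dimensional RBF embedding.
    print(f"lambda = {lam:g}:  d(lambda) = {effective_dimension(K, lam):.2f}")
```

As λ shrinks, d(λ) grows toward the rank of the Gram matrix, which is exactly the scale sensitivity the abstract refers to: the learning complexity is governed by how fast the kernel's eigenvalue spectrum decays, not by the nominal feature dimension.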

MeSH:

Year:  2005        PMID: 15992491     DOI: 10.1162/0899766054323008

Source DB:  PubMed          Journal:  Neural Comput        ISSN: 0899-7667            Impact factor:   2.026


  3 in total

1.  A PARTIALLY LINEAR FRAMEWORK FOR MASSIVE HETEROGENEOUS DATA.

Authors:  Tianqi Zhao; Guang Cheng; Han Liu
Journal:  Ann Stat       Date:  2016-07-07       Impact factor: 4.028

2.  Semi-Supervised Minimum Error Entropy Principle with Distributed Method.

Authors:  Baobin Wang; Ting Hu
Journal:  Entropy (Basel)       Date:  2018-12-14       Impact factor: 2.524

3.  The Data Efficiency of Deep Learning Is Degraded by Unnecessary Input Dimensions.

Authors:  Vanessa D'Amario; Sanjana Srivastava; Tomotake Sasaki; Xavier Boix
Journal:  Front Comput Neurosci       Date:  2022-01-31       Impact factor: 2.380

