
OPTIMAL COMPUTATIONAL AND STATISTICAL RATES OF CONVERGENCE FOR SPARSE NONCONVEX LEARNING PROBLEMS.

Zhaoran Wang, Han Liu, Tong Zhang.

Abstract

We provide a theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall into this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path-following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution attained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods that only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. Our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of the nonconvex penalty.
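The abstract describes the approach only at a high level, so the following is a minimal, hypothetical Python sketch of the general idea: a warm-started proximal gradient scheme run over a geometrically decreasing sequence of regularization parameters. It uses least-squares loss with the MCP penalty as a concrete stand-in for a nonconvex regularizer; the function names (mcp_prox, path_following_mcp) and all parameters (gamma, eta, max_inner, lam_target) are assumptions for illustration, not the exact algorithm or tuning analyzed in the paper.

import numpy as np

def mcp_prox(z, lam, gamma, step):
    # Closed-form ("firm") thresholding for the MCP penalty after a gradient
    # step of size `step`; requires step < gamma so the scaled subproblem is convex.
    soft = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return np.where(np.abs(z) <= gamma * lam, soft / (1.0 - step / gamma), z)

def path_following_mcp(X, y, lam_target, gamma=3.0, eta=0.9, max_inner=200, tol=1e-6):
    # Illustrative sketch only: warm-started proximal gradient over a
    # geometrically decreasing lambda path, not the paper's exact method.
    n, d = X.shape
    lip = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the least-squares gradient
    step = min(1.0 / lip, gamma / 2.0)         # keep the MCP prox well defined (step < gamma)
    lam = np.max(np.abs(X.T @ y)) / n          # lambda_max: beta = 0 solves the first stage
    beta = np.zeros(d)
    while lam > lam_target:
        lam = max(lam * eta, lam_target)       # next (smaller) regularization parameter
        for _ in range(max_inner):             # proximal gradient, warm-started at current beta
            grad = X.T @ (X @ beta - y) / n
            beta_new = mcp_prox(beta - step * grad, lam, gamma, step)
            converged = np.max(np.abs(beta_new - beta)) < tol
            beta = beta_new
            if converged:
                break
    return beta

# Toy usage on simulated sparse data (dimensions chosen arbitrarily).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 200))
beta_true = np.zeros(200); beta_true[:5] = 2.0
y = X @ beta_true + 0.1 * rng.standard_normal(100)
beta_hat = path_following_mcp(X, y, lam_target=0.05)
print(np.nonzero(beta_hat)[0][:10])

The key design point mirrored from the abstract is the warm start: each stage of the path reuses the previous stage's approximate solution as its initial point, so the whole path is computed with roughly the iteration cost usually spent on a single regularization parameter.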


Keywords:  Nonconvex regularized M-estimation; geometric computational rate; optimal statistical rate; path-following method

Year:  2014        PMID: 25544785      PMCID: PMC4276088          DOI: 10.1214/14-AOS1238

Source DB:  PubMed          Journal:  Ann Stat        ISSN: 0090-5364            Impact factor:   4.028


  9 in total

1.  Variable Selection using MM Algorithms.

Authors:  David R Hunter; Runze Li
Journal:  Ann Stat       Date:  2005       Impact factor: 4.028

2.  SparseNet: Coordinate Descent With Nonconvex Penalties.

Authors:  Rahul Mazumder; Jerome H Friedman; Trevor Hastie
Journal:  J Am Stat Assoc       Date:  2011       Impact factor: 5.033

3.  COORDINATE DESCENT ALGORITHMS FOR NONCONVEX PENALIZED REGRESSION, WITH APPLICATIONS TO BIOLOGICAL FEATURE SELECTION.

Authors:  Patrick Breheny; Jian Huang
Journal:  Ann Appl Stat       Date:  2011-01-01       Impact factor: 2.083

4.  STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

Authors:  Jianqing Fan; Lingzhou Xue; Hui Zou
Journal:  Ann Stat       Date:  2014-06       Impact factor: 4.028

5.  OPTIMAL COMPUTATIONAL AND STATISTICAL RATES OF CONVERGENCE FOR SPARSE NONCONVEX LEARNING PROBLEMS.

Authors:  Zhaoran Wang; Han Liu; Tong Zhang
Journal:  Ann Stat       Date:  2014       Impact factor: 4.028

6.  Regularization Paths for Generalized Linear Models via Coordinate Descent.

Authors:  Jerome Friedman; Trevor Hastie; Rob Tibshirani
Journal:  J Stat Softw       Date:  2010       Impact factor: 6.440

7.  One-step Sparse Estimates in Nonconcave Penalized Likelihood Models.

Authors:  Hui Zou; Runze Li
Journal:  Ann Stat       Date:  2008-08-01       Impact factor: 4.028

8.  CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

Authors:  Lan Wang; Yongdai Kim; Runze Li
Journal:  Ann Stat       Date:  2013-10-01       Impact factor: 4.028

9.  Quantile Regression for Analyzing Heterogeneity in Ultra-high Dimension.

Authors:  Lan Wang; Yichao Wu; Runze Li
Journal:  J Am Stat Assoc       Date:  2012-06-11       Impact factor: 5.033

  11 in total

1.  Sparse and Low-rank Tensor Estimation via Cubic Sketchings.

Authors:  Botao Hao; Anru Zhang; Guang Cheng
Journal:  IEEE Trans Inf Theory       Date:  2020-03-23       Impact factor: 2.501

2.  Tighten after Relax: Minimax-Optimal Sparse PCA in Polynomial Time.

Authors:  Zhaoran Wang; Huanran Lu; Han Liu
Journal:  Adv Neural Inf Process Syst       Date:  2014

3.  GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING.

Authors:  Hongcheng Liu; Tao Yao; Runze Li
Journal:  Ann Stat       Date:  2016-04       Impact factor: 4.028

4.  Sparse PCA with Oracle Property.

Authors:  Quanquan Gu; Zhaoran Wang; Han Liu
Journal:  Adv Neural Inf Process Syst       Date:  2014

5.  Accelerated Path-following Iterative Shrinkage Thresholding Algorithm with Application to Semiparametric Graph Estimation.

Authors:  Tuo Zhao; Han Liu
Journal:  J Comput Graph Stat       Date:  2016-11-10       Impact factor: 2.302

6.  Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

Authors:  Hongcheng Liu; Tao Yao; Runze Li; Yinyu Ye
Journal:  Math Program       Date:  2017-02-10       Impact factor: 3.995

7.  Sample Average Approximation with Sparsity-Inducing Penalty for High-Dimensional Stochastic Programming.

Authors:  Hongcheng Liu; Xue Wang; Tao Yao; Runze Li; Yinyu Ye
Journal:  Math Program       Date:  2018-05-03       Impact factor: 3.995

8.  OPTIMAL COMPUTATIONAL AND STATISTICAL RATES OF CONVERGENCE FOR SPARSE NONCONVEX LEARNING PROBLEMS.

Authors:  Zhaoran Wang; Han Liu; Tong Zhang
Journal:  Ann Stat       Date:  2014       Impact factor: 4.028

9.  Challenges of Big Data Analysis.

Authors:  Jianqing Fan; Fang Han; Han Liu
Journal:  Natl Sci Rev       Date:  2014-06       Impact factor: 17.275

10.  DISTRIBUTED TESTING AND ESTIMATION UNDER SPARSE HIGH DIMENSIONAL MODELS.

Authors:  Heather Battey; Jianqing Fan; Han Liu; Junwei Lu; Ziwei Zhu
Journal:  Ann Stat       Date:  2018-05-03       Impact factor: 4.028

