
Sparsistency and Rates of Convergence in Large Covariance Matrix Estimation.

Clifford Lam, Jianqing Fan.

Abstract

This paper studies the sparsistency and rates of convergence for estimating sparse covariance and precision matrices based on penalized likelihood with nonconvex penalty functions. Here, sparsistency refers to the property that all parameters that are zero are actually estimated as zero with probability tending to one. Depending on the application, sparsity may occur a priori in the covariance matrix, its inverse, or its Cholesky decomposition. We study these three sparsity exploration problems under a unified framework with a general penalty function. We show that the rates of convergence for these problems under the Frobenius norm are of order (s_n log p_n / n)^{1/2}, where s_n is the number of nonzero elements, p_n is the size of the covariance matrix, and n is the sample size. This explicitly spells out that the contribution of high dimensionality is merely a logarithmic factor. The conditions on the rate at which the tuning parameter λ_n goes to 0 are made explicit and compared across different penalties. As a result, for the L_1 penalty, to guarantee both sparsistency and the optimal rate of convergence, the number of nonzero elements must be small: s_n' = O(p_n) at most, among the O(p_n^2) parameters, whether one estimates a sparse covariance or correlation matrix, a sparse precision or inverse correlation matrix, or a sparse Cholesky factor, where s_n' is the number of nonzero off-diagonal elements. With the SCAD or hard-thresholding penalty functions, by contrast, there is no such restriction.
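The contrast the abstract draws between the L_1 penalty and the SCAD or hard-thresholding penalties comes down to how each treats large entries. A minimal sketch of the three penalty functions, using the standard forms from the nonconcave-penalization literature (a = 3.7 is the conventional SCAD shape parameter; the function names here are illustrative, not from the paper):

```python
def l1_penalty(theta: float, lam: float) -> float:
    """L1 penalty: grows linearly without bound, so large entries stay
    biased -- the source of the s_n' = O(p_n) restriction in the abstract."""
    return lam * abs(theta)


def hard_penalty(theta: float, lam: float) -> float:
    """Hard-thresholding penalty: lam^2 - (|theta| - lam)^2 for |theta| < lam,
    and constant at lam^2 beyond the threshold."""
    t = abs(theta)
    return lam**2 - (t - lam)**2 if t < lam else lam**2


def scad_penalty(theta: float, lam: float, a: float = 3.7) -> float:
    """SCAD penalty: linear near zero, a quadratic spline in between,
    and constant at (a + 1) * lam^2 / 2 for |theta| > a * lam."""
    t = abs(theta)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
    return (a + 1) * lam**2 / 2
```

Because SCAD and hard thresholding flatten out beyond a threshold, large matrix entries incur no additional penalty-induced bias, which is consistent with the paper's finding that these penalties carry no restriction on the number of nonzero elements.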


Year:  2009        PMID: 21132082      PMCID: PMC2995610          DOI: 10.1214/09-AOS720

Source DB:  PubMed          Journal:  Ann Stat        ISSN: 0090-5364            Impact factor:   4.028


References:  4 in total

1.  Sparse inverse covariance estimation with the graphical lasso.

Authors:  Jerome Friedman; Trevor Hastie; Robert Tibshirani
Journal:  Biostatistics       Date:  2007-12-12       Impact factor: 5.899

2.  Nonparametric estimation of covariance structure in longitudinal data.

Authors:  P J Diggle; A P Verbyla
Journal:  Biometrics       Date:  1998-06       Impact factor: 2.571

3.  Network exploration via the adaptive lasso and SCAD penalties.

Authors:  Jianqing Fan; Yang Feng; Yichao Wu
Journal:  Ann Appl Stat       Date:  2009-06-01       Impact factor: 2.083

4.  One-step sparse estimates in nonconcave penalized likelihood models.

Authors:  Hui Zou; Runze Li
Journal:  Ann Stat       Date:  2008-08-01       Impact factor: 4.028

Cited by:  60 in total

1.  Doubly regularized estimation and selection in linear mixed-effects models for high-dimensional longitudinal data.

Authors:  Yun Li; Sijian Wang; Peter X-K Song; Naisyin Wang; Ling Zhou; Ji Zhu
Journal:  Stat Interface       Date:  2018-09-19       Impact factor: 0.582

2.  High-dimensional covariance matrix estimation in approximate factor models.

Authors:  Jianqing Fan; Yuan Liao; Martina Mincheva
Journal:  Ann Stat       Date:  2011-01-01       Impact factor: 4.028

3.  Penalized likelihood methods for estimation of sparse high-dimensional directed acyclic graphs.

Authors:  Ali Shojaie; George Michailidis
Journal:  Biometrika       Date:  2010-07-09       Impact factor: 2.445

4.  Nonparametric covariance model.

Authors:  Jianxin Yin; Zhi Geng; Runze Li; Hansheng Wang
Journal:  Stat Sin       Date:  2010       Impact factor: 1.261

5.  Regularized Structural Equation Modeling.

Authors:  Ross Jacobucci; Kevin J Grimm; John J McArdle
Journal:  Struct Equ Modeling       Date:  2016-04-12       Impact factor: 6.125

6.  Large covariance estimation through elliptical factor models.

Authors:  Jianqing Fan; Han Liu; Weichen Wang
Journal:  Ann Stat       Date:  2018-06-27       Impact factor: 4.028

7.  The cluster graphical lasso for improved estimation of Gaussian graphical models.

Authors:  Kean Ming Tan; Daniela Witten; Ali Shojaie
Journal:  Comput Stat Data Anal       Date:  2015-05       Impact factor: 1.681

8.  Regularized parameter estimation in high-dimensional gaussian mixture models.

Authors:  Lingyan Ruan; Ming Yuan; Hui Zou
Journal:  Neural Comput       Date:  2011-03-11       Impact factor: 2.026

9.  A sparse Ising model with covariates.

Authors:  Jie Cheng; Elizaveta Levina; Pei Wang; Ji Zhu
Journal:  Biometrics       Date:  2014-08-05       Impact factor: 2.571

10.  A sparse conditional Gaussian graphical model for analysis of genetical genomics data.

Authors:  Jianxin Yin; Hongzhe Li
Journal:  Ann Appl Stat       Date:  2011-12       Impact factor: 2.083

