Rahul Mazumder, Jerome H. Friedman, Trevor Hastie.
Abstract
We address the problem of sparse selection in linear models. A number of nonconvex penalties have been proposed in the literature for this purpose, along with a variety of convex-relaxation algorithms for finding good solutions. In this article we pursue a coordinate-descent approach for optimization, and study its convergence properties. We characterize the properties of penalties suitable for this approach, study their corresponding threshold functions, and describe a df-standardizing reparametrization that assists our pathwise algorithm. The MC+ penalty is ideally suited to this task, and we use it to demonstrate the performance of our algorithm. Certain technical derivations and experiments related to this article are included in the Supplementary Materials section.
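The threshold functions and coordinate-descent scheme mentioned in the abstract can be sketched concretely. The MC+ penalty's univariate solution is the "firm" threshold operator, which interpolates between soft thresholding (γ → ∞, the LASSO) and hard thresholding (γ → 1+, best-subset). The sketch below, assuming standardized predictors (mean-zero columns with squared norm n), uses illustrative function and variable names that are not from the authors' code; it is a minimal rendering of the general approach, not the paper's implementation.

```python
import numpy as np


def firm_threshold(z, lam, gamma):
    """MC+ (firm) threshold operator, for gamma > 1.

    Solves  min_b  0.5 * (z - b)**2 + P(b; lam, gamma),
    where P is the MC+ penalty. Returns 0 for |z| <= lam,
    z unchanged for |z| > gamma * lam, and a scaled soft
    threshold in between.
    """
    a = abs(z)
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return np.sign(z) * (a - lam) / (1.0 - 1.0 / gamma)
    return float(z)


def cd_mcplus(X, y, lam, gamma, n_sweeps=200):
    """Cyclic coordinate descent for MC+-penalized least squares.

    Assumes each column of X satisfies x_j.T @ x_j == n (standardized),
    so each coordinate update is a single firm-threshold step on the
    partial-residual correlation. Illustrative sketch only.
    """
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()          # residual y - X @ beta
    for _ in range(n_sweeps):
        for j in range(p):
            zj = X[:, j] @ r / n + beta[j]      # univariate fit at coord j
            bj = firm_threshold(zj, lam, gamma)
            r += X[:, j] * (beta[j] - bj)       # keep residual in sync
            beta[j] = bj
    return beta
```

For a path of solutions, one would typically run this over a decreasing grid of λ values with warm starts, as the abstract's pathwise algorithm suggests.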
Keywords: Degrees of freedom; LASSO; Nonconvex optimization; Regularization surface; Sparse regression; Variable selection
Year: 2011 PMID: 25580042 PMCID: PMC4286300 DOI: 10.1198/jasa.2011.tm09738
Source DB: PubMed Journal: J Am Stat Assoc ISSN: 0162-1459 Impact factor: 5.033