
A Path Algorithm for Constrained Estimation.

Hua Zhou, Kenneth Lange.

Abstract

Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
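The exact-penalty idea in the abstract can be seen on a one-dimensional toy problem (this is only a minimal sketch of the absolute-value penalty mechanism, not the authors' sweep-operator algorithm; the problem and function names below are invented for illustration): minimize (x − 2)² subject to x ≤ 1. Replacing the hard constraint by ρ·max(0, x − 1) yields a penalized minimizer that starts at the unconstrained solution x = 2, slides toward the constraint as ρ grows, and hits it exactly at the finite value ρ = 2, after which it stays there.

```python
# Toy illustration of the exact (absolute-value) penalty described in the
# abstract. Problem: minimize (x - 2)^2 subject to x <= 1; the constrained
# optimum is x* = 1. The hard constraint is replaced by rho * max(0, x - 1).

def exact_penalty_minimizer(rho):
    """Minimize (x - 2)^2 + rho * max(0, x - 1) over x (closed form).

    For x <= 1 the objective is (x - 2)^2, which decreases toward x = 2.
    For x > 1 the stationary point of (x - 2)^2 + rho * (x - 1) is
    x = 2 - rho / 2. Hence the minimizer is max(1, 2 - rho / 2).
    """
    return max(1.0, 2.0 - rho / 2.0)

# Trace the solution path as the penalty constant increases from 0:
# rho = 0 gives the unconstrained solution x = 2; the path moves linearly
# toward the constraint, reaches x = 1 exactly at rho = 2, and remains
# there for all larger rho -- a finite penalty constant recovers the
# constrained solution, unlike the classical quadratic penalty method.
path = [(rho, exact_penalty_minimizer(rho)) for rho in (0.0, 1.0, 2.0, 4.0)]
print(path)  # [(0.0, 2.0), (1.0, 1.5), (2.0, 1.0), (4.0, 1.0)]
```

In the article's setting the same phenomenon occurs in many dimensions: the path hits, slides along, and exits constraints as ρ increases, with the piecewise-linear segments computed via the sweep operator rather than in closed form.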


Keywords:  Exact penalty; Shape-restricted regression; l1 regularization

Year:  2013        PMID: 24039382      PMCID: PMC3772096          DOI: 10.1080/10618600.2012.681248

Source DB:  PubMed          Journal:  J Comput Graph Stat        ISSN: 1061-8600            Impact factor:   2.302


References:  1 in total

1.  Network-constrained regularization and variable selection for analysis of genomic data.

Authors:  Caiyan Li; Hongzhe Li
Journal:  Bioinformatics       Date:  2008-03-01       Impact factor: 6.937

Cited by:  3 in total

1.  Path Following in the Exact Penalty Method of Convex Programming.

Authors:  Hua Zhou; Kenneth Lange
Journal:  Comput Optim Appl       Date:  2015-07-01       Impact factor: 2.167

2.  A Generic Path Algorithm for Regularized Statistical Estimation.

Authors:  Hua Zhou; Yichao Wu
Journal:  J Am Stat Assoc       Date:  2014       Impact factor: 5.033

3.  Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

Authors:  Zhenqiu Liu; Fengzhu Sun; Dermot P McGovern
Journal:  BioData Min       Date:  2017-12-19       Impact factor: 2.522

