
A unified framework for sparse non-negative least squares using multiplicative updates and the non-negative matrix factorization problem.

Igor Fedorov; Alican Nalci; Ritwik Giri; Bhaskar D Rao; Truong Q Nguyen; Harinath Garudadri

Abstract

We study the sparse non-negative least squares (S-NNLS) problem. S-NNLS occurs naturally in a wide variety of applications where an unknown, non-negative quantity must be recovered from linear measurements. We present a unified framework for S-NNLS based on a rectified power exponential scale mixture prior on the sparse codes. We show that the proposed framework encompasses a large class of S-NNLS algorithms and provide a computationally efficient inference procedure based on multiplicative update rules. Such update rules are convenient for solving large sets of S-NNLS problems simultaneously, which is required in contexts like sparse non-negative matrix factorization (S-NMF). We provide theoretical justification for the proposed approach by showing that the local minima of the objective function being optimized are sparse and the S-NNLS algorithms presented are guaranteed to converge to a set of stationary points of the objective function. We then extend our framework to S-NMF, showing that our framework leads to many well known S-NMF algorithms under specific choices of prior and providing a guarantee that a popular subclass of the proposed algorithms converges to a set of stationary points of the objective function. Finally, we study the performance of the proposed approaches on synthetic and real-world data.
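The multiplicative update rules the abstract refers to generalize the classic Lee-Seung iteration for non-negative least squares (reference 1 below). As a minimal sketch of that baseline update, not the paper's full rectified power exponential scale mixture framework, the rule for min ||y - Ax||^2 subject to x >= 0 can be written as follows (function name and test data are illustrative):

```python
import numpy as np

def nnls_multiplicative(A, y, n_iter=5000, eps=1e-12):
    """Multiplicative-update NNLS: min ||y - A x||^2 s.t. x >= 0.

    Classic Lee-Seung style rule, valid when A and y are non-negative:
    each entry of x is rescaled by (A^T y) / (A^T A x), which keeps the
    iterates non-negative and monotonically decreases the objective.
    """
    _, n = A.shape
    x = np.full(n, 0.5)                # strictly positive initialization
    AtY = A.T @ y                      # precompute fixed quantities
    AtA = A.T @ A
    for _ in range(n_iter):
        x *= AtY / (AtA @ x + eps)     # elementwise multiplicative update
    return x

# usage: recover a sparse non-negative code from noiseless measurements
rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((20, 10)))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, 2.0]
y = A @ x_true
x_hat = nnls_multiplicative(A, y)
```

Because every factor in the update is non-negative, no explicit projection onto the feasible set is needed, which is what makes such rules convenient for solving many S-NNLS problems at once (e.g. all columns of the coefficient matrix in S-NMF).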


Keywords:  Dictionary learning; Non-negativity; Sparsity

Year:  2018        PMID: 31235988      PMCID: PMC6590072          DOI: 10.1016/j.sigpro.2018.01.001

Source DB:  PubMed          Journal:  Signal Processing        ISSN: 0165-1684            Impact factor:   4.662


References:  6 in total

1.  Learning the parts of objects by non-negative matrix factorization.

Authors:  D D Lee; H S Seung
Journal:  Nature       Date:  1999-10-21       Impact factor: 49.962

2.  Nonnegative matrix factorization for rapid recovery of constituent spectra in magnetic resonance chemical shift imaging of the brain.

Authors:  Paul Sajda; Shuyan Du; Truman R Brown; Radka Stoyanova; Dikoma C Shungu; Xiangling Mao; Lucas C Parra
Journal:  IEEE Trans Med Imaging       Date:  2004-12       Impact factor: 10.048

3.  Projected gradient methods for nonnegative matrix factorization.

Authors:  Chih-Jen Lin
Journal:  Neural Comput       Date:  2007-10       Impact factor: 2.026

4.  Nonnegative matrix factorization with the Itakura-Saito divergence: with application to music analysis.

Authors:  Cédric Févotte; Nancy Bertin; Jean-Louis Durrieu
Journal:  Neural Comput       Date:  2009-03       Impact factor: 2.026

5. 

Authors:  Robert Peharz; Franz Pernkopf
Journal:  Neurocomputing       Date:  2012-03-15       Impact factor: 5.719

6.  Dictionary learning algorithms for sparse representation.

Authors:  Kenneth Kreutz-Delgado; Joseph F Murray; Bhaskar D Rao; Kjersti Engan; Te-Won Lee; Terrence J Sejnowski
Journal:  Neural Comput       Date:  2003-02       Impact factor: 2.026

Cited by:  1 in total

1.  Rectified Gaussian Scale Mixtures and the Sparse Non-Negative Least Squares Problem.

Authors:  Alican Nalci; Igor Fedorov; Maher Al-Shoukairi; Thomas T Liu; Bhaskar D Rao
Journal:  IEEE Trans Signal Process       Date:  2018-04-06       Impact factor: 4.931

