Low-rank structure learning via nonconvex heuristic recovery.

Yue Deng, Qionghai Dai, Risheng Liu, Zengke Zhang, Sanqing Hu.   

Abstract

In this paper, we propose a nonconvex framework to learn the essential low-rank structure from corrupted data. Unlike traditional approaches, which directly use convex norms to measure sparsity, our method introduces more reasonable nonconvex measurements to enhance the sparsity of both the intrinsic low-rank structure and the sparse corruptions. We introduce, respectively, how to incorporate the widely used ℓp norm (0 < p < 1) and the log-sum term into the framework of low-rank structure learning. Although the resulting optimization problem is no longer convex, it can still be solved effectively by a majorization-minimization (MM)-type algorithm, in which the nonconvex objective function is iteratively replaced by a convex surrogate, so that the nonconvex problem finally falls into the general framework of reweighted approaches. We prove that the MM-type algorithm converges to a stationary point after successive iterations. The proposed model is applied to two typical problems: robust principal component analysis and low-rank representation. Experimental results on low-rank structure learning demonstrate that our nonconvex heuristic methods, especially the log-sum heuristic recovery algorithm, generally perform much better than convex-norm-based methods for data with both higher rank and denser corruptions.
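The MM step described in the abstract — majorizing the concave log-sum term by its tangent line so that each iteration reduces to a weighted convex problem — can be illustrated on a toy one-dimensional denoising instance. This is a minimal sketch of the generic reweighted scheme, not the paper's actual RPCA or low-rank representation algorithm; the function names, parameter values, and the denoising objective itself are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Entrywise soft-thresholding: the prox operator of a weighted l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logsum_denoise(y, lam=0.5, eps=0.1, iters=20):
    """Toy MM scheme for: minimize 0.5*||x - y||^2 + lam * sum(log(|x_i| + eps)).

    At each outer step the concave log-sum penalty is replaced by its
    linear (tangent) majorizer at the current iterate, which turns the
    subproblem into a weighted l1 problem with closed-form solution:
    weighted soft-thresholding. This is the classic reweighted-l1 pattern
    the abstract refers to.
    """
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)   # MM weights: gradient of log(|x| + eps)
        x = soft(y, lam * w)          # solve the convex surrogate exactly
    return x
```

Large entries get small weights and survive nearly unshrunk, while small entries get large weights and are driven exactly to zero, which is why the log-sum surrogate promotes sparsity more aggressively than a single convex ℓ1 pass.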

Year:  2013        PMID: 24808312     DOI: 10.1109/TNNLS.2012.2235082

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


Related articles:  7 in total

1.  Information transduction capacity reduces the uncertainties in annotation-free isoform discovery and quantification.

Authors:  Yue Deng; Feng Bao; Yang Yang; Xiangyang Ji; Mulong Du; Zhengdong Zhang; Meilin Wang; Qionghai Dai
Journal:  Nucleic Acids Res       Date:  2017-09-06       Impact factor: 16.971

2.  Differences help recognition: a probabilistic interpretation.

Authors:  Yue Deng; Yanyu Zhao; Yebin Liu; Qionghai Dai
Journal:  PLoS One       Date:  2013-06-03       Impact factor: 3.240

3.  Logsum using Garbled Circuits.

Authors:  José Portêlo; Bhiksha Raj; Isabel Trancoso
Journal:  PLoS One       Date:  2015-03-26       Impact factor: 3.240

4.  Adaptive distance metric learning for diffusion tensor image segmentation.

Authors:  Youyong Kong; Defeng Wang; Lin Shi; Steve C N Hui; Winnie C W Chu
Journal:  PLoS One       Date:  2014-03-20       Impact factor: 3.240

5.  Noise reduction of diffusion tensor images by sparse representation and dictionary learning.

Authors:  Youyong Kong; Yuanjin Li; Jiasong Wu; Huazhong Shu
Journal:  Biomed Eng Online       Date:  2016-01-13       Impact factor: 2.819

6.  PHOCOS: inferring multi-feature phenotypic crosstalk networks.

Authors:  Yue Deng; Steven J Altschuler; Lani F Wu
Journal:  Bioinformatics       Date:  2016-06-15       Impact factor: 6.937

7.  Structural Smoothing Low-Rank Matrix Restoration Based on Sparse Coding and Dual-Weighted Model.

Authors:  Jiawei Wu; Hengyou Wang
Journal:  Entropy (Basel)       Date:  2022-07-07       Impact factor: 2.738
