
Disentangling the Modes of Variation in Unlabelled Data.

Mengjiao Wang, Yannis Panagakis, Patrick Snape, Stefanos P Zafeiriou.   

Abstract

Statistical methods are of paramount importance in discovering the modes of variation in visual data. Principal Component Analysis (PCA) is probably the most prominent method for extracting a single mode of variation from data. In practice, however, several factors contribute to the appearance of visual objects, including pose, illumination, and deformation, to mention a few. To extract these modes of variation from visual data, several supervised methods have been developed, such as TensorFaces, which relies on multilinear (tensor) decomposition. The main drawback of such methods is that they require both labels for the modes of variation and the same number of samples under all modes of variation (e.g., the same face under different expressions, poses, etc.). Their applicability is therefore limited to well-organised data, usually captured in well-controlled conditions. In this paper, we propose a novel general multilinear matrix decomposition method that discovers the multilinear structure of possibly incomplete sets of visual data in an unsupervised setting (i.e., without the presence of labels). We also propose extensions of the method with sparsity and low-rank constraints in order to handle noisy data captured in unconstrained conditions. In addition, a graph-regularised variant of the method is developed to exploit available geometric or label information for some modes of variation. We demonstrate the applicability of the proposed method in several computer vision tasks, including Shape from Shading (SfS) (in the wild and with occlusion removal), expression transfer, and estimation of surface normals from images captured in the wild.
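As context for the abstract's contrast between PCA (a single mode of variation) and multilinear decompositions (multiple modes), the following is a minimal, illustrative PCA sketch in NumPy. It is not the paper's proposed method; the synthetic data and variable names are assumptions for demonstration only.

```python
import numpy as np

# Illustrative PCA sketch (not the paper's multilinear method).
# PCA recovers a single set of orthogonal directions -- one "mode of
# variation" -- that best explains the variance of vectorised samples.

rng = np.random.default_rng(0)

# Synthetic data: 100 samples in 20 dimensions, dominated by one
# planted direction of variation plus small isotropic noise.
direction = rng.normal(size=20)
direction /= np.linalg.norm(direction)
coeffs = rng.normal(scale=5.0, size=(100, 1))              # strong variation along `direction`
X = coeffs * direction + 0.1 * rng.normal(size=(100, 20))  # broadcasts to (100, 20)

# Centre the data; the right singular vectors of the centred matrix
# are the principal components (the extracted mode of variation).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# The leading component should align with the planted direction
# (up to sign), since that direction dominates the variance.
alignment = abs(Vt[0] @ direction)
print(alignment)  # close to 1.0
```

When several factors (pose, illumination, expression) vary simultaneously, a single such basis mixes them together, which is the limitation the paper's unsupervised multilinear decomposition addresses.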


Year:  2017        PMID: 29990016     DOI: 10.1109/TPAMI.2017.2783940

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Related articles:  2 in total

1.  A Cascade Attention Based Facial Expression Recognition Network by Fusing Multi-Scale Spatio-Temporal Features.

Authors:  Xiaoliang Zhu; Zili He; Liang Zhao; Zhicheng Dai; Qiaolai Yang
Journal:  Sensors (Basel)       Date:  2022-02-10       Impact factor: 3.576

2.  Hybrid Attention Cascade Network for Facial Expression Recognition.

Authors:  Xiaoliang Zhu; Shihao Ye; Liang Zhao; Zhicheng Dai
Journal:  Sensors (Basel)       Date:  2021-03-12       Impact factor: 3.576

