
Covariate shift adaptation for discriminative 3D pose estimation.

Makoto Yamada, Leonid Sigal, Michalis Raptis

Abstract

Discriminative, or (structured) prediction, methods have proved effective for a variety of problems in computer vision; a notable example is 3D monocular pose estimation. All methods to date, however, have relied on the assumption that training (source) and test (target) data come from the same underlying joint distribution. In many real-world cases, including standard data sets, this assumption is flawed. In the presence of training-set bias, learning yields a biased model whose performance degrades on the (target) test set. Under the covariate-shift assumption, we propose an unsupervised domain adaptation approach to address this problem. The approach takes the form of training-instance reweighting, where the weights are assigned based on the ratio of test and training marginal densities evaluated at each training sample. Learning with the resulting weighted training samples alleviates the bias in the learned models. We show the efficacy of our approach by proposing weighted variants of kernel regression (KR) and twin Gaussian processes (TGP). We show that our weighted variants outperform their unweighted counterparts and improve on the state-of-the-art performance on the public HumanEva data set.
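The reweighting idea in the abstract can be illustrated as importance-weighted kernel (ridge) regression. The sketch below is a minimal NumPy illustration, not the paper's implementation: it assumes the density-ratio weights w_i ≈ p_test(x_i)/p_train(x_i) have already been estimated (in practice this is done with a direct density-ratio estimator), and the weighted objective Σ_i w_i (y_i − f(x_i))² + λ‖f‖² then has the closed-form solution α = (WK + λI)⁻¹Wy.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_kernel_regression(X_tr, y_tr, X_te, weights, lam=1e-2, sigma=1.0):
    """Importance-weighted kernel ridge regression (illustrative sketch).

    Minimizes sum_i w_i * (y_i - f(x_i))^2 + lam * ||f||^2 over an RKHS,
    whose dual coefficients satisfy (W K + lam I) alpha = W y.
    `weights` holds the (pre-estimated) density ratios p_test/p_train.
    """
    K = gaussian_kernel(X_tr, X_tr, sigma)
    W = np.diag(weights)
    alpha = np.linalg.solve(W @ K + lam * np.eye(len(X_tr)), W @ y_tr)
    # Predict at test points using the learned dual coefficients.
    return gaussian_kernel(X_te, X_tr, sigma) @ alpha
```

With uniform weights this reduces to ordinary kernel ridge regression; non-uniform weights up-weight training samples that fall where the test marginal is dense, which is exactly how the bias in the learned model is alleviated.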

Mesh:

Year:  2014        PMID: 24356346     DOI: 10.1109/TPAMI.2013.123

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  2 in total

1.  Selective Transfer Machine for Personalized Facial Expression Analysis.

Authors:  Fernando De la Torre; Jeffrey F Cohn
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2016-03-28       Impact factor: 6.226

2.  Selective Transfer Machine for Personalized Facial Action Unit Detection.

Authors:  Wen-Sheng Chu; Fernando De la Torre; Jeffrey F Cohn
Journal:  Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit       Date:  2013
