
GANimation: Anatomically-aware Facial Animation from a Single Image.

Albert Pumarola; Antonio Agudo; Aleix M Martinez; Alberto Sanfeliu; Francesc Moreno-Noguer

Abstract

Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN [4], which conditions the GAN's generation process on images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe in a continuous manifold the anatomical facial movements that define a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a fully unsupervised strategy to train the model that only requires images annotated with their activated AUs, and we exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation shows that our approach goes beyond competing conditional generators, both in its capability to synthesize a much wider range of expressions ruled by anatomically feasible muscle movements and in its capacity to deal with images in the wild.
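The attention mechanism mentioned in the abstract works by having the generator regress both a color image and a per-pixel attention mask, which are then blended with the input photo so the network only synthesizes the regions that actually move. The sketch below illustrates that composition step under the convention that a mask value near 1 keeps the original pixel; the function and variable names are illustrative, not the paper's.

```python
import numpy as np

def compose_output(orig, color, attention):
    """Blend a generator's color regression with the input image using a
    per-pixel attention mask with values in [0, 1].

    Where `attention` is close to 1, the original pixel is kept, so only
    the expression-relevant regions (and not the background or lighting)
    need to be synthesized. This is a sketch of the attention-based
    composition described in the abstract, not the paper's exact code.
    """
    return attention * orig + (1.0 - attention) * color

# Toy example: a uniform gray "input photo" and a black "color regression".
orig = np.full((2, 2, 3), 0.5)
color = np.zeros((2, 2, 3))

# An all-ones mask passes the input through; an all-zeros mask
# returns the synthesized color image unchanged.
passthrough = compose_output(orig, color, np.ones((2, 2, 1)))
synthesized = compose_output(orig, color, np.zeros((2, 2, 1)))
```

With intermediate mask values the two images are linearly interpolated per pixel, which is what keeps the output robust to backgrounds the generator never has to reproduce.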


Keywords:  Action-Unit Condition; Face Animation; GANs

Year:  2018        PMID: 30465044      PMCID: PMC6240441          DOI: 10.1007/978-3-030-01249-6_50

Source DB:  PubMed          Journal:  Comput Vis ECCV


References:  2 in total

1.  StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks.

Authors:  Han Zhang; Tao Xu; Hongsheng Li; Shaoting Zhang; Xiaogang Wang; Xiaolei Huang; Dimitris N Metaxas
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2018-07-16       Impact factor: 6.226

2.  Compound facial expressions of emotion.

Authors:  Shichuan Du; Yong Tao; Aleix M Martinez
Journal:  Proc Natl Acad Sci U S A       Date:  2014-03-31       Impact factor: 11.205

Cited by:  4 in total

1.  Countering Malicious DeepFakes: Survey, Battleground, and Horizon.

Authors:  Felix Juefei-Xu; Run Wang; Yihao Huang; Qing Guo; Lei Ma; Yang Liu
Journal:  Int J Comput Vis       Date:  2022-05-04       Impact factor: 13.369

2.  Single Image Video Prediction with Auto-Regressive GANs.

Authors:  Jiahui Huang; Yew Ken Chia; Samson Yu; Kevin Yee; Dennis Küster; Eva G Krumhuber; Dorien Herremans; Gemma Roig
Journal:  Sensors (Basel)       Date:  2022-05-06       Impact factor: 3.847

3.  You can try without visiting: a comprehensive survey on virtually try-on outfits.

Authors:  Hajer Ghodhbani; Mohamed Neji; Imran Razzak; Adel M Alimi
Journal:  Multimed Tools Appl       Date:  2022-03-10       Impact factor: 2.577

4.  Self-Difference Convolutional Neural Network for Facial Expression Recognition.

Authors:  Leyuan Liu; Rubin Jiang; Jiao Huo; Jingying Chen
Journal:  Sensors (Basel)       Date:  2021-03-23       Impact factor: 3.576

