
AttGAN: Facial Attribute Editing by Only Changing What You Want.

Zhenliang He, Wangmeng Zuo, Meina Kan, Shiguang Shan, Xilin Chen.   

Abstract

Facial attribute editing aims to manipulate single or multiple attributes of a given face image, i.e., to generate a new face image with the desired attributes while preserving other details. Recently, generative adversarial networks (GANs) and encoder-decoder architectures have commonly been combined to handle this task with promising results. With an encoder-decoder architecture, facial attribute editing is achieved by decoding the latent representation of a given face conditioned on the desired attributes. Some existing methods attempt to establish an attribute-independent latent representation for further attribute editing. However, such an attribute-independence constraint on the latent representation is excessive, because it restricts the capacity of the latent representation and may cause information loss, leading to over-smooth or distorted generation. Instead of imposing constraints on the latent representation, in this work we propose applying an attribute classification constraint to the generated image, which simply guarantees the correct change of the desired attributes, i.e., to change what you want. Meanwhile, reconstruction learning is introduced to preserve the attribute-excluding details, in other words, to only change what you want. In addition, adversarial learning is employed for visually realistic editing. These three components cooperate with one another to form an effective framework for high-quality facial attribute editing, referred to as AttGAN. Furthermore, the proposed method is extended to attribute style manipulation in an unsupervised manner. Experiments on two wild datasets, CelebA and LFW, show that the proposed method outperforms the state of the art on realistic attribute editing with other facial details well preserved.
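The three components described above (attribute classification constraint, reconstruction learning, adversarial learning) can be sketched as a single generator objective. The following is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the tiny encoder/decoder, the loss weights, the WGAN-style adversarial term, and the function names (`attgan_generator_loss`, `cls_logits_fn`, `disc_fn`) are all illustrative.

```python
# Minimal sketch of an AttGAN-style generator objective (illustrative only;
# network sizes, loss weights, and names are assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ATTRS = 13  # e.g. the number of binary facial attributes being edited

class Encoder(nn.Module):
    """Encodes an image into a spatial latent map; note there is NO
    attribute-independence constraint on this representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 4, 2, 1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Decodes the latent map conditioned on a target attribute vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32 + N_ATTRS, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh())

    def forward(self, z, attrs):
        # Broadcast the attribute vector over the spatial latent map.
        a = attrs.view(attrs.size(0), -1, 1, 1)
        a = a.expand(-1, -1, z.size(2), z.size(3))
        return self.net(torch.cat([z, a], dim=1))

def attgan_generator_loss(enc, dec, cls_logits_fn, disc_fn, x, a_src, a_tgt,
                          lambda_rec=100.0, lambda_cls=10.0):
    """Combine the three AttGAN components into one generator loss.
    cls_logits_fn: attribute classifier returning logits for an image.
    disc_fn: discriminator returning a realism score per image."""
    z = enc(x)
    x_edit = dec(z, a_tgt)         # edit toward the desired attributes
    x_rec = dec(z, a_src)          # reconstruct with the source attributes
    # Reconstruction: preserve attribute-excluding details ("only change").
    loss_rec = F.l1_loss(x_rec, x)
    # Classification constraint on the *generated* image ("change what you want").
    loss_cls = F.binary_cross_entropy_with_logits(cls_logits_fn(x_edit), a_tgt)
    # Adversarial term for visual realism (WGAN-style sign convention assumed).
    loss_adv = -disc_fn(x_edit).mean()
    return loss_adv + lambda_cls * loss_cls + lambda_rec * loss_rec
```

The key design point the abstract argues for is visible here: the classifier constrains the *output image* `x_edit`, while the latent `z` itself is left unconstrained, so no capacity is sacrificed to an attribute-independence requirement.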

Year:  2019        PMID: 31107649     DOI: 10.1109/TIP.2019.2916751

Source DB:  PubMed          Journal:  IEEE Trans Image Process        ISSN: 1057-7149            Impact factor:   10.856


Related articles (5 in total)

1.  A survey on generative adversarial networks for imbalance problems in computer vision tasks.

Authors:  Vignesh Sampath; Iñaki Maurtua; Juan José Aguilar Martín; Aitor Gutierrez
Journal:  J Big Data       Date:  2021-01-29

2.  Countering Malicious DeepFakes: Survey, Battleground, and Horizon.

Authors:  Felix Juefei-Xu; Run Wang; Yihao Huang; Qing Guo; Lei Ma; Yang Liu
Journal:  Int J Comput Vis       Date:  2022-05-04       Impact factor: 13.369

3.  Attribution-Driven Explanation of the Deep Neural Network Model via Conditional Microstructure Image Synthesis.

Authors:  Shusen Liu; Bhavya Kailkhura; Jize Zhang; Anna M Hiszpanski; Emily Robertson; Donald Loveland; Xiaoting Zhong; T Yong-Jin Han
Journal:  ACS Omega       Date:  2022-01-07

4.  Deep models of superficial face judgments.

Authors:  Joshua C Peterson; Stefan Uddenberg; Thomas L Griffiths; Alexander Todorov; Jordan W Suchow
Journal:  Proc Natl Acad Sci U S A       Date:  2022-04-21       Impact factor: 12.779

5.  Fair Facial Attribute Classification via Causal Graph-Based Attribute Translation.

Authors:  Sunghun Kang; Gwangsu Kim; Chang D Yoo
Journal:  Sensors (Basel)       Date:  2022-07-14       Impact factor: 3.847

