
FSGANv2: Better Subject Agnostic Face Swapping and Reenactment.

Yuval Nirkin, Tal Hassner, Yosi Keller.   

Abstract

We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, we offer a subject agnostic swapping scheme that can be applied to pairs of faces without requiring training using those faces. We derive a novel iterative deep learning based approach for face reenactment which adjusts significant pose and expression variations that can be applied to a single image or a video sequence. For video sequences, we introduce continuous interpolation of the face views based on reenactment, Delaunay Triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network. Finally, we use a face blending network for seamless blending of the two faces while preserving the target skin color and lighting conditions. This network uses a novel Poisson blending loss combining Poisson optimization with a perceptual loss. We compare our approach to existing state-of-the-art systems and show our results to be both qualitatively and quantitatively superior. This work describes extensions of the FSGAN method, proposed in an earlier, conference version of our work [1], as well as additional experiments and results.
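The abstract's "continuous interpolation of the face views based on ... Delaunay Triangulation, and barycentric coordinates" refers to blending a small set of reenacted reference views according to where the target pose falls in pose space. A minimal NumPy sketch of that idea, for a single triangle of reference poses (the function names, the 2D pose parameterization, and the per-pixel blend are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def barycentric_weights(tri, p):
    """Barycentric coordinates of point p with respect to triangle tri (3x2).

    tri holds the 2D pose coordinates (e.g. yaw/pitch) of three reference
    views; p is the query pose. Weights sum to 1 and are non-negative when
    p lies inside the triangle.
    """
    a, b, c = tri
    # Solve p - a = w1*(b - a) + w2*(c - a) for (w1, w2).
    T = np.column_stack((b - a, c - a))          # 2x2 edge matrix
    w1, w2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - w1 - w2, w1, w2])

def interpolate_views(views, tri, pose):
    """Blend three reference face views by the barycentric weights of `pose`."""
    w = barycentric_weights(tri, pose)
    return sum(wi * v for wi, v in zip(w, views))
```

In the full method the pose space would first be partitioned into triangles (e.g. a Delaunay triangulation of all available view poses), and only the triangle containing the query pose contributes.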
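The "Poisson blending loss" described above penalizes a generator for deviating from a gradient-domain (Poisson) composite while a perceptual term keeps the result photorealistic. The following NumPy sketch shows only the gradient-domain intuition: match the discrete Laplacian of the source inside the blend mask and the raw appearance of the target outside it. The masked-MSE appearance term here is a stand-in for the paper's perceptual loss, and all names are illustrative assumptions:

```python
import numpy as np

def laplacian(img):
    """4-neighbour discrete Laplacian of a 2D image (zero padding at borders)."""
    p = np.pad(img, 1)
    return (p[:-2, 1:-1] + p[2:, 1:-1] +
            p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)

def poisson_blend_loss(output, source, target, mask, lam=1.0):
    """Toy Poisson-style blending loss for a blended image `output`.

    Inside `mask` (the swapped face region) the output's Laplacian should
    follow the source face; outside it the output should match the target
    frame. `lam` trades off the two terms.
    """
    grad_term = np.mean(mask * (laplacian(output) - laplacian(source)) ** 2)
    appearance_term = np.mean((1.0 - mask) * (output - target) ** 2)
    return grad_term + lam * appearance_term
```

Matching Laplacians rather than pixel values is what lets the blended face inherit the target's skin tone and lighting while keeping the source's structure.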

Year:  2022        PMID: 35471874     DOI: 10.1109/TPAMI.2022.3155571

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  1 in total

1.  Enriching Facial Anti-Spoofing Datasets via an Effective Face Swapping Framework.

Authors:  Jiachen Yang; Guipeng Lan; Shuai Xiao; Yang Li; Jiabao Wen; Yong Zhu
Journal:  Sensors (Basel)       Date:  2022-06-22       Impact factor: 3.847

