Jiachen Yang, Guipeng Lan, Shuai Xiao, Yang Li, Jiabao Wen, Yong Zhu.
Abstract
In the era of rapid development of the Internet of Things, deep learning, and communication technologies, social media has become an indispensable element of daily life. However, while enjoying the convenience brought by technological innovation, people also face its negative impacts. Taking users' portraits in multimedia systems as an example: with the maturity of deep facial forgery technologies, personal portraits face malicious tampering and forgery, which poses a potential threat to personal privacy and carries broader social consequences. Current deep forgery detection methods are learning-based and therefore depend on data to a certain extent. Enriching facial anti-spoofing datasets is an effective way to address this problem. Therefore, we propose an effective face swapping framework based on StyleGAN. We use a feature pyramid network to extract facial features and map them into the latent space of StyleGAN. To realize the transformation of identity, we explore the representation of identity information and propose an adaptive identity editing module. We also design a simple and effective post-processing step to improve the realism of the generated images. Experiments show that the proposed method effectively performs face swapping and provides high-quality data for deep forgery detection, helping to secure multimedia systems.
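The abstract describes mapping face features into the StyleGAN latent space and then shifting them with an adaptive identity editing module. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of the general idea: per-layer weights (standing in for the identity weight generation module) decide how strongly each W+ latent layer is pushed along an identity-difference direction. All names, shapes, and the blending rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Typical StyleGAN2 W+ dimensions (assumption for illustration).
N_LAYERS, LATENT_DIM = 18, 512

def adaptive_identity_edit(w_target, id_source, id_target, layer_weights):
    """Shift the target face's latent codes toward the source identity.

    w_target      : (N_LAYERS, LATENT_DIM) W+ latent of the target face
    id_source     : (LATENT_DIM,) identity embedding of the source face
    id_target     : (LATENT_DIM,) identity embedding of the target face
    layer_weights : (N_LAYERS,) per-layer edit strengths
                    (0 = keep the layer, 1 = apply the full shift)
    """
    id_direction = id_source - id_target  # direction of identity change
    # Broadcast: each layer gets the same direction, scaled by its weight.
    return w_target + layer_weights[:, None] * id_direction[None, :]

rng = np.random.default_rng(0)
w = rng.normal(size=(N_LAYERS, LATENT_DIM))
id_src = rng.normal(size=LATENT_DIM)
id_tgt = rng.normal(size=LATENT_DIM)

# Coarse (early) layers control identity and geometry in StyleGAN,
# so this sketch weights them more heavily than the fine layers.
weights = np.linspace(1.0, 0.0, N_LAYERS)

w_edited = adaptive_identity_edit(w, id_src, id_tgt, weights)
assert w_edited.shape == (N_LAYERS, LATENT_DIM)
# The last layer has weight 0 and is left untouched.
assert np.allclose(w_edited[-1], w[-1])
```

In a full pipeline the edited latents would be fed to a pretrained StyleGAN generator; here the edit itself is the only step shown.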
Keywords: biomedical big data; facial anti-spoofing; generative adversarial network; latent feature analysis; multimedia security
Year: 2022 PMID: 35808193 PMCID: PMC9268752 DOI: 10.3390/s22134697
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Examples of face forgery results produced by existing technologies. The first row shows face reenactment; the second row shows face swapping. The face swapping result is produced by the framework proposed in this paper.
Figure 2. The overall structure of the network, which is composed of a feature pyramid network with ResNet, a mapping network, an identity weight generation module, an adaptive identity editing module, a generator, and a post-processing process.
Figure 3. Visualization of the output values of the identity weight generation module.
Figure 4. The structure of the adaptive identity editing module.
Figure 5. Partial results of our framework.
Figure 6. Results of the experiment verifying the superiority of the adaptive identity editing module.
Quantitative analysis of the superiority of the adaptive identity editing module. Bold represents the optimal value.
| Method | Id Similarity ↑ | Exp Similarity ↓ | FID ↓ |
|---|---|---|---|
| — | 0.57 | 0.25 | — |
| — | — | — | 58.8624 |
Figure 7. Comparison with other methods: FaceSwap, FSGAN, SimSwap, and FaceShifter.
Quantitative analysis of the comparative experiment with other methods. Bold represents the optimal value.
| Method | Id Similarity ↑ | Exp Similarity ↓ | FID ↓ |
|---|---|---|---|
| — | 0.37 | 3.32 | 216.78 |
| — | 0.45 | 1.64 | 67.54 |
| — | 0.54 | 1.31 | 69.84 |
| — | 0.51 | — | 58.9625 |
| — | — | 0.19 | — |