Ryota Natsume, Tatsuya Yatagawa, Shigeo Morishima

RSGAN: Face Swapping and Editing using Face and Hair Representation in Latent Spaces

CoRR abs/1804.03447 (2018)

arXiv

https://arxiv.org/abs/1804.03447

In this paper, we present an integrated system for automatically generating and editing face images through face swapping, attribute-based editing, and random face parts synthesis. The proposed system is based on a deep neural network that variationally learns the face and hair regions from large-scale face image datasets. Unlike conventional variational methods, the proposed network represents the latent spaces for faces and hairs individually. We refer to the proposed network as region-separative generative adversarial network (RSGAN). The proposed network handles face and hair appearances independently in the latent spaces; face swapping is then achieved by replacing the latent-space representations of the faces and reconstructing the entire face image from them. This latent-space approach performs face swapping robustly even on images for which previous methods fail due to inappropriate fitting of 3D morphable models. In addition, the proposed system can further edit face-swapped images with the same network by manipulating visual attributes or by composing them with randomly generated face or hair parts.
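To make the latent-swap idea concrete, below is a minimal PyTorch sketch of face swapping with separate face and hair latent codes, as described in the abstract. The module architectures, latent dimensionality, and function names (`RegionEncoder`, `ImageDecoder`, `swap_faces`) are illustrative assumptions and do not reflect the authors' actual RSGAN implementation.

```python
import torch
import torch.nn as nn

# Illustrative latent dimensionality; the real RSGAN dimensions are not given in the abstract.
Z_DIM = 128

class RegionEncoder(nn.Module):
    """Toy convolutional encoder mapping one region (face or hair) to a latent code."""
    def __init__(self, z_dim=Z_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, x):
        return self.net(x)

class ImageDecoder(nn.Module):
    """Toy decoder that composes face and hair latents back into a full image."""
    def __init__(self, z_dim=Z_DIM):
        super().__init__()
        self.fc = nn.Linear(2 * z_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z_face, z_hair):
        h = self.fc(torch.cat([z_face, z_hair], dim=1)).view(-1, 64, 16, 16)
        return self.net(h)

def swap_faces(decoder, face_enc, hair_enc, img_a, img_b):
    """Place A's face identity into B's hair/background context by swapping latents."""
    z_face_a = face_enc(img_a)   # latent code of A's face region
    z_hair_b = hair_enc(img_b)   # latent code of B's hair region
    return decoder(z_face_a, z_hair_b)

if __name__ == "__main__":
    face_enc, hair_enc, decoder = RegionEncoder(), RegionEncoder(), ImageDecoder()
    img_a = torch.randn(1, 3, 64, 64)  # stand-in for A's (pre-segmented) face region
    img_b = torch.randn(1, 3, 64, 64)  # stand-in for B's (pre-segmented) hair region
    swapped = swap_faces(decoder, face_enc, hair_enc, img_a, img_b)
    print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the two regions are encoded independently, attribute-based editing or random part synthesis would amount to perturbing or resampling one of the latent codes before decoding, which is the editing behavior the abstract describes.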