Abstract

Face swapping in images is the process of transforming an input identity into a target identity while preserving the former's facial expression, pose, and lighting. In this paper, we investigate a new face-swap method based on autoencoder networks that learn from two different image sets used for training, each an unstructured collection of people's photographs. The approach is enabled by framing face swapping as a style-transfer problem, where the goal is to render one image in the style of another. We design a new autoencoder network that produces photorealistic results. Moreover, by combining the autoencoder network with pre- and post-processing steps, the proposed method can also perform face swapping in low-resolution video.
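To make the autoencoder-based formulation concrete, the sketch below shows one common way such a system can be organized: a shared encoder with one decoder per identity, trained to reconstruct each identity's own photographs, and swapped at inference time by decoding identity A's code with identity B's decoder. This is a minimal illustrative sketch, not the paper's exact architecture; the layer sizes, 64x64 input crops, L1 reconstruction loss, and all function and variable names are assumptions introduced here for illustration.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared convolutional encoder mapping a 64x64 face crop to a latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Identity-specific decoder reconstructing a face from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.1),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.1),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),          # 32 -> 64
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 8, 8)
        return self.net(h)

# One shared encoder, one decoder per identity (assumed design, for illustration).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(batch_a, batch_b):
    """Each decoder learns to reconstruct its own identity from the shared code."""
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(batch_a)), batch_a) + \
           loss_fn(decoder_b(encoder(batch_b)), batch_b)
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def swap_a_to_b(face_a):
    """Swap at inference: encode a face of identity A, decode with B's decoder,
    so expression, pose, and lighting carry over while the identity changes."""
    return decoder_b(encoder(face_a))
```

In this kind of design, the shared encoder is pushed to capture identity-independent attributes (expression, pose, lighting), while each decoder carries the identity-specific appearance, which is what makes the cross-decoding swap possible. For video, such a core network would typically be wrapped in face detection and alignment beforehand and blending of the swapped face back into each frame afterwards, in the spirit of the pre- and post-processing steps mentioned above.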
