Abstract

Generative Adversarial Networks (GANs) have driven substantial progress in image-to-image translation. We focus on the problem of accurately extracting the makeup style from a reference facial image and transferring it to a target face. We propose a GAN-based generative model with Target-aware makeup Style Encoding and Verification, referred to as TSEV-GAN. The design is motivated by two insights: (a) When directly encoding the reference image, the encoder may focus on regions that are unimportant or undesirable for makeup transfer. To capture the style precisely, we encode the difference map between the reference image and its de-makeup counterpart, and then inject the resulting style code into the generator. (b) A generic real-fake discriminator cannot guarantee the correctness of the rendered makeup pattern. We therefore impose style representation learning on a conditional discriminator: by verifying style consistency between the reference and synthesized images, it induces the generator to replicate the desired makeup precisely. Extensive experiments on existing makeup benchmarks verify the effectiveness of our improvement strategies across a variety of makeup styles. Moreover, the proposed model outperforms existing state-of-the-art makeup transfer methods in terms of makeup similarity and preservation of makeup-irrelevant content.
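
The abstract does not include code, but the two ideas can be illustrated with a minimal PyTorch sketch. All module names, layer sizes, and the loss below (StyleEncoder, ConditionalDiscriminator, style_consistency_loss) are illustrative assumptions, not the authors' implementation; the sketch only shows the shape of the approach: encode the reference-minus-de-makeup difference map into a style code, and give the discriminator a style head whose outputs are matched between reference and synthesized images.

```python
# Minimal sketch of the two TSEV-GAN ideas described above.
# All names, sizes, and losses here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleEncoder(nn.Module):
    """Encodes the difference map (reference - de-makeup) into a style code."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, reference, de_makeup):
        diff = reference - de_makeup  # difference map isolates the makeup pattern
        return self.fc(self.net(diff))  # style code to be injected into the generator

class ConditionalDiscriminator(nn.Module):
    """Real/fake head plus a style head used for style-consistency verification."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(64, 1)            # generic real/fake score
        self.style_head = nn.Linear(64, style_dim)  # learned style representation

    def forward(self, img):
        h = self.backbone(img)
        return self.adv_head(h), self.style_head(h)

def style_consistency_loss(disc, reference, synthesized):
    """Pull the synthesized image's style representation toward the reference's,
    so the generator is pushed to replicate the makeup pattern itself."""
    _, s_ref = disc(reference)
    _, s_syn = disc(synthesized)
    return F.l1_loss(s_syn, s_ref.detach())

# Usage with dummy tensors:
enc, disc = StyleEncoder(), ConditionalDiscriminator()
ref = torch.randn(2, 3, 128, 128)
demakeup = torch.randn(2, 3, 128, 128)
fake = torch.randn(2, 3, 128, 128)
code = enc(ref, demakeup)                        # (2, 64) style code
loss = style_consistency_loss(disc, ref, fake)   # added to the generator objective
```

Detaching the reference's style representation in the loss treats the discriminator's style head as the target embedding, so the gradient flows into the generator rather than collapsing the style space; this is one plausible reading of "verification", not necessarily the paper's exact loss.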
