Abstract

Generating a face from human eyes, termed eyes-to-face generation, is an interesting research topic in face synthesis with great potential in the field of public security. One of the main challenges in eyes-to-face generation is the information imbalance between inputs and outputs: the outputs are complete facial images, while the inputs contain only the limited information in the eye region. Existing methods generate faces directly from eyes without considering facial information that may also be available (e.g., facial attributes), resulting in inaccurate predictions and high uncertainty for features weakly correlated with the eyes (e.g., hairstyle, moustache, facial contour). To address this challenge, we propose a two-stage solution, named EA2F-GAN, that dynamically optimizes eyes-to-face generation via an attribute vocabulary. In addition, we construct a dataset named TEAF based on the public datasets CelebA and LFW, containing 138,934 triples of eye image, attribute vocabulary, and face image. Extensive experimental results show that, by incorporating additional facial attributes, our approach synthesizes realistic faces with high consistency to the originals, significantly outperforming state-of-the-art methods.
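The core idea, conditioning face synthesis on both an eye crop and a set of attribute words, can be illustrated with a minimal conditional-generator sketch. The PyTorch code below is purely illustrative: the module names, layer sizes, input resolutions, and the embedding-based fusion scheme are assumptions, and it does not reproduce the paper's actual two-stage EA2F-GAN architecture.

    # Minimal sketch of attribute-conditioned face generation in the spirit
    # of EA2F-GAN. All names, sizes, and the fusion scheme are illustrative
    # assumptions, not the paper's actual architecture.
    import torch
    import torch.nn as nn

    class EyeEncoder(nn.Module):
        """Encode a cropped eye region (3 x 64 x 256 assumed) into a feature vector."""
        def __init__(self, dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(256, dim),
            )
        def forward(self, eyes):
            return self.net(eyes)

    class AttrConditionedGenerator(nn.Module):
        """Fuse eye features with embedded attribute words, then decode a face."""
        def __init__(self, vocab_size=40, eye_dim=256, attr_dim=128):
            super().__init__()
            # Each attribute word (e.g. "moustache", "bangs") is an index into
            # a learned embedding table; multiple attributes are summed.
            self.attr_emb = nn.EmbeddingBag(vocab_size, attr_dim, mode="sum")
            self.fc = nn.Linear(eye_dim + attr_dim, 512 * 4 * 4)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )
        def forward(self, eye_feat, attr_ids):
            cond = torch.cat([eye_feat, self.attr_emb(attr_ids)], dim=1)
            x = self.fc(cond).view(-1, 512, 4, 4)
            return self.decoder(x)  # 3 x 64 x 64 face image

    # Usage: one eye crop plus two attribute ids -> a synthesized face tensor.
    enc, gen = EyeEncoder(), AttrConditionedGenerator()
    eyes = torch.randn(1, 3, 64, 256)
    attrs = torch.tensor([[5, 17]])  # hypothetical indices for two attributes
    face = gen(enc(eyes), attrs)
    print(face.shape)  # torch.Size([1, 3, 64, 64])

In this sketch the attribute vocabulary supplies exactly the information that the eye region lacks (hairstyle, facial hair, contour), which is the intuition behind conditioning the generator on both inputs.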
