Abstract

The three-dimensional morphable model (3DMM) is the most widely used representation for recovering a three-dimensional (3-D) face from a target image. Although 3DMMs have demonstrated a powerful capability to represent various facial shapes in natural images, they are limited in capturing the texture variations of in-the-wild human faces. Based on the fact that fitting a 3-D facial model to an image determines the corresponding UV map, we propose a novel method for facial fitting and synthesis by competitively training two deep learning networks, one for facial alignment and one for UV texture completion. When the completion network is trained on well-aligned UV maps, it can model facial textures precisely and, consequently, fill the missing regions more completely. Accordingly, we use a UV completion network, denoted as a UV energy-based generative adversarial network (UV EB-GAN), to discriminate whether a UV map from the alignment network is well aligned by defining the generative loss of the completion network as the energy. This competitive learning makes it possible to train the completion network without ground-truth facial UV maps and to train the alignment network without hard constraints or regularization terms. The proposed network can be trained in an end-to-end manner, yielding the facial texture, albedo, lighting parameters, and 3-D facial shape. Experiments on 2-D alignment, 3-D reconstruction, texture synthesis, and illumination estimation verify that the proposed method achieves remarkable improvements over state-of-the-art methods.
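The competitive objective sketched in the abstract follows the general energy-based GAN formulation, in which a reconstruction loss serves as the energy: the completion network assigns low energy to well-aligned UV maps, while the alignment network is trained to produce maps that receive low energy. The following minimal NumPy sketch illustrates only that loss structure; the mean-squared reconstruction energy, margin value, and toy inputs are illustrative assumptions, not the paper's actual networks or data.

```python
import numpy as np

def ebgan_losses(energy_real, energy_fake, margin=1.0):
    """EB-GAN-style adversarial losses.

    The 'discriminator' (here, the UV completion network) is trained to
    assign low energy to well-aligned UV maps and energy of at least
    `margin` to misaligned ones; the 'generator' (the alignment network)
    is trained to produce maps that receive low energy.
    """
    # Completion-network loss: push real energy down and, via the hinge
    # term, push fake energy up to at least the margin.
    d_loss = energy_real + max(0.0, margin - energy_fake)
    # Alignment-network loss: minimize the energy of its own output.
    g_loss = energy_fake
    return d_loss, g_loss

def reconstruction_energy(uv_map, completed):
    """Mean-squared reconstruction error used as the energy (an
    illustrative stand-in for the completion network's generative loss)."""
    return float(np.mean((uv_map - completed) ** 2))

# Toy UV maps: a perfect completion gives energy 0, a poor one energy 1.
well_aligned = np.ones((4, 4))
completed_well = np.ones((4, 4))
misaligned = np.zeros((4, 4))
completed_mis = np.ones((4, 4))

e_real = reconstruction_energy(well_aligned, completed_well)   # 0.0
e_fake = reconstruction_energy(misaligned, completed_mis)      # 1.0
d_loss, g_loss = ebgan_losses(e_real, e_fake, margin=2.0)
```

In this toy setting the completion loss is `0.0 + max(0, 2.0 - 1.0) = 1.0` and the alignment loss is `1.0`; in training, both networks would be updated alternately to minimize their respective losses.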
