Abstract

Unlike natural images, meshes have irregular structures whose topological similarity can hardly be handled by classical deep learning. Parameterization offers a way to represent meshes as geometry and normal images, which reflect the correlation between neighboring sample locations. Generative Adversarial Networks (GANs) can efficiently generate images without explicitly computing the probability density of the underlying distribution. However, existing GANs such as the Coupled Generative Adversarial Network (CoGAN) generally have two drawbacks: (1) they cannot process unnatural images, and (2) they insufficiently exploit the inherent relation between a normal image and its corresponding geometry image. To address these issues, this paper proposes an efficient method named Prediction-Compensation Generative Adversarial Network (PCGAN), which learns a joint distribution of geometry and normal images in order to generate meshes with two GANs. Consistency between the two GANs for geometry and normal is guaranteed by a sequence of prediction-compensation pairs, which progressively estimates the normal image from the geometry image and compensates the geometry from the normal. In particular, the prediction has a closed-form expression, which provides high estimation accuracy and reduces training complexity. Extensive experiments on facial mesh generation indicate that PCGAN outperforms CoGAN and other architectures in retaining facial geometry and in generating realistic face meshes with rich attributes such as facial expression and morphology. Moreover, quantitative evaluations demonstrate its superior performance compared with the methods mentioned above.
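The abstract notes that the normal image can be predicted from the geometry image in closed form. One standard closed-form estimate for this step, sketched below, takes per-pixel finite-difference tangents of the geometry image and forms their cross product; this is an illustrative assumption, and the paper's exact prediction operator may differ.

```python
import numpy as np

def predict_normals(geom):
    """Estimate a normal image from a geometry image.

    geom: (H, W, 3) array where each pixel stores an (x, y, z) surface point.
    Sketch only: uses finite differences along the two image axes and a
    cross product as the closed-form normal estimate (an assumption; the
    paper's actual prediction operator is not specified in the abstract).
    """
    du = np.gradient(geom, axis=1)  # tangent vectors along image columns
    dv = np.gradient(geom, axis=0)  # tangent vectors along image rows
    n = np.cross(du, dv)            # unnormalized per-pixel surface normal
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-8)  # unit normals, guarding zero-length
```

For a planar geometry image, this estimate recovers the constant normal of the plane, which is the sanity check one would expect of any such closed-form predictor.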
