Abstract

The reconstruction of photorealistic 3D face geometry, texture, and reflectance (BRDF) is one of the most active areas in computer vision, graphics, and machine learning. However, acquiring facial reflectance remains challenging. In this article, we propose an image-translation-based method for estimating facial reflectance properties from a single portrait image. From an RGB face image, we obtain a highly detailed BRDF. To achieve this, we reverse-engineer the rendering process: face images are rendered from the obtained texture maps under the Blinn-Phong illumination model to form training data pairs. We also apply random rotate-and-crop and sliding-window-crop augmentation, and optimize the network weights by minimizing an adversarial loss and a reconstruction loss. As demonstrated in a series of quantitative and qualitative experiments, our method achieves superior performance compared to state-of-the-art methods.
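
For reference, a minimal sketch of per-point Blinn-Phong shading, the illumination model named in the abstract for rendering the training pairs, might look like the following. The function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize(v):
    """Return the unit-length version of a 3D vector."""
    return v / np.linalg.norm(v)

def blinn_phong(normal, light_dir, view_dir,
                ambient, diffuse, specular, shininess, light_color=1.0):
    """Blinn-Phong shading: ambient + Lambertian diffuse + specular terms.

    All direction arguments are 3D vectors pointing away from the surface point.
    The reflectance coefficients (ambient, diffuse, specular, shininess) are the
    per-pixel BRDF parameters the method aims to recover.
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    h = normalize(l + v)                         # half-vector between light and view
    diff = max(np.dot(n, l), 0.0)                # Lambertian diffuse factor
    spec = max(np.dot(n, h), 0.0) ** shininess   # specular highlight factor
    return ambient + light_color * (diffuse * diff + specular * spec)
```

Evaluating this model per pixel with the recovered texture and reflectance maps yields synthetic renderings that can be paired with the source textures as training data, which is the role the abstract describes for it.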
