Abstract

Given a set of unstructured photographs of a subject under unknown lighting, 3D geometry reconstruction is relatively easy, but reflectance estimation remains a challenge. This is because it requires disentangling lighting from reflectance in the ambiguous observations. Solutions exist leveraging statistical, data-driven priors to output plausible reflectance maps even in the under-constrained single-view, unknown lighting setting. We propose a very low-cost inverse optimization method that does not rely on data-driven priors, to obtain high-quality diffuse and specular, albedo and normal maps in the setting of multi-view unknown lighting. We introduce compact neural networks that learn the shading of a given scene by efficiently finding correlations in the appearance across the face. We jointly optimize the implicit global illumination of the scene in the networks with explicit diffuse and specular reflectance maps that can subsequently be used for physically-based rendering. We analyze the veracity of results on ground truth data, and demonstrate that our reflectance maps maintain more detail and greater personal identity than state-of-the-art deep learning and differentiable rendering methods.
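To make the joint optimization idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a compact MLP stands in for the implicit shading of the scene, while an explicit per-texel diffuse albedo map is optimized jointly against the observed appearance. The network architecture, Lambertian toy renderer, and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ShadingMLP(nn.Module):
    """Compact network mapping surface normals to a scalar shading value.
    Stand-in for the paper's implicit global-illumination networks."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # shading must be non-negative
        )

    def forward(self, normals):
        return self.net(normals)

torch.manual_seed(0)

# Synthetic "observations": a toy Lambertian scene with unknown lighting.
n_texels = 256
normals = nn.functional.normalize(torch.randn(n_texels, 3), dim=-1)
true_albedo = torch.rand(n_texels, 1)
light = nn.functional.normalize(torch.tensor([0.3, 0.5, 1.0]), dim=0)
observed = true_albedo * normals.matmul(light).clamp(min=0.0).unsqueeze(-1)

# Implicit shading network and explicit albedo map, optimized jointly.
shading_net = ShadingMLP()
albedo = torch.full((n_texels, 1), 0.5, requires_grad=True)
opt = torch.optim.Adam(list(shading_net.parameters()) + [albedo], lr=1e-2)

with torch.no_grad():
    initial_loss = ((albedo * shading_net(normals) - observed) ** 2).mean().item()

for step in range(500):
    opt.zero_grad()
    rendered = albedo * shading_net(normals)  # appearance = albedo * shading
    loss = ((rendered - observed) ** 2).mean()
    loss.backward()
    opt.step()

final_loss = loss.item()
```

The explicit `albedo` tensor plays the role of the reflectance maps that can later be exported for physically-based rendering, while the shading network absorbs the unknown illumination.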
