Abstract

We propose a neural rendering-based method for reconstructing the geometry and BRDF of reflective objects from multi-view images captured in unknown environments. Multi-view reconstruction of reflective objects is extremely challenging because specular reflections are view-dependent and therefore violate multi-view consistency, the cornerstone of most multi-view reconstruction methods. Recent neural rendering techniques can model the interaction between ambient light and object surfaces to accommodate view-dependent reflections, making it possible to reconstruct reflective objects from multi-view images. However, accurately modeling ambient light in neural rendering is difficult. To address this, we propose a two-step approach. First, by introducing view-dependent photometric losses, our method accurately reconstructs the geometry of reflective objects. Then, with the object geometry fixed, we use more accurate sampling to recover the ambient light and the BRDF of the object. Experiments show that our method accurately reconstructs the geometry and BRDF of reflective objects from RGB images alone, without knowing the ambient light or object masks.
