Abstract

Most active scene recovery techniques assume that a scene point is illuminated only directly by the illumination source. Consequently, global illumination effects due to inter-reflections, sub-surface scattering and volumetric scattering introduce strong biases in the recovered scene shape. Our goal is to recover scene properties in the presence of global illumination. To this end, we study the interplay between global illumination and the depth cue of illumination defocus. By expressing both of these effects as low-pass filters, we derive an approximate invariant that can be used to separate them without explicitly modeling the light transport. This is directly useful in any scenario where limited depth-of-field devices (such as projectors) are used to illuminate scenes with global light transport and significant depth variations. We show two applications: (a) accurate depth recovery in the presence of global illumination, and (b) factoring out the effects of defocus for correct direct-global separation in large depth scenes. We demonstrate our approach using scenes with complex shapes, reflectances, textures and translucencies.
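The second application builds on the classical direct-global separation from shifted high-frequency illumination patterns (Nayar et al.), which the abstract takes as background. The sketch below shows only that standard max/min separation; the array names and shapes are illustrative assumptions, and it deliberately omits the defocus compensation that is this paper's contribution. Without that correction, illumination defocus attenuates the projected pattern and biases the per-pixel minimum, corrupting the recovered global component in scenes with large depth variation.

```python
import numpy as np

def separate_direct_global(images):
    """Classical max/min direct-global separation from high-frequency patterns.

    `images` is an illustrative stack of photos of the scene lit by shifted
    high-frequency binary patterns, with shape (K, H, W). Under the usual
    assumptions (pattern frequency well above the spatial extent of global
    transport, ~50% illuminated pixels), the per-pixel extrema satisfy
    approximately:
        L_max ~ direct + global / 2
        L_min ~ global / 2
    Illumination defocus violates these assumptions, which is what the
    paper's invariant is designed to factor out (not shown here).
    """
    L_max = images.max(axis=0)      # each pixel at its brightest (lit by pattern)
    L_min = images.min(axis=0)      # each pixel at its darkest (lit only by global light)
    direct = L_max - L_min          # direct component estimate
    global_comp = 2.0 * L_min       # global component estimate
    return direct, global_comp
```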
