Abstract

Proper lighting is a key element in developing a photorealistic computer-generated image. This paper introduces a novel approach for robustly extracting lighting conditions from an RGB-D (RGB + depth) image. Existing studies on lighting estimation have developed image analysis techniques by constraining the scope and conditions of the target objects. For example, they have assumed that the objects have homogeneous surfaces, that inter-reflections can be ignored, and that their three-dimensional (3D) geometries consist of a noise-free mesh. These assumptions, however, are unrealistic; real objects often have complicated non-homogeneous surfaces, exhibit inter-reflections that contribute a considerable portion of the illumination, or are captured with unpredictable sensor noise. To overcome these limitations, this study takes non-homogeneous surface objects into account in the inverse lighting framework via a segment-based scene representation. Moreover, we employ outlier removal and appropriate region selection to achieve robust lighting estimation in the presence of inter-reflections and noise. We demonstrate the effectiveness of the proposed approach by conducting extensive experiments on synthetic and real RGB-D images.
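
The abstract does not detail the underlying formulation, but the general flow of segment-based inverse lighting with outlier removal can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes a Lambertian reflectance model, second-order spherical-harmonic (SH) lighting, one albedo per segment, and residual-thresholded outlier rejection, with intensity, normals, and segments standing in for quantities derived from the RGB-D image and its segmentation.

# Minimal sketch (illustrative only, not the paper's method): alternating
# estimation of SH lighting coefficients and per-segment albedos, rejecting
# pixels whose residuals suggest inter-reflections or sensor noise.
import numpy as np

def sh_basis(normals):
    # Second-order SH basis (9 terms) evaluated at unit normals; shape (N, 9).
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.ones_like(x), y, z, x,
        x * y, y * z, 3.0 * z**2 - 1.0, x * z, x**2 - y**2,
    ], axis=1)

def estimate_lighting(intensity, normals, segments, n_iters=3, thresh=2.5):
    # intensity: (N,) pixel intensities; normals: (N, 3) unit normals from depth;
    # segments: (N,) integer segment labels from the scene segmentation.
    B = sh_basis(normals)
    seg_ids = np.unique(segments)
    albedo = {s: 1.0 for s in seg_ids}          # initial per-segment albedo
    inliers = np.ones(len(intensity), dtype=bool)

    for _ in range(n_iters):
        # Solve for the 9 SH lighting coefficients given current albedos.
        a = np.array([albedo[s] for s in segments])
        A = B[inliers] * a[inliers, None]
        light, *_ = np.linalg.lstsq(A, intensity[inliers], rcond=None)

        # Update each segment's albedo given the estimated shading.
        shading = B @ light
        for s in seg_ids:
            m = (segments == s) & inliers & (np.abs(shading) > 1e-6)
            if m.any():
                albedo[s] = np.median(intensity[m] / shading[m])

        # Reject pixels with large residuals (outlier removal step).
        a = np.array([albedo[s] for s in segments])
        residual = intensity - a * shading
        sigma = residual[inliers].std() + 1e-9
        inliers = np.abs(residual) < thresh * sigma

    return light, albedo

In practice the normals would be computed from the depth channel and the segments from an image segmentation step; the paper's actual region selection and robustness machinery is more involved than this least-squares sketch.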
