Abstract

The authors present a framework for image-based surface appearance editing of light-field data. Their framework improves over the state of the art without requiring a full "inverse rendering," so neither complete geometric data nor the presence of highly specular or reflective surfaces is required. It is robust to noisy or missing data and handles many types of camera array setups, ranging from a dense light field to a wide-baseline stereo-image pair. They start by extracting intrinsic layers from the light-field image set while maintaining consistency between views. Each layer is then decomposed separately into frequency bands, to which a wide range of "band-sifting" operations is applied. This approach enables a rich variety of perceptually plausible surface finishes and materials, achieving novel effects such as translucency. Their GPU-based implementation allows interactive editing of an arbitrary light-field view, which can then be consistently propagated to the rest of the views. The authors provide an extensive evaluation of their framework on various datasets and against state-of-the-art solutions.
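To make the "band-sifting" idea concrete, the sketch below shows one common way such an operation can be realized: a layer (e.g. log luminance) is split into frequency bands with differences of Gaussians, each band is further separated by sign, and selected subbands are rescaled before recombination. This is a minimal illustration under assumed choices (Gaussian scales, a `gains` dictionary keyed by band index and sign), not the authors' actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_sift(layer, sigmas=(1, 2, 4, 8), gains=None):
    """Band-sifting sketch: decompose `layer` into frequency bands via
    differences of Gaussians, split each band by sign, rescale selected
    subbands, and recombine.  `gains` maps (band_index, '+'/'-') to a
    multiplier; unspecified subbands keep gain 1.0 (identity)."""
    if gains is None:
        gains = {}
    # Progressively blurred copies; adjacent differences give the bands.
    blurred = [layer] + [gaussian_filter(layer, s) for s in sigmas]
    bands = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
    out = blurred[-1].copy()  # low-frequency residual
    for i, band in enumerate(bands):
        pos = np.maximum(band, 0.0)   # bright-side detail
        neg = np.minimum(band, 0.0)   # dark-side detail
        out += gains.get((i, '+'), 1.0) * pos
        out += gains.get((i, '-'), 1.0) * neg
    return out
```

With all gains left at 1.0 the bands telescope back to the input exactly; boosting, say, the positive high-frequency subband (`gains={(0, '+'): 2.0}`) exaggerates bright fine-scale detail, the kind of selective edit that changes perceived glossiness.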
