Abstract

Image-based relighting, projector compensation, and depth/normal reconstruction are three important tasks of projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, they have traditionally been addressed independently, often with different prerequisites, devices, and sampling images; in practice, addressing them one by one can be cumbersome for SAR applications. In this paper, we propose a novel end-to-end trainable model named DeProCams that explicitly learns the photometric and geometric mappings of ProCams; once trained, DeProCams can be applied simultaneously to all three tasks. DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attribute estimation, rough direct light estimation, and photorealistic neural rendering. A particular challenge addressed by DeProCams is occlusion, for which we exploit the epipolar constraint and propose a novel differentiable projector direct light mask that can be learned end-to-end along with the other modules. To further improve convergence, we apply photometric and geometric constraints so that the intermediate results are plausible. In our experiments, DeProCams shows clear advantages over previous methods, achieving promising quality while being fully differentiable. Moreover, by solving the three tasks in a unified model, DeProCams eliminates the need for additional optical devices, radiometric calibrations, and structured light.
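To make the three-subprocess decomposition concrete, below is a minimal PyTorch sketch of how such a pipeline might be wired together. All module names, layer choices, the learnable warp proxy, and the soft occlusion heuristic are illustrative assumptions for this sketch, not the authors' actual architecture; in particular, the soft mask here is only a differentiable stand-in for the paper's epipolar-based projector direct light mask.

    # Minimal sketch of a DeProCams-style decomposition. All names
    # (DeProCamsSketch, shading_net, render_net, flow) are hypothetical.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeProCamsSketch(nn.Module):
        """Toy stand-in: decompose projector-to-camera image formation into
        (1) shading attribute estimation, (2) rough direct light estimation
        with a differentiable occlusion mask, and (3) neural rendering."""

        def __init__(self, ch: int = 3):
            super().__init__()
            # (1) Estimate per-pixel shading attributes from the
            #     camera-captured surface image.
            self.shading_net = nn.Sequential(
                nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 2 * ch, 3, padding=1),
            )
            # (3) Render the camera-view image from the masked direct light
            #     and the estimated shading attributes.
            self.render_net = nn.Sequential(
                nn.Conv2d(3 * ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, ch, 3, padding=1), nn.Sigmoid(),
            )
            # Learnable low-resolution offset field as a crude proxy for the
            # geometric projector-to-camera mapping (assumption: the real
            # model derives this from the estimated scene geometry).
            self.flow = nn.Parameter(torch.zeros(1, 8, 8, 2))

        def forward(self, proj_img, cam_surface):
            b, _, h, w = proj_img.shape
            # (2) Rough direct light: warp the projector image into the
            #     camera view with a differentiable grid sample.
            identity = torch.eye(2, 3).unsqueeze(0).repeat(b, 1, 1)
            base = F.affine_grid(identity, proj_img.shape, align_corners=False)
            offset = F.interpolate(
                self.flow.permute(0, 3, 1, 2), size=(h, w),
                mode="bilinear", align_corners=False).permute(0, 2, 3, 1)
            warped = F.grid_sample(proj_img, base + offset, align_corners=False)
            # Soft, fully differentiable mask (placeholder heuristic): damp
            # pixels that receive little warped projector light.
            mask = torch.sigmoid(10 * warped.mean(1, keepdim=True))
            direct = warped * mask
            # (1) + (3): shading attributes condition the neural renderer.
            attrs = self.shading_net(cam_surface)
            return self.render_net(torch.cat([direct, attrs], dim=1))

    # Usage: predict the camera-captured result of projecting proj_img,
    # to be supervised against a real camera capture during training.
    model = DeProCamsSketch()
    proj_img = torch.rand(1, 3, 64, 64)      # projector input image
    cam_surface = torch.rand(1, 3, 64, 64)   # captured surface appearance
    pred_cam = model(proj_img, cam_surface)
    print(pred_cam.shape)                    # torch.Size([1, 3, 64, 64])

Because every step (warping, masking, rendering) is differentiable, the whole pipeline can be trained end-to-end from projector/camera image pairs, which is what lets one trained model serve relighting, compensation, and depth/normal reconstruction at once.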
