Abstract

The availability of depth information in an image enables the simulation of distinct visual effects (e.g., refocus, desaturation, haze) that depend on the distance from the camera to the objects in the scene. To generate depth from color data in single images, existing techniques typically use learning-based strategies or require user-guided depth annotations. Learning-based techniques suffer from generality issues, while user-guided techniques solve a costly optimization problem that prevents real-time feedback on the depth map generated from the user annotation. In this paper, we overcome the latter problem and propose a GPU-based algorithm that provides live feedback on the output depth map estimated during user annotation. We follow previous work and treat depth map estimation as a 2D Poisson problem that can be optimized using a sparse linear solver. However, we change the way the sparse linear coefficients are computed to favor a smoother, spatially coherent depth map, able to provide the desired visual effects. Moreover, our approach is designed to run almost entirely on the GPU, achieving real-time performance even for high-resolution images.
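The abstract formulates user-guided depth estimation as a 2D Poisson problem solved by a sparse linear system. As a rough illustration of that formulation (not the paper's actual method or its GPU implementation), the sketch below builds a color-weighted graph Laplacian over the image grid, adds a data term that pins user-annotated pixels to their scribbled depth values, and solves the resulting sparse system with SciPy on the CPU. The function name, weighting scheme, and parameters `beta` and `lam` are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def estimate_depth(gray, scribbles, mask, beta=10.0, lam=100.0):
    """Sketch: screened-Poisson depth from sparse user annotations.

    gray      : (H, W) float image in [0, 1], used to weight smoothness
    scribbles : (H, W) annotated depth values (valid where mask is True)
    mask      : (H, W) bool, True at user-annotated pixels
    """
    H, W = gray.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)

    rows, cols, vals = [], [], []
    def add_edges(a, b, ga, gb):
        # Edge weight decays with color difference, encouraging a
        # smooth depth map that still respects image discontinuities.
        w = np.exp(-beta * (ga - gb) ** 2)
        rows.extend([a, b]); cols.extend([b, a]); vals.extend([-w, -w])

    # 4-connected grid: horizontal and vertical neighbor edges.
    add_edges(idx[:, :-1].ravel(), idx[:, 1:].ravel(),
              gray[:, :-1].ravel(), gray[:, 1:].ravel())
    add_edges(idx[:-1, :].ravel(), idx[1:, :].ravel(),
              gray[:-1, :].ravel(), gray[1:, :].ravel())

    L = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()
    # Graph Laplacian: diagonal holds each pixel's total edge weight.
    L = L - sp.diags(np.asarray(L.sum(axis=1)).ravel())

    # Data term pins annotated pixels to their scribbled depth.
    d = lam * mask.ravel().astype(float)
    A = L + sp.diags(d)
    b = d * scribbles.ravel()
    return spsolve(A.tocsc(), b).reshape(H, W)
```

The paper's contribution lies in how the coefficients of this system are chosen and in solving it on the GPU fast enough for live feedback while the user draws annotations; a direct CPU solve like the one above would not meet that interactivity requirement at high resolutions.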
