Abstract

Depth images acquired by consumer depth sensors, such as Kinect and time-of-flight (ToF) cameras, are usually of low resolution and insufficient quality, which limits the applications of these sensors. Depth map enhancement is therefore essential. Most existing depth map super-resolution methods use an RGB image of the same scene as guidance to up-sample the depth map. However, some edges in the RGB image, such as texture edges, have no counterpart in the depth image, so most existing methods introduce texture-copy artifacts in these areas. To address this problem, we propose an approach that incorporates semantic information from the RGB image. Moreover, existing methods rely on various explicit filter constructions or hand-designed objective functions, which makes them difficult to understand, improve, and accelerate within a coherent framework. In this paper, we use a learning-based approach to construct a joint filter based on convolutional neural networks. In contrast to existing methods that consider only the RGB guidance image, our method suppresses the texture-copy problem. We validate the effectiveness of the proposed method through extensive comparisons with state-of-the-art methods on the NYU v2 dataset. Experimental results show that our method suppresses texture-copy artifacts.
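To make the guidance idea concrete: classical RGB-guided depth up-sampling (the family of methods the abstract contrasts with the proposed CNN joint filter) can be illustrated by joint bilateral upsampling, where each high-resolution depth value is a weighted average of nearby low-resolution depth samples, with weights driven by the high-resolution guidance image. The sketch below is an assumption-laden illustration of that baseline idea, not the authors' network; the function and parameter names (`sigma_s`, `sigma_r`, `radius`) are hypothetical.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=0.1, radius=1):
    """Illustrative joint bilateral upsampling (a classical guided baseline,
    not the paper's method). depth_lr: low-res depth (h, w); guide_hr:
    high-res grayscale guidance (h*scale, w*scale)."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    # Guidance sampled at the low-res grid, for comparing intensities.
    guide_lr = guide_hr[::scale, ::scale]
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y // scale, x // scale  # nearest low-res neighbor
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = cy + dy, cx + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        # Spatial weight: closer low-res samples count more.
                        ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                        # Range weight from the GUIDANCE image: this is where
                        # RGB texture edges leak into the depth output
                        # (the texture-copy problem the abstract describes).
                        diff = guide_hr[y, x] - guide_lr[ny, nx]
                        wr = np.exp(-(diff * diff) / (2 * sigma_r ** 2))
                        num += ws * wr * depth_lr[ny, nx]
                        den += ws * wr
            out[y, x] = num / den
    return out
```

Because the range weights come from the RGB guidance alone, intensity edges that do not correspond to depth discontinuities still modulate the averaging, which is the mechanism behind texture-copy artifacts; the paper's semantic-aware CNN filter is motivated by exactly this failure mode.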
