Abstract

This paper proposes an image-guided depth super-resolution framework to improve the quality of depth maps captured by low-cost depth sensors such as the Microsoft Kinect. First, a contour-guided fast marching method is proposed to preprocess the raw depth map and recover the missing data. Then, a convex optimization model based on non-local total generalized variation (NL-TGV) regularization is constructed to up-sample the preprocessed depth map to high resolution. To preserve the sharpness of depth discontinuities, the color image and its multi-level segmentation information are used to assign the weights within the NL-TGV term through a novel weight-combining scheme. The texture energy of the color image and the local structure coherence around neighboring pixels in the low-resolution depth map are applied to adjust the combination weights, further suppressing texture transfer. Quantitative and qualitative evaluations on the Middlebury datasets and real-sensor datasets show that the proposed method produces promising results.
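
To make the two-stage pipeline concrete, the sketch below illustrates the data flow only: hole filling in the raw depth map via fast marching inpainting, followed by color-guided upsampling. It is not the paper's contour-guided fast marching method or its NL-TGV optimization model; it substitutes OpenCV's Telea fast-marching inpainting and a hand-written joint bilateral upsampling as simplified stand-ins, and the function names (`fill_depth_holes`, `guided_upsample`) and parameter values are illustrative assumptions.

```python
# Simplified stand-in for the two-stage pipeline described in the abstract:
# (1) fill missing depth values with fast-marching inpainting,
# (2) upsample the filled depth map using weights from the HR color image.
# NOT the paper's contour-guided FMM or NL-TGV model; illustration only.
import cv2
import numpy as np

def fill_depth_holes(depth, inpaint_radius=3):
    """Fill zero-valued (missing) depth pixels via fast-marching inpainting."""
    mask = (depth == 0).astype(np.uint8)
    depth_8u = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    filled = cv2.inpaint(depth_8u, mask, inpaint_radius, cv2.INPAINT_TELEA)
    # Map the 8-bit result back to the original depth range.
    return filled.astype(np.float32) / 255.0 * (depth.max() - depth.min()) + depth.min()

def guided_upsample(depth_lr, color_hr, sigma_s=4.0, sigma_r=0.1, radius=4):
    """Joint bilateral upsampling: range weights come from the HR color image."""
    h, w = color_hr.shape[:2]
    depth_up = cv2.resize(depth_lr, (w, h), interpolation=cv2.INTER_LINEAR).astype(np.float32)
    guide = color_hr.astype(np.float32) / 255.0
    out = np.zeros((h, w), dtype=np.float32)
    norm = np.zeros((h, w), dtype=np.float32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth_up, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
            spatial = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
            range_w = np.exp(-np.sum((guide - shifted_g) ** 2, axis=2) / (2.0 * sigma_r ** 2))
            weight = spatial * range_w
            out += weight * shifted_d
            norm += weight
    return out / np.maximum(norm, 1e-8)

# Example usage (assumed inputs): a low-resolution raw depth map with zero-valued
# holes and the registered high-resolution color image.
# depth_lr = cv2.imread("depth_lr.png", cv2.IMREAD_ANYDEPTH).astype(np.float32)
# color_hr = cv2.imread("color_hr.png")
# depth_hr = guided_upsample(fill_depth_holes(depth_lr), color_hr)
```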
