Abstract

This paper proposes an image-guided depth super-resolution framework to improve the quality of depth maps captured by low-cost depth sensors such as the Microsoft Kinect. First, a contour-guided fast marching method is proposed to preprocess the raw depth map and recover its missing data. Then, using non-local total generalized variation (NL-TGV) regularization, a convex optimization model is constructed to up-sample the preprocessed depth map to a high-resolution one. To preserve sharp depth discontinuities, the color image and its multi-level segmentation are used to assign the weights within the NL-TGV regularizer through a novel weight-combining scheme. The texture energy of the color image and the local structural coherence among neighboring pixels in the low-resolution depth map are then applied to adjust the combination weights, further suppressing texture transfer. Quantitative and qualitative evaluations on the Middlebury datasets and real-sensor data show that the proposed method yields promising results in quality.
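For orientation, the second-order non-local TGV regularizer that such convex models commonly build on can be sketched as below. This is the standard NLTGV^2 form from the literature, not the paper's exact energy; the support weights \alpha_1(x,y) and \alpha_0(x,y) are placeholders for the image-guided weights that the proposed combining scheme derives from the color image and its multi-level segmentation:

\mathrm{NLTGV}^{2}(u) \;=\; \min_{v} \sum_{x} \sum_{y \in \mathcal{N}(x)} \alpha_{1}(x,y)\, \bigl| u(x) - u(y) - \langle v(x),\, x - y \rangle \bigr| \;+\; \sum_{x} \sum_{y \in \mathcal{N}(x)} \alpha_{0}(x,y)\, \bigl| v(x) - v(y) \bigr|

A typical image-guided upsampling energy then balances this regularizer against fidelity to the preprocessed low-resolution depth samples d mapped onto the high-resolution grid, for example

\min_{u} \; \mathrm{NLTGV}^{2}(u) \;+\; \lambda \sum_{x \in \Omega_{d}} \bigl( u(x) - d(x) \bigr)^{2},

where \Omega_{d} denotes the pixels with valid depth observations and \lambda is an assumed fidelity weight; the exact data term and weighting used in the paper may differ.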
