Depth maps captured by depth sensors (e.g., time-of-flight (ToF) and Kinect) often suffer from low resolution, degradation, noise, and poor quality. This paper proposes a novel model for robust depth estimation of RGB-D images through local and nonlocal manifold regularizations. The first stage, called the deep depth prior manifold (DDPM), is inspired partly by the deep depth prior (DDP) model: a deep convolutional neural network (CNN) integrated with a local manifold regularization term. The local neighboring relationships between depth pixels and the color image are employed to promote smoothing in the results. However, the Laplacian Eigenmap technique used for local manifold modeling produces an over-smoothed depth map. To improve the quality of the reconstructed image, a nonlocal manifold modeling stage is proposed, in which the similarity between the depth map and the corresponding color image is determined by characterizing their matching aspects. These objectives are aggregated within a single optimization problem. Moreover, to better extract edges by exploiting visual nonlocal characteristics, the structured low-rank Hankel approximation is adopted to eliminate depth degradations and to recover sharp, well-defined edges. Three types of degradation are handled in this work: undersampling, ToF-like, and Kinect-like degradations. Experimental results indicate that the proposed method outperforms state-of-the-art restoration techniques on standard benchmark images in terms of well-known criteria such as PSNR.
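The color-guided local manifold regularization described above can be illustrated with a minimal sketch: a graph Laplacian is built from color-image affinities between neighboring pixels, and the quadratic form d^T L d penalizes depth discontinuities wherever the color image is smooth. All function names and the `sigma` bandwidth here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def guided_laplacian(color, sigma=0.1):
    """Graph Laplacian with edge weights from color affinities between
    4-connected neighboring pixels (illustrative sketch; `sigma` is an
    assumed Gaussian bandwidth, not a value from the paper)."""
    h, w, _ = color.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    for di, dj in [(0, 1), (1, 0)]:  # right and down neighbors
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        diff = color[: h - di, : w - dj] - color[di:, dj:]
        wgt = np.exp(-(diff ** 2).sum(-1).ravel() / (2 * sigma ** 2))
        W[a, b] = wgt
        W[b, a] = wgt  # symmetric affinity matrix
    D = np.diag(W.sum(1))  # degree matrix
    return D - W           # unnormalized graph Laplacian

def manifold_penalty(depth, L):
    """Quadratic smoothness penalty d^T L d: large where depth jumps
    across pixels that the color image says are similar."""
    d = depth.ravel()
    return float(d @ L @ d)
```

A constant depth map incurs zero penalty, while depth edges that cut through color-homogeneous regions are penalized heavily, which is what drives the color-guided smoothing in the first stage.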