Abstract

The emergence of consumer depth sensors has made real-time, low-cost depth capture possible. However, the quality of their depth maps remains inadequate for many applications because of holes, noise, and artifacts in the captured depth data. In this paper, we propose an iterative depth boundary refinement framework to recover the Kinect depth map. We extract depth edges, detect incorrect regions, and then re-fill those regions until the depth edges are consistent with the color edges. For incorrect region detection, we propose an RGB-D edge detection method inspired by recent advances in deep learning. For depth in-painting, we propose a priority-determined fill order in which high-confidence pixels and strong edges are assigned high priority. The actual depth values are computed with a weighted cost filter that considers color similarity, spatial similarity, and a Gaussian error model. Experimental results demonstrate that the proposed method produces sharp, clear edges in the Kinect depth map that are well aligned with the color edges.
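The abstract only names the components of the weighted cost filter. The sketch below illustrates one plausible reading of that step for a single hole pixel, assuming a joint-bilateral-style combination of color similarity, spatial proximity, and a Gaussian depth-error term; the function name, window size, and sigma parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fill_hole_pixel(depth, color, y, x, win=5,
                    sigma_c=10.0, sigma_s=3.0, sigma_d=5.0):
    """Estimate a missing depth value at (y, x) as a weighted average of
    valid depths in a local window. Weights combine color similarity,
    spatial proximity, and a Gaussian depth-error term, in the spirit of
    the weighted cost filter described in the abstract (parameters and
    window size are illustrative assumptions, not the authors' settings)."""
    h, w = depth.shape
    y0, y1 = max(0, y - win), min(h, y + win + 1)
    x0, x1 = max(0, x - win), min(w, x + win + 1)

    ref_color = color[y, x].astype(np.float64)

    # Rough local depth estimate used by the Gaussian error term.
    window = depth[y0:y1, x0:x1]
    valid = window[window > 0]
    if valid.size == 0:
        return 0.0                      # no valid depth to fill from
    d_ref = np.median(valid)

    num, den = 0.0, 0.0
    for yy in range(y0, y1):
        for xx in range(x0, x1):
            d = depth[yy, xx]
            if d <= 0:                  # skip other hole pixels
                continue
            dc = np.linalg.norm(color[yy, xx].astype(np.float64) - ref_color)
            ds = np.hypot(yy - y, xx - x)
            w_c = np.exp(-dc * dc / (2 * sigma_c ** 2))            # color similarity
            w_s = np.exp(-ds * ds / (2 * sigma_s ** 2))            # spatial proximity
            w_e = np.exp(-(d - d_ref) ** 2 / (2 * sigma_d ** 2))   # Gaussian error model
            wgt = w_c * w_s * w_e
            num += wgt * d
            den += wgt
    return num / den if den > 0 else 0.0
```

In a full pipeline, this per-pixel estimate would be applied to hole pixels in the priority-determined order sketched in the abstract (high-confidence pixels and strong edges first), and the detect-and-refill loop would repeat until the depth edges agree with the color edges.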
