Abstract

The maturity of depth sensors and laser scanning techniques has enabled the convenient acquisition of 3D dynamic point clouds—a natural representation of 3D objects/scenes in motion—leading to a wide range of applications such as immersive tele-presence, autonomous driving, and augmented and virtual reality. Nevertheless, dynamic point clouds usually exhibit holes of missing data, so inpainting is crucial to subsequent rendering or downstream understanding tasks. Dynamic point cloud inpainting has been largely overlooked so far; it is also quite challenging due to the irregular sampling patterns in both the spatial and temporal domains. To this end, we propose an efficient dynamic point cloud inpainting method based on a learnable spatial-temporal graph representation, exploiting both the second-order inter-frame coherence and the intra-frame self-similarity. The key is the second-order inter-frame coherence, which enforces consistent flow of 3D motion over time: we search for the temporal correspondence of the same underlying surface in consecutive frames via the point-to-plane distance and represent the correlation between corresponding points via temporal edge weights in the graph. Based on the second-order inter-frame coherence and the intra-frame self-similarity, we formulate dynamic point cloud inpainting as a joint optimization problem over the desired point cloud and the underlying spatial-temporal graph, regularized by consistency in the temporal edge weights and smoothness in the spatial domain. We analyze and reformulate the optimization, leading to an efficient alternating minimization algorithm. Experimental results show that the proposed approach significantly outperforms several competing methods on both synthetic and real holes.
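To make the temporal-correspondence step concrete, the following is a minimal sketch (not the paper's implementation) of finding, for each point in the current frame, its correspondence in the previous frame by minimizing the point-to-plane distance |(p − q) · n_q|, where n_q is the surface normal at candidate point q. The function name and brute-force search are illustrative assumptions; a practical implementation would use a spatial index such as a k-d tree.

```python
import numpy as np

def point_to_plane_correspondence(frame_t, frame_prev, normals_prev):
    """For each point p in frame_t, find the point q in frame_prev whose
    tangent plane is nearest, i.e. argmin_q |(p - q) . n_q|.
    Brute-force O(N*M) search, written for clarity, not speed.

    frame_t      : (N, 3) points in the current frame
    frame_prev   : (M, 3) points in the previous frame
    normals_prev : (M, 3) unit normals of the previous frame
    Returns (indices into frame_prev, point-to-plane distances).
    """
    # Pairwise displacement vectors p - q, shape (N, M, 3).
    diffs = frame_t[:, None, :] - frame_prev[None, :, :]
    # Project each displacement onto the candidate's normal: |(p - q) . n_q|.
    dists = np.abs(np.einsum('nmk,mk->nm', diffs, normals_prev))
    idx = np.argmin(dists, axis=1)
    return idx, dists[np.arange(len(frame_t)), idx]
```

A point lying slightly off a candidate's tangent plane still matches that candidate as long as its normal-direction offset is smaller than for any other candidate, which is what makes this criterion more tolerant of tangential sliding than a plain point-to-point distance.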
