Abstract

In this article, we propose a new distortion quantification method for point clouds, the multiscale potential energy discrepancy (MPED). Currently, effective distortion quantification is lacking for a variety of point cloud perception tasks. In human vision tasks, a distortion quantification method is used to predict human subjective scores and to guide parameter selection in applications such as dense point cloud compression and enhancement. In machine vision tasks, a distortion quantification method usually serves as a loss function that guides the training of deep neural networks for unsupervised learning tasks (e.g., sparse point cloud reconstruction, completion, and upsampling). An effective distortion quantification method should therefore be differentiable, discriminative of distortion, and computationally inexpensive; however, existing methods fail to satisfy all three conditions simultaneously. To fill this gap, we propose a new point cloud feature description method, the point potential energy (PPE), inspired by classical physics. We regard a point cloud as a system that possesses potential energy, and distortion changes the system's total potential energy. By evaluating neighborhoods of various sizes, the proposed MPED achieves a global-local tradeoff, capturing distortion in a multiscale fashion. We further show theoretically that the classical Chamfer distance is a special case of MPED. Extensive experiments show that MPED is superior to current methods on both human and machine perception tasks. Our code is available at https://github.com/Qi-Yangsjtu/MPED.
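
Since the abstract states that the classical Chamfer distance is a special case of MPED, a minimal NumPy sketch of the Chamfer distance is given below for concreteness. It follows one common convention (squared nearest-neighbor distances averaged in both directions); it is an illustration only, not the authors' MPED implementation, which is available in the repository linked above.

```python
import numpy as np

def chamfer_distance(p, q):
    """Classical Chamfer distance between point clouds p (N, 3) and q (M, 3).

    Per the article, this measure is a special case of MPED; the full
    multiscale method is implemented in the linked repository.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # For each point, the squared distance to its nearest neighbor in the
    # other cloud, averaged over both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Example: a small random cloud and a jittered (distorted) copy.
rng = np.random.default_rng(0)
original = rng.random((128, 3))
distorted = original + 0.01 * rng.standard_normal((128, 3))
print(chamfer_distance(original, distorted))
```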
