Abstract

Point Cloud Compression (PCC) algorithms can be roughly categorized into: (i) traditional Signal-Processing (SP) based and, more recently, (ii) Machine-Learning (ML) based. PCC algorithms are often evaluated with very different datasets, metrics, and parameters, which in turn makes the evaluation results hard to interpret. In this paper, we propose an open-source benchmark, called PCC Arena, which consists of several point cloud datasets, a suite of performance metrics, and a unified procedure. To demonstrate its practicality, we employ PCC Arena to evaluate three SP-based and one ML-based PCC algorithm. We also conduct a user study to quantify the user experience of rendered objects reconstructed by different PCC algorithms. Our evaluations reveal several interesting insights. For example, SP-based PCC algorithms have diverse design objectives and strike different trade-offs between coding efficiency and time complexity. Furthermore, although ML-based PCC algorithms are quite promising, they may suffer from long running time, poor scalability to diverse point cloud densities, and high engineering complexity. Nonetheless, ML-based PCC algorithms are worthy of more in-depth studies, and PCC Arena will play a critical role in follow-up research by providing more interpretable and comparable evaluation results.
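
The abstract does not enumerate the performance metrics in PCC Arena, but geometry-distortion measures such as the symmetric point-to-point (D1) error and its PSNR are standard in PCC evaluation. The sketch below is only an illustration of that kind of metric, not the benchmark's actual implementation; the function names and the bounding-box-diagonal peak value are assumptions made for this example.

```python
# Illustrative sketch (not PCC Arena's code): symmetric point-to-point (D1)
# geometry error between an original point cloud and its reconstruction.
import numpy as np
from scipy.spatial import cKDTree


def point_to_point_mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Symmetric mean squared nearest-neighbor distance between two (N, 3) arrays."""
    d_orig_to_rec, _ = cKDTree(reconstructed).query(original)   # original -> reconstruction
    d_rec_to_orig, _ = cKDTree(original).query(reconstructed)   # reconstruction -> original
    return max(float(np.mean(d_orig_to_rec ** 2)), float(np.mean(d_rec_to_orig ** 2)))


def geometry_psnr(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Geometry PSNR in dB; the peak is set to the bounding-box diagonal (a common, assumed choice)."""
    peak = float(np.linalg.norm(original.max(axis=0) - original.min(axis=0)))
    mse = point_to_point_mse(original, reconstructed)
    return 10.0 * np.log10(peak ** 2 / mse)
```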
