Abstract

Dynamic point clouds are widely used in 3D vision applications such as virtual reality and telepresence. Because of their enormous data volume, dynamic point cloud compression is a key enabling technology for these applications. The state-of-the-art compression scheme, video-based point cloud compression (V-PCC), generates 2D videos whose temporal correlation is weakened by the patch segmentation and packing process, which degrades compression efficiency. In this paper, we propose a Packing with Patch Correlation Improvement (PPCI) algorithm that adaptively removes the uncorrelated parts between matched patches during packing to improve inter-prediction performance. We first propose a basic unidirectional patch re-segmentation operator that removes the parts of the patches in the current point cloud that are uncorrelated with the patches in its reference point cloud; the removed parts are formed into new patches and added to the patch collection of the current point cloud. We then propose a back-and-forth structure, a combination of several basic re-segmentation operators, to bilaterally remove the uncorrelated parts of matched patches within a back-and-forth (BF) unit. Furthermore, we propose a framework that adaptively determines the best length of each BF unit in a point cloud sequence. Experimental results show that our method achieves noticeable bitrate savings compared with existing V-PCC packing methods, particularly for sequences with small motion.
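To illustrate the flavor of the unidirectional re-segmentation operator described above, the sketch below splits each matched patch of the current frame into a part correlated with its reference patch and an uncorrelated remainder that becomes a new patch. This is a hypothetical simplification, not the paper's actual method: the abstract does not specify the correlation criterion, so the sketch assumes patches are plain 3D point arrays and treats a point as uncorrelated when no reference point lies within a distance threshold; the function names (`resegment_patch`, `resegment_frame`) and the `dist_thresh` parameter are my own.

```python
import numpy as np

def resegment_patch(cur_patch, ref_patch, dist_thresh=2.0):
    """Unidirectional re-segmentation sketch (hypothetical simplification).

    Splits the current patch into the part correlated with the reference
    patch (points that have a reference point within dist_thresh) and the
    uncorrelated remainder, which would be packed as a new patch.
    cur_patch: (N, 3) array, ref_patch: (M, 3) array.
    """
    # Distance from every current point to its nearest reference point.
    d = np.linalg.norm(cur_patch[:, None, :] - ref_patch[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    correlated = cur_patch[nearest <= dist_thresh]
    uncorrelated = cur_patch[nearest > dist_thresh]  # candidate new patch
    return correlated, uncorrelated

def resegment_frame(cur_patches, ref_patches, matches, dist_thresh=2.0):
    """Apply the operator to every matched patch pair of the current frame.

    matches: list of (cur_idx, ref_idx) pairs.
    Returns the updated patch collection of the current frame, with the
    removed parts appended as new patches.
    """
    out = list(cur_patches)
    for ci, ri in matches:
        kept, removed = resegment_patch(cur_patches[ci], ref_patches[ri], dist_thresh)
        out[ci] = kept
        if len(removed):
            out.append(removed)  # removed part becomes a new patch
    return out
```

Under this reading, the back-and-forth structure would simply apply such an operator in both directions between the frames of a BF unit, so that each side keeps only the regions its counterpart can predict.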
