Abstract

Digital representation of 3D content in the form of 3D point clouds (PCs) has gained increasing interest and has emerged in various computer vision applications. However, various degradations may be introduced to a PC during the acquisition, transmission, or processing steps of the 3D pipeline. Therefore, several Full-Reference, Reduced-Reference, and No-Reference metrics have been proposed to estimate the visual quality of PCs. Full-Reference and Reduced-Reference metrics require reference information, which is not accessible in real-world applications, while No-Reference metrics still lack precision in evaluating PC quality. In this context, we propose a novel deep learning-based method for No-Reference Point Cloud Quality Assessment (NR-PCQA) that automatically predicts the perceived visual quality of a PC without using the reference content. More specifically, to imitate the human visual system, which captures both geometric and color degradations during quality evaluation, we render the PC into different 2D views using a perspective projection. The projected 2D views are then divided into patches that are fed to a Convolutional Neural Network (CNN) to learn discriminative visual quality features for evaluating the local quality of each patch. Finally, the overall quality score of the PC is obtained by pooling the patch quality scores. We conduct extensive experiments on three benchmark databases: ICIP2020, SJTU, and WPC, and we compare the proposed model to existing Full-Reference, Reduced-Reference, and No-Reference state-of-the-art methods. Based on the experimental results, our proposed model achieves high correlations with the subjective quality scores and outperforms the state-of-the-art methods.
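To make the patch-based pipeline concrete, the following is a minimal sketch of the projection-patch-pool stages described above. The abstract does not specify the CNN architecture, patch size, or pooling rule, so the small backbone, the 64x64 non-overlapping patches, the six rendered views, and simple mean pooling below are all illustrative assumptions, as are the function names.

```python
import torch
import torch.nn as nn


class PatchQualityCNN(nn.Module):
    """Regresses a quality score for one 2D patch.

    Placeholder backbone; the paper's actual architecture is not
    given in the abstract.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(64, 1)  # one quality score per patch

    def forward(self, patches):  # patches: (N, 3, H, W)
        f = self.features(patches).flatten(1)
        return self.regressor(f).squeeze(1)  # (N,) patch scores


def extract_patches(view, patch_size=64):
    """Split one rendered 2D view (3, H, W) into non-overlapping patches."""
    c, h, w = view.shape
    view = view[:, : h - h % patch_size, : w - w % patch_size]
    p = view.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    # (3, nH, nW, p, p) -> (nH * nW, 3, p, p)
    return p.permute(1, 2, 0, 3, 4).reshape(-1, c, patch_size, patch_size)


def predict_quality(views, model):
    """Pool per-patch scores over all projected views into one PC score.

    Mean pooling is assumed here; the paper may use another pooling rule.
    """
    with torch.no_grad():
        scores = [model(extract_patches(v)) for v in views]
    return torch.cat(scores).mean().item()


if __name__ == "__main__":
    model = PatchQualityCNN().eval()
    # Stand-ins for six 2D views rendered from the PC by perspective projection.
    views = [torch.rand(3, 256, 256) for _ in range(6)]
    print(predict_quality(views, model))
```

In this sketch, each rendered view contributes its own set of patch scores, and the final PC score is the average over every patch of every view; a learned or weighted pooling over views would slot in at the same place.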
