Abstract

In recent years, 3D imaging technologies have advanced considerably, enabling more faithful representations of the physical world. In particular, these advances have driven the production of plenoptic devices that can capture and display visual content. Such devices represent visual data using an approximation of the plenoptic illumination function, which can describe visible objects at any position and from any point of view in 3D space. Depending on the capturing device, this approximation can correspond to the hologram, light field, or Point Cloud (PC) imaging formats. Among these formats, PCs have become very popular for a wide range of applications, such as immersive virtual reality scenarios. As a consequence, the research community has devoted considerable effort in recent years to developing novel acquisition, representation, compression, and transmission solutions for PC contents. In particular, the development of objective quality assessment methods that are able to predict the perceptual quality of PCs has attracted a lot of attention. In this paper, we present an effective framework for assessing the quality of PCs, based on descriptors that extract geometry-aware texture information from PC contents. In this framework, the statistics of the extracted information are used to model the PC visual quality. We also present the research and experiments carried out to determine the most appropriate distance metrics and regression methods to be used together with the proposed descriptors. Experimental results show that the proposed framework exhibits good and robust performance when compared with several state-of-the-art Point Cloud Quality Assessment (PCQA) methods. A C++ implementation of the metrics described in this paper can be found at https://gitlab.com/gpds-unb/pc_metric.
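
The abstract describes a pipeline with four stages: geometry-aware texture descriptors computed on the PC, statistical pooling of the descriptor values, a distance between the statistics of the reference and the degraded content, and a regression that maps this distance to a quality score. The C++ sketch below only illustrates that general flow; the local-luminance-mean descriptor, the Euclidean distance, and the linear regression used here are placeholder choices for illustration, not the descriptors, distance metrics, or regression models defined in the paper (the actual implementation is in the linked repository).

    // Illustrative sketch of a descriptor/statistics/distance/regression PCQA
    // pipeline. All modeling choices below are hypothetical placeholders.
    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    struct Point {
        double x, y, z;   // geometry
        double luminance; // texture attribute (e.g., Y channel)
    };

    // Hypothetical geometry-aware texture descriptor: for each point, average
    // the luminance of the points lying within a spherical neighborhood of
    // radius r (brute-force search; a real implementation would use a k-d tree).
    std::vector<double> localLuminanceMean(const std::vector<Point>& pc, double r) {
        std::vector<double> desc(pc.size(), 0.0);
        for (std::size_t i = 0; i < pc.size(); ++i) {
            double sum = 0.0;
            int count = 0;
            for (std::size_t j = 0; j < pc.size(); ++j) {
                const double dx = pc[i].x - pc[j].x;
                const double dy = pc[i].y - pc[j].y;
                const double dz = pc[i].z - pc[j].z;
                if (dx * dx + dy * dy + dz * dz <= r * r) {
                    sum += pc[j].luminance;
                    ++count;
                }
            }
            desc[i] = (count > 0) ? sum / count : 0.0;
        }
        return desc;
    }

    // Pool the descriptor values into simple statistics (mean, standard
    // deviation); assumes a non-empty point cloud.
    std::pair<double, double> statistics(const std::vector<double>& v) {
        double mean = 0.0;
        for (double d : v) mean += d;
        mean /= static_cast<double>(v.size());
        double var = 0.0;
        for (double d : v) var += (d - mean) * (d - mean);
        var /= static_cast<double>(v.size());
        return {mean, std::sqrt(var)};
    }

    // Euclidean distance between the pooled statistics of the reference and
    // the test content, mapped to a quality score by a placeholder linear
    // regression with coefficients a, b fitted on a subjective PCQA dataset.
    double predictQuality(const std::vector<Point>& reference,
                          const std::vector<Point>& test,
                          double radius, double a, double b) {
        auto [mr, sr] = statistics(localLuminanceMean(reference, radius));
        auto [mt, st] = statistics(localLuminanceMean(test, radius));
        const double dist = std::sqrt((mr - mt) * (mr - mt) + (sr - st) * (sr - st));
        return a * dist + b;
    }

    int main() {
        const std::vector<Point> reference = {{0, 0, 0, 100}, {1, 0, 0, 120}, {0, 1, 0, 110}};
        const std::vector<Point> distorted = {{0, 0, 0, 90},  {1, 0, 0, 130}, {0, 1, 0, 105}};
        // Hypothetical neighborhood radius and regression coefficients.
        std::cout << predictQuality(reference, distorted, 1.5, -0.1, 5.0) << "\n";
        return 0;
    }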
