Abstract

Existing blind stereoscopic 3D (S3D) image quality assessment (IQA) metrics usually rely on supervised learning to predict S3D image quality, which limits their applicability in practice. In this paper, we propose an unsupervised blind S3D IQA metric that exploits joint spatial and frequency representations of visual perception. The proposed metric is inspired by binocular visual mechanisms and, being unsupervised, requires no subject-rated samples for training. Specifically, binocular quality-aware features in the spatial and frequency domains are first extracted from the monocular and cyclopean views of natural S3D image patches. These features are then used to fit a pristine multivariate Gaussian (MVG) model that characterizes natural S3D image regularities. Finally, with the learned MVG model, the quality score of a distorted S3D image is computed as a Bhattacharyya-like distance. Experimental results show that the proposed metric achieves prediction performance competitive with related existing metrics.
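
To make the last two steps concrete, the sketch below illustrates the generic MVG-based scoring the abstract alludes to: fitting a multivariate Gaussian to patch-level feature vectors and scoring a distorted image by a Bhattacharyya-like distance computed with a pooled covariance. The feature dimensionality, the random placeholder features, and the exact form of the distance are illustrative assumptions, not the paper's implementation or feature set.

```python
import numpy as np

def fit_mvg(features):
    """Fit a multivariate Gaussian (mean vector, covariance matrix)
    to an (n_patches x n_features) matrix of quality-aware features."""
    mu = features.mean(axis=0)
    sigma = np.cov(features, rowvar=False)
    return mu, sigma

def bhattacharyya_like_distance(mu1, sigma1, mu2, sigma2):
    """Distance between the pristine MVG (mu1, sigma1) and the MVG
    fitted to a distorted image's features (mu2, sigma2), using the
    pooled covariance (sigma1 + sigma2) / 2 (an assumed form)."""
    diff = mu1 - mu2
    pooled = (sigma1 + sigma2) / 2.0
    # Pseudo-inverse guards against a singular pooled covariance.
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))

# Hypothetical usage with random placeholder features:
# a larger distance corresponds to lower predicted quality.
rng = np.random.default_rng(0)
pristine_feats = rng.normal(size=(5000, 36))   # features from natural S3D patches
distorted_feats = rng.normal(size=(400, 36))   # features from one test S3D image
mu_p, sig_p = fit_mvg(pristine_feats)
mu_d, sig_d = fit_mvg(distorted_feats)
quality_score = bhattacharyya_like_distance(mu_p, sig_p, mu_d, sig_d)
```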
