Abstract

Perceptual quality assessment of stereoscopic images is a challenging problem in three-dimensional video systems. Existing studies suggest that simply averaging the quality of the left and right views predicts the quality of symmetrically distorted stereoscopic images well, but this approach deviates noticeably for asymmetrically distorted images. Moreover, most previous stereoscopic image quality assessment (SIQA) methods have been based only on the luminance component of the images, whereas human visual perception, which is fundamental to image quality assessment, is widely considered to operate on low-dimensional manifolds. Inspired by this, a new perceptual SIQA method is proposed that consists of two stages: a training stage and a quality prediction stage. In the training stage, the authors apply Tucker decomposition to RGB images to reduce the dimensionality along the colour channels and construct the training set, and a projection matrix is then obtained through manifold learning. In the quality prediction stage, accounting for the binocular characteristics of visual perception, the overall stereoscopic quality estimate combines monocular image quality, pooled with a local energy ratio based weighting strategy, with cyclopean-image-based binocular quality. Extensive experiments on three publicly available benchmark databases demonstrate that the proposed metric outperforms state-of-the-art SIQA metrics and aligns closely with subjective assessment.
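The colour-channel reduction mentioned in the abstract can be illustrated with a short sketch. The snippet below performs a Tucker-style (HOSVD) projection that compresses only the colour mode of an RGB image while leaving the spatial modes intact; it is a minimal NumPy illustration under assumed details (the function name `colour_tucker_reduce` and a single output channel are illustrative choices), not the authors' implementation.

```python
import numpy as np

def colour_tucker_reduce(rgb, out_channels=1):
    """Reduce an H x W x 3 image along the colour mode via a mode-3
    (colour) Tucker/HOSVD projection.  Illustrative sketch only; the
    function and parameter names are assumptions, not the paper's code."""
    h, w, c = rgb.shape
    # Mode-3 unfolding: each row is one colour channel flattened over pixels.
    unfold_c = rgb.reshape(h * w, c).T              # shape (3, H*W)
    # Leading left singular vectors give the colour-mode factor matrix.
    u, _, _ = np.linalg.svd(unfold_c, full_matrices=False)
    factor = u[:, :out_channels]                    # shape (3, out_channels)
    # Project pixels onto the reduced colour subspace; this is the core of a
    # Tucker decomposition that compresses only the colour mode.
    core = rgb.reshape(h * w, c) @ factor           # shape (H*W, out_channels)
    return core.reshape(h, w, out_channels)

# Example: collapse a random RGB image to one structure-preserving channel.
img = np.random.rand(64, 64, 3)
reduced = colour_tucker_reduce(img, out_channels=1)
print(reduced.shape)  # (64, 64, 1)
```

In this reading, the reduced single-channel image would serve as input to the manifold-learning step that produces the projection matrix, although the exact pipeline details are specified in the full paper rather than the abstract.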
