Abstract

Quality assessment of stereoscopic 3D images is considerably more complex than that of 2D images. Existing quality assessment metrics for 3D images achieve good performance on symmetrically distorted 3D images but perform poorly on asymmetrically distorted ones. To improve perceptual consistency across both symmetrically and asymmetrically distorted 3D images, this paper presents a machine learning-based full-reference 3D image quality assessment method that learns from multiple IQA metrics. Because changes in the left view, the right view and the depth information all affect human perception, our method takes the quality of both views and of the depth information into account. Since symmetric and asymmetric distortions affect the left and right views of a 3D image differently, we propose distinct features for symmetrically and asymmetrically distorted 3D images. Experimental results show that our method outperforms both 2D-based 3D quality assessment metrics and state-of-the-art metrics designed specifically for 3D images.
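
The abstract does not detail the fusion step, so the following is only a minimal sketch of the general idea it describes: per-view quality scores and a depth-map score are combined by a learned regressor to predict the overall 3D quality. PSNR as the stand-in 2D metric, the SVR regressor, the feature layout (per-view scores plus their absolute difference as an asymmetry cue) and all hyperparameters are illustrative assumptions, not the paper's actual design.

import numpy as np
from sklearn.svm import SVR

def psnr(ref, dist):
    # Peak signal-to-noise ratio, used here as a stand-in 2D IQA metric.
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 100.0 if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def feature_vector(ref_l, dist_l, ref_r, dist_r, ref_d, dist_d):
    # Quality features for the left view, right view and depth map.
    # Asymmetric distortion shows up as a gap between the two view scores.
    q_l = psnr(ref_l, dist_l)
    q_r = psnr(ref_r, dist_r)
    q_d = psnr(ref_d, dist_d)
    return [q_l, q_r, q_d, abs(q_l - q_r)]  # the last term is an asymmetry cue

# Synthetic training data: stereo pairs plus depth maps with independently
# chosen noise levels per view (hence possibly asymmetric distortion) and
# mock subjective scores (MOS), for illustration only.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(50):
    ref = rng.integers(0, 256, (64, 64, 3)).astype(np.uint8)  # reused for both views to keep the sketch short
    noise_l, noise_r = rng.uniform(0, 30, size=2)
    dist_l = np.clip(ref + rng.normal(0, noise_l, ref.shape), 0, 255)
    dist_r = np.clip(ref + rng.normal(0, noise_r, ref.shape), 0, 255)
    ref_d = rng.integers(0, 256, (64, 64)).astype(np.uint8)
    dist_d = np.clip(ref_d + rng.normal(0, (noise_l + noise_r) / 2, ref_d.shape), 0, 255)
    X.append(feature_vector(ref, dist_l, ref, dist_r, ref_d, dist_d))
    y.append(100.0 - (noise_l + noise_r))  # mock MOS

model = SVR(kernel="rbf", C=10.0).fit(X, y)  # learned fusion of the metric scores
print(model.predict(X[:3]))                  # predicted quality scores

In the actual method the regressor would be trained on subjective scores from a 3D IQA database rather than the mock labels used above.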
