Abstract

Stereoscopic 3D (S3D) visual quality prediction (VQP) aims to estimate human-perceived visual quality of S3D images accurately and automatically. Compared with 2D VQP, quality prediction for S3D images is more difficult owing to the complex mechanisms of binocular vision. In this study, inspired by the binocular fusion and competition of the binocular visual system (BVS), we designed a blind deep visual quality predictor for S3D images. The proposed predictor is a multi-layer fusion network that fuses features from different network levels. The left- and right-view sub-networks share the same structure and parameters, and the network predicts a weight and a quality score for each left- and right-view patch of an S3D image. Furthermore, training on patches containing more saliency information improves prediction accuracy and makes the predictor more robust. The LIVE 3D Phase I and II datasets were used to evaluate the proposed predictor. The results demonstrate that the proposed predictor outperforms most existing predictors on both asymmetrically and symmetrically distorted S3D images.
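To make the described architecture concrete, the following is a minimal PyTorch-style sketch of the idea: a shared (siamese) sub-network processes the left and right patches, features from two levels are fused, and two heads predict a per-patch quality score and a per-patch weight that are combined into an image-level score. All layer sizes, fusion operators, and function names here are assumptions for illustration; the abstract does not specify the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchCNN(nn.Module):
    """Sub-network applied to both views; layer sizes are hypothetical."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        f1 = self.block1(x)   # lower-level features
        f2 = self.block2(f1)  # higher-level features
        return f1, f2


class StereoQualityNet(nn.Module):
    """Multi-level fusion of left/right features with weight/quality heads."""
    def __init__(self):
        super().__init__()
        # One branch instance -> same structure AND parameters for both views.
        self.branch = BranchCNN()
        self.fuse1 = nn.Conv2d(64, 32, 1)    # fuse low-level L/R features
        self.fuse2 = nn.Conv2d(128, 64, 1)   # fuse high-level L/R features
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(96, 64), nn.ReLU())
        self.quality = nn.Linear(64, 1)      # per-patch quality score
        self.weight = nn.Linear(64, 1)       # per-patch importance weight

    def forward(self, left, right):
        l1, l2 = self.branch(left)
        r1, r2 = self.branch(right)          # parameters shared with left view
        g1 = self.fuse1(torch.cat([l1, r1], dim=1))
        g2 = self.fuse2(torch.cat([l2, r2], dim=1))
        # Match spatial sizes before concatenating the two fusion levels.
        g1 = F.adaptive_avg_pool2d(g1, g2.shape[-2:])
        h = self.head(torch.cat([g1, g2], dim=1))
        return self.quality(h), torch.sigmoid(self.weight(h))


def image_quality(model, left_patches, right_patches):
    """Weight-normalized average of patch qualities -> image-level score."""
    q, w = model(left_patches, right_patches)
    return (w * q).sum() / w.sum().clamp_min(1e-8)
```

The weighted aggregation in `image_quality` reflects the abstract's idea that patches carrying more (e.g., saliency) information should contribute more to the final prediction; the sigmoid weight head is one plausible way to realize that, not necessarily the paper's choice.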
