Abstract
With the widespread adoption of virtual reality and 360-degree video, there is a pressing need for objective metrics that reliably assess quality in this immersive panoramic format. However, existing image quality assessment models developed for traditional fixed-viewpoint content do not fully account for the perceptual characteristics specific to 360-degree viewing. This paper proposes a full-reference quality assessment (FR-IQA) method for 360-degree images based on a multi-channel architecture. The proposed method estimates the quality of a distorted image using two easily obtained image features, saliency and depth awareness, and a convolutional neural network (CNN) is designed for training. Furthermore, the method predicts user viewing behavior within 360-degree images, which further benefits the multi-channel CNN architecture and enables weighted average pooling of the predicted FR-IQA scores. Performance is evaluated on publicly available databases; in both standard and cross-database evaluation experiments, the proposed multi-channel model outperforms other state-of-the-art methods. Moreover, an ablation study demonstrates good generalization ability and robustness.
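The weighted average pooling mentioned above combines per-channel quality predictions using viewing-behavior weights. A minimal sketch of this idea follows; the function name, the inputs, and the normalization step are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pooled_quality(viewport_scores, saliency_weights):
    """Pool per-viewport FR-IQA scores into a single global score.

    viewport_scores: predicted quality score for each viewport/channel
    (hypothetical output of the multi-channel CNN).
    saliency_weights: predicted viewing likelihood for each viewport
    (hypothetical output of a saliency model); higher weight means
    the viewport contributes more to the final score.
    """
    scores = np.asarray(viewport_scores, dtype=float)
    weights = np.asarray(saliency_weights, dtype=float)
    weights = weights / weights.sum()  # normalize weights to sum to 1
    return float(np.dot(weights, scores))  # weighted average of scores

# Example: three viewports, the most-viewed viewport dominates the result.
score = pooled_quality([0.8, 0.6, 0.4], [0.6, 0.3, 0.1])  # 0.70
```

This weighting reflects the intuition that distortions in regions users rarely look at should affect the overall quality score less than distortions in frequently viewed regions.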
International Journal of Pattern Recognition and Artificial Intelligence