Abstract

With the rapid development of virtual reality (VR) technologies, quality assessment of 360-degree images has become increasingly urgent. Unlike traditional 2D images, distortion is not evenly distributed across a 360-degree image; for example, the projection distortion in the polar regions of the equirectangular projection (ERP) is more severe than in other regions. Thus, traditional 2D quality models cannot be directly applied to 360-degree images. In this paper, we propose a saliency-guided CNN model for blind 360-degree image quality assessment (SG360BIQA), which is mainly composed of a saliency prediction network (SP-Net) and a feature extraction network (F-Net). By training the whole network with the two sub-networks together, more discriminative features can be extracted and the mapping from feature representations to quality scores can be established more accurately. In addition, because no sufficiently large 360-degree image database exists, we initialize with a pre-trained network model instead of random parameters to overcome this limitation. Experimental results on two public 360-IQA databases demonstrate that our proposed model outperforms state-of-the-art full-reference and no-reference IQA metrics in terms of generalization ability and evaluation accuracy.
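The uneven distortion mentioned above can be made concrete: in an ERP image, every pixel row spans the full 360 degrees of longitude, so the spherical surface area represented by a row shrinks with the cosine of its latitude, and rows near the poles are heavily oversampled. The sketch below (an illustrative helper, not part of the proposed SG360BIQA model; the function name and weighting scheme are our own, following the cosine-latitude weighting commonly used in spherically weighted metrics) computes these per-row area weights:

```python
import math

def erp_row_weights(height):
    """Per-row spherical area weights for an ERP image of the given height.

    Each row of an equirectangular projection covers the full longitude
    range, so the true surface area a row represents scales with
    cos(latitude). Weights near 0 at the poles reflect the severe
    oversampling (projection distortion) in the polar regions.
    """
    weights = []
    for row in range(height):
        # Latitude of the row center, in radians: ~+pi/2 at top, ~-pi/2 at bottom.
        latitude = math.pi * (0.5 - (row + 0.5) / height)
        weights.append(math.cos(latitude))
    return weights
```

For an 8-row image, the weights are close to 1 for the equatorial rows and drop toward 0 for the polar rows, which is why equal-weight 2D quality models mis-estimate the perceptual impact of distortion in ERP images.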
