Abstract

Image quality assessment (IQA) is important for both end-users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE). Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; they therefore perform poorly on in-the-wild images, which are ubiquitous in practical applications. In this paper, motivated by the observation that perceptual visual quality is affected by both low-level visual features and high-level semantic information, we propose an effective BIQA model for in-the-wild images that exploits rich features extracted from a convolutional neural network (CNN). Specifically, we propose a staircase structure that hierarchically integrates features from intermediate layers of the CNN into a quality-aware feature representation, which enables the model to make full use of visual information from low-level to high-level and makes it better suited to the in-the-wild IQA task. Experimental results show that the proposed model outperforms other state-of-the-art BIQA models on six in-the-wild IQA databases by a large margin. Moreover, the proposed model is flexible: its backbone can be replaced with popular CNN models to meet the various needs of practical applications.
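As a rough illustration of the idea (not the authors' implementation), a staircase-style hierarchical fusion can be sketched in NumPy: each earlier stage's features are spatially pooled to match the next stage's resolution and concatenated along the channel axis, so low-level information is carried all the way down to the high-level representation. The stage shapes below are hypothetical, mimicking a typical CNN backbone where each stage halves the spatial resolution.

```python
import numpy as np

def avg_pool2x(x):
    # x: (C, H, W) feature map; average-pool spatially by a factor of 2.
    C, H, W = x.shape
    return x.reshape(C, H // 2, 2, W // 2, 2).mean(axis=(2, 4))

def staircase_fuse(stages):
    # stages: list of (C_i, H_i, W_i) feature maps from intermediate
    # CNN layers, each at half the resolution of the previous one.
    # Pool the running fusion to the next stage's resolution and
    # concatenate channel-wise, step by step (the "staircase").
    fused = stages[0]
    for f in stages[1:]:
        fused = np.concatenate([avg_pool2x(fused), f], axis=0)
    # Global average pooling yields the quality-aware feature vector,
    # which a regression head could then map to a quality score.
    return fused.mean(axis=(1, 2))

rng = np.random.default_rng(0)
# Hypothetical stage outputs: (channels, size, size) per backbone stage.
stages = [rng.standard_normal((c, s, s))
          for c, s in [(64, 32), (128, 16), (256, 8), (512, 4)]]
feat = staircase_fuse(stages)
print(feat.shape)  # (960,) = 64 + 128 + 256 + 512 channels
```

In this sketch the final vector preserves contributions from every stage, which matches the abstract's goal of combining low-level and high-level visual information in one representation.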
