Abstract

Image quality assessment (IQA) is one of the fundamental techniques in image processing and is widely used in many computer vision and image processing applications. In this paper, we propose a novel visual-saliency-based blind IQA model that combines properties of the human visual system (HVS) with features extracted by a deep convolutional neural network (CNN). The proposed model is entirely data-driven and uses no hand-crafted features. Instead of feeding the model with patches selected randomly from images, we introduce a salient object detection algorithm to compute regions of interest, which serve as training data. Experimental results on the LIVE and CSIQ databases demonstrate that our approach outperforms the state-of-the-art methods compared.
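The following is a minimal sketch of the saliency-guided patch selection idea described above, not the authors' implementation. The gradient-magnitude saliency proxy, the patch size, and the keep ratio are assumptions standing in for the salient object detection algorithm and training setup used in the paper.

```python
import numpy as np


def saliency_proxy(gray: np.ndarray) -> np.ndarray:
    """Crude saliency stand-in: local gradient magnitude, normalized to [0, 1].
    A real salient object detector would replace this step."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    return (mag - mag.min()) / (mag.max() - mag.min() + 1e-12)


def select_salient_patches(gray: np.ndarray, patch: int = 32, keep_ratio: float = 0.25):
    """Cut the image into non-overlapping patches and keep the most salient ones,
    which would then be fed to the CNN as training data instead of random patches."""
    sal = saliency_proxy(gray)
    h, w = gray.shape
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            score = sal[y:y + patch, x:x + patch].mean()
            scored.append((score, gray[y:y + patch, x:x + patch]))
    scored.sort(key=lambda t: t[0], reverse=True)
    n_keep = max(1, int(len(scored) * keep_ratio))
    return [p for _, p in scored[:n_keep]]


if __name__ == "__main__":
    img = np.random.rand(256, 256)  # stand-in for a (distorted) test image
    patches = select_salient_patches(img)
    print(f"kept {len(patches)} salient patches of shape {patches[0].shape}")
```

In this sketch, ranking patches by mean saliency approximates the paper's motivation: regions that attract visual attention contribute more to perceived quality, so training the CNN on them rather than on random patches better reflects HVS behavior.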
