Abstract

Image quality assessment (IQA), as one of the fundamental techniques in image processing, is widely used in many computer vision and image processing applications. In this paper, we propose a novel visual-saliency-based blind IQA model, which combines properties of the human visual system (HVS) with features extracted by a deep convolutional neural network (CNN). The proposed model is entirely data-driven and requires no hand-crafted features. Instead of feeding the model with patches selected randomly from images, we introduce a salient object detection algorithm to compute regions of interest, which serve as training data. Experimental results on the LIVE and CSIQ databases demonstrate that our approach outperforms the state-of-the-art methods compared.
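As a rough illustration of the saliency-guided patch selection described above (a sketch, not the authors' implementation), the following Python snippet crops fixed-size training patches centered on the most salient locations of a precomputed saliency map. The function name, patch size, and patch count are assumptions chosen for the example; the saliency map could come from any salient object detection algorithm.

```python
import numpy as np

def select_salient_patches(image, saliency_map, patch_size=32, num_patches=32):
    """Pick training patches centered on the most salient locations.

    image:        H x W x C array (e.g. an RGB image).
    saliency_map: H x W array, higher values mean more salient pixels.
    Returns an array of shape (num_patches, patch_size, patch_size, C).
    """
    h, w = saliency_map.shape
    half = patch_size // 2

    # Score each candidate center by the average saliency over its patch,
    # computed with a summed-area table so the score reflects the whole
    # region rather than a single pixel.
    pad = np.pad(saliency_map, half, mode="edge")
    integral = pad.cumsum(0).cumsum(1)
    scores = (integral[patch_size:, patch_size:]
              - integral[:-patch_size, patch_size:]
              - integral[patch_size:, :-patch_size]
              + integral[:-patch_size, :-patch_size]) / (patch_size ** 2)

    # Discard centers whose patch would fall outside the image.
    valid = np.full_like(scores, -np.inf)
    valid[half:h - half, half:w - half] = scores[half:h - half, half:w - half]

    # Take the top-scoring centers and crop the patches around them.
    top = np.argsort(valid.ravel())[::-1][:num_patches]
    ys, xs = np.unravel_index(top, valid.shape)
    patches = [image[y - half:y + half, x - half:x + half]
               for y, x in zip(ys, xs)]
    return np.stack(patches)
```

In practice, one would also suppress heavily overlapping centers (for example with a sampling stride or non-maximum suppression on the score map) before passing the crops to the CNN for training.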
