Abstract

Deep learning has been widely used for image quality assessment (IQA). When designing a convolutional neural network (CNN) model for IQA, researchers usually take image patches as the network input and predict a quality score for each patch. Beyond the design of the convolution layers, CNN-based models must also settle two important issues that strongly affect the quality evaluation results. The first is the ground-truth assignment of image patches in the training set; the second is the pooling strategy that fuses the scores of all patches into a final quality score for the image. In this paper, we propose a new IQA model based on visual saliency and gradient features. In particular, we use the visual saliency map as a weighting map to adjust each patch's label. Moreover, after obtaining the predicted patch scores from the CNN-IQA model, we combine them according to the image gradient to obtain the final image score. The framework is trained on the TID2013 dataset and shows state-of-the-art performance on the LIVE and TID2008 datasets.
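The two issues named above, saliency-weighted label assignment and gradient-weighted pooling, can be sketched as follows. This is a minimal illustration under assumed conventions: the function names, the patch size, and the exact weighting formulas (mean saliency relative to the image mean for labels, mean gradient magnitude for pooling weights) are hypothetical, since the abstract only describes the idea at a high level.

```python
import numpy as np

def saliency_weighted_labels(image_score, saliency_map, patch_size=32):
    """Assign each training patch a label by scaling the image-level
    score with the patch's mean saliency relative to the image mean.
    (Illustrative scheme, not necessarily the paper's exact formula.)"""
    h, w = saliency_map.shape
    labels = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            s = saliency_map[y:y + patch_size, x:x + patch_size].mean()
            labels.append(image_score * s / (saliency_map.mean() + 1e-8))
    return np.array(labels)

def gradient_weighted_pooling(patch_scores, gradient_map, patch_size=32):
    """Fuse predicted patch scores into one image score, weighting each
    patch by its mean gradient magnitude (assumed pooling rule)."""
    h, w = gradient_map.shape
    weights = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            weights.append(gradient_map[y:y + patch_size, x:x + patch_size].mean())
    weights = np.array(weights) + 1e-8
    return float(np.dot(patch_scores, weights) / weights.sum())
```

With a uniform saliency map every patch simply inherits the image score, and with a uniform gradient map the pooling reduces to a plain average; non-uniform maps shift labels and weights toward salient, high-gradient regions.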
