Abstract

Faithful measurement of perceptual quality is of great importance to various multimedia applications. By fully utilizing reference images, full-reference image quality assessment (FR-IQA) methods usually achieve better prediction performance. No-reference image quality assessment (NR-IQA), also known as blind image quality assessment (BIQA), on the other hand, has no access to a reference image, which makes it a challenging but important task. Previous NR-IQA methods have focused on spatial measures at the expense of information in the available frequency bands. In this paper, we present a multiscale deep blind image quality assessment method (BIQA, M.D.) with spatial optimal-scale filtering analysis. Motivated by the multi-channel behavior of the human visual system and the contrast sensitivity function, we decompose an image into a number of spatial frequency bands by multiscale filtering and extract features that map the image to its subjective quality score with a convolutional neural network. Experimental results show that BIQA, M.D. compares well with existing NR-IQA methods and generalizes well across datasets.
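
The following is a minimal sketch, not the authors' architecture, of the pipeline the abstract describes: decompose an image into spatial frequency bands with a Laplacian-pyramid-style multiscale filter bank, then map the bands to a scalar quality score with a small CNN regressor. All names and parameters (`multiscale_bands`, `QualityRegressor`, `num_bands`, layer widths) are hypothetical placeholders, not identifiers from the paper.

```python
# Hypothetical sketch of a multiscale-filtering + CNN quality regressor.
# The band decomposition and network sizes are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def multiscale_bands(img: torch.Tensor, num_bands: int = 4) -> list[torch.Tensor]:
    """Split an image (N, C, H, W) into band-pass components of increasing
    coarseness by repeatedly low-pass filtering and taking differences,
    similar to a Laplacian pyramid."""
    bands = []
    current = img
    for _ in range(num_bands - 1):
        # Low-pass: downsample, then upsample back to the original resolution.
        low = F.interpolate(
            F.avg_pool2d(current, kernel_size=2),
            size=current.shape[-2:], mode="bilinear", align_corners=False,
        )
        bands.append(current - low)   # band-pass (high-frequency residual)
        current = low
    bands.append(current)             # remaining low-frequency band
    return bands


class QualityRegressor(nn.Module):
    """Small CNN that pools features from each frequency band and regresses
    a single subjective quality score."""

    def __init__(self, num_bands: int = 4, channels: int = 3):
        super().__init__()
        self.num_bands = num_bands
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64 * num_bands, 1)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        feats = [self.backbone(b).flatten(1)
                 for b in multiscale_bands(img, self.num_bands)]
        return self.head(torch.cat(feats, dim=1)).squeeze(1)


if __name__ == "__main__":
    model = QualityRegressor()
    scores = model(torch.rand(2, 3, 224, 224))  # predicted quality per image
    print(scores.shape)  # torch.Size([2])
```

In a training setup of this kind, the predicted scores would typically be regressed against subjective quality ratings (e.g. MOS values) with an L1 or L2 loss; the paper's specific training objective and filter design are not reproduced here.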
