Abstract

Faithful measurement of perceptual quality is of significant importance to various multimedia applications. By fully utilizing reference images, full-reference image quality assessment (FR-IQA) methods usually achieve better prediction performance. No-reference image quality assessment (NR-IQA), also known as blind image quality assessment (BIQA), does not have access to the reference image, which makes it a challenging but important task. Previous NR-IQA methods have focused on spatial measures at the expense of information in the available frequency bands. In this paper, we present a multiscale deep blind image quality assessment method (BIQA, M.D.) with spatial optimal-scale filtering analysis. Motivated by the multi-channel behavior of the human visual system and the contrast sensitivity function, we decompose an image into a number of spatial frequency bands by multiscale filtering and extract features that map the image to its subjective quality score with a convolutional neural network. Experimental results show that BIQA, M.D. compares well with existing NR-IQA methods and generalizes well across datasets.
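A minimal sketch of the kind of pipeline the abstract describes is given below: a multiscale decomposition of the image into spatial frequency bands followed by a CNN that regresses a scalar quality score. The difference-of-Gaussians filter bank, the number of bands, and the network shape are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: band decomposition + small CNN quality regressor.
import numpy as np
from scipy.ndimage import gaussian_filter
import torch
import torch.nn as nn

def frequency_bands(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Split a grayscale image into spatial frequency bands via differences
    of Gaussian-blurred copies (an assumed filter bank, not the paper's)."""
    blurred = [img] + [gaussian_filter(img, s) for s in sigmas]
    bands = [blurred[i] - blurred[i + 1] for i in range(len(sigmas))]
    bands.append(blurred[-1])          # residual low-pass band
    return np.stack(bands, axis=0)     # shape: (n_bands, H, W)

class BandCNN(nn.Module):
    """Minimal CNN mapping the stacked bands to one subjective-quality score."""
    def __init__(self, n_bands=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)   # regress the quality score (e.g. a MOS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: score one random 256x256 grayscale image.
img = np.random.rand(256, 256).astype(np.float32)
bands = torch.from_numpy(frequency_bands(img)).unsqueeze(0)  # (1, 5, 256, 256)
score = BandCNN(n_bands=bands.shape[1])(bands)
print(score.item())
```

In practice such a regressor would be trained against subjective scores from IQA datasets; the sketch only shows the data flow from frequency-band decomposition to a predicted score.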
