Abstract

In this paper, we tackle no-reference image quality assessment (NR-IQA), which aims to predict the perceptual quality of a distorted image without referencing its pristine-quality counterpart. Inspired by the free-energy principle, we assume that, while perceiving a distorted image, the human visual system (HVS) tends to predict the pristine image and then estimates the perceptual quality based on the distorted-restored pair. Furthermore, the perceptual quality depends heavily on how human beings attend to distorted images, namely, the cooperation of foveal vision and the eye-movement mechanism. Inspired by these properties of the HVS, given the distorted-restored pair, we implement an attention-driven NR-IQA method with reinforcement learning (RL). The model learns a policy to attend to several regions in parallel. The observations of the fixation regions are aggregated by weighted averaging, inspired by the robust-averaging strategy. For policy learning, the rewards are derived from two tasks: distortion-type classification and perceptual score estimation. The goal of policy learning is to maximize the expectation of the accumulated rewards. Extensive experiments on LIVE, TID2008, TID2013, and CSIQ demonstrate the superiority of our method.
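The weighted-average aggregation of fixation-region observations mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of softmax-normalized attention scores as the weights, and the plain-list feature vectors are all assumptions made for clarity.

```python
import math

def aggregate_observations(features, scores):
    """Weighted-average aggregation of per-region observations
    (illustrative sketch of a robust-averaging-style pooling step).

    features: list of K feature vectors (each a list of D floats),
              one per attended fixation region.
    scores:   list of K attention scores (assumed here to come from
              the learned policy; the exact source is hypothetical).
    Returns one D-dimensional vector, a convex combination of the inputs.
    """
    # Softmax-normalize the scores into weights (shift by max for stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted average over regions, dimension by dimension.
    dim = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features))
            for d in range(dim)]

# Example: three fixation regions with 4-dimensional observations.
feats = [[1.0, 0.0, 2.0, 1.0],
         [0.0, 1.0, 1.0, 3.0],
         [2.0, 2.0, 0.0, 0.5]]
agg = aggregate_observations(feats, [0.2, 1.5, -0.3])
```

Because the weights sum to one, the aggregated vector always lies inside the convex hull of the region observations, which dampens the influence of any single outlier fixation.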

