Abstract

When the human visual system (HVS) looks at a scene, it extracts various features from the image to understand the scene. The extracted features are compared with stored memories of analogous scenes to judge their similarity [1]. By analyzing this similarity, the HVS understands the scene presented to the eyes. Based on this neurobiological basis, we propose a 2D full-reference (FR) image quality assessment (IQA) method, named mean and deviation of deep and local similarity (MaD-DLS), which compares the similarity between corresponding original and distorted deep feature maps from convolutional neural networks (CNNs). Although MaD-DLS relies on a deep learning model, it requires no training because it uses only the convolutional layers of a pre-trained network. To pool the local quality scores within a deep similarity map, we employ two important descriptive statistics, the (weighted) mean and the standard deviation, and name this scheme mean and deviation (MaD) pooling. Each statistic has a physical meaning: the weighted mean reflects the effect of visual saliency on quality, whereas the standard deviation reflects the effect of the distortion distribution within the image. Experimental results show that MaD-DLS is superior or competitive to existing methods and that MaD pooling is effective. The MATLAB source code of MaD-DLS will be available online soon.
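As a rough illustration of the pooling idea described above, the sketch below shows what MaD pooling over a single local similarity map could look like. The function name, its arguments, and in particular the way the two statistics are combined into a final score are assumptions for illustration only; the abstract does not specify the combination rule, and the authors' MATLAB implementation may differ.

```python
import numpy as np

def mad_pooling(similarity_map, saliency_map=None):
    """Illustrative sketch of MaD pooling: reduce a 2-D map of local
    similarity scores to one scalar via a (weighted) mean and a
    standard deviation. Names and the final combination are assumed,
    not taken from the paper.
    """
    s = similarity_map.ravel()
    # Uniform weights when no saliency map is supplied.
    w = np.ones_like(s) if saliency_map is None else saliency_map.ravel()

    # Weighted mean: emphasizes quality at visually salient locations.
    mu = np.sum(w * s) / np.sum(w)

    # Standard deviation: captures how unevenly the distortion is
    # distributed across the image.
    sigma = np.std(s)

    # Placeholder combination: penalize uneven distortion. The actual
    # MaD-DLS formula is not given in the abstract.
    return mu - sigma

# Example: pool a random similarity map with uniform saliency.
sim = np.random.rand(32, 32)
print(mad_pooling(sim))
```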
