Abstract

Deep learning methods have achieved state-of-the-art performance in image quality assessment (IQA), and many efforts have been made to design convolutional neural networks (CNNs) for IQA. However, current CNN-based methods suffer from the following shortcomings. (1) They usually segment an image into patches of the same size for data augmentation, which cannot preserve both the local and the global spatial information of images whose resolutions differ widely. (2) Most of them feed all patches of an image into the CNN and assign every patch the same quality score as the whole image, ignoring that the human visual system (HVS) attends differently to different regions of an image. (3) They assign equal weight to all network channels, ignoring that channels differ both in how strongly they correlate with perceived quality and in how much key information of an image patch they carry. We therefore propose a multi-scale CNN model assisted by visual saliency, which we call MS-SECNN. We fuse two single-scale CNNs into a multi-scale CNN through a fully connected layer, and a Squeeze-and-Excitation (SE) module is embedded into each single-scale CNN to assign a corresponding weight to each channel. Moreover, we select the patches fed into the multi-scale CNN according to a visual saliency map. Experimental results validate the effectiveness of the proposed method compared to typical full-reference (FR) IQA methods and state-of-the-art no-reference (NR) IQA methods. Our code and two self-built datasets are publicly available.¹
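The channel-reweighting step mentioned above is the standard Squeeze-and-Excitation block of Hu et al. Below is a minimal PyTorch sketch of such a block; the `SEBlock` name, the channel count, and the reduction ratio `r = 16` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal Squeeze-and-Excitation (SE) block sketch in PyTorch.
# The reduction ratio and channel count are assumptions for illustration.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)           # global average pool per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # bottleneck FC layer
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # restore channel dimension
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)        # squeeze: (B, C, H, W) -> (B, C)
        w = self.excite(w).view(b, c, 1, 1)   # excitation: learned channel weights
        return x * w                          # reweight each feature channel


# Usage: reweight a batch of 32-channel feature maps
features = torch.randn(4, 32, 56, 56)
out = SEBlock(channels=32)(features)
assert out.shape == features.shape
```

The saliency-guided patch selection can likewise be sketched. The snippet below assumes a precomputed saliency map and a simple top-k rule on mean patch saliency; the function name `select_salient_patches`, the patch size, and `keep_ratio` are hypothetical, since the abstract does not specify the paper's exact selection criterion.

```python
# Hypothetical saliency-guided patch selection: split an image into
# non-overlapping patches and keep those whose mean saliency ranks in the
# top `keep_ratio` fraction. The selection rule is an assumption.
import torch


def select_salient_patches(image: torch.Tensor,
                           saliency: torch.Tensor,
                           patch_size: int = 32,
                           keep_ratio: float = 0.5) -> torch.Tensor:
    c, h, w = image.shape
    ph, pw = h // patch_size, w // patch_size
    # Tile the image: (C, H, W) -> (ph * pw, C, P, P)
    patches = (image[:, :ph * patch_size, :pw * patch_size]
               .unfold(1, patch_size, patch_size)
               .unfold(2, patch_size, patch_size)
               .permute(1, 2, 0, 3, 4)
               .reshape(ph * pw, c, patch_size, patch_size))
    # Mean saliency per patch: (H, W) -> (ph * pw,)
    sal = (saliency[:ph * patch_size, :pw * patch_size]
           .unfold(0, patch_size, patch_size)
           .unfold(1, patch_size, patch_size)
           .reshape(ph * pw, -1)
           .mean(dim=1))
    k = max(1, int(keep_ratio * ph * pw))
    idx = sal.topk(k).indices             # indices of the most salient patches
    return patches[idx]


image = torch.rand(3, 256, 256)
saliency = torch.rand(256, 256)           # stand-in for a real saliency model's output
selected = select_salient_patches(image, saliency)
print(selected.shape)                     # (k, 3, 32, 32)
```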
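In practice the saliency map would come from a saliency-detection model rather than random values; feeding only the selected patches to the multi-scale CNN is what ties the patch-level quality scores to the regions the HVS actually attends to.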
