Image quality assessment has become increasingly important for monitoring image quality and assuring the reliability of image processing systems. Most existing no-reference image quality assessment methods mainly exploit the global information of an image while ignoring vital local information. In fact, the introduced distortion often manifests as slight differences in detail between the distorted image and the non-distorted reference image. In light of this, we propose a no-reference image quality assessment method based on a multi-scale convolutional neural network, which integrates both the global and local information of an image. We first adopt the image pyramid method to generate the four scale images required as network input, and then provide two network models, each using a different fusion strategy, to evaluate image quality. To better adapt to quality assessment of the entire image, we use two different loss functions in the training and validation phases. The superiority of the proposed method is verified by several experiments on the LIVE and TID2008 datasets.
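As a rough illustration of the multi-scale input stage described above, the sketch below builds a four-level image pyramid. The abstract fixes only the number of scales (four); the downsampling factor and the use of OpenCV's Gaussian pyramid (`cv2.pyrDown`) are assumptions for illustration, not details confirmed by the paper.

```python
import cv2
import numpy as np


def build_four_scale_pyramid(image: np.ndarray) -> list[np.ndarray]:
    """Return the original image plus three successively downsampled copies.

    Assumption: the paper does not specify the downsampling scheme, so
    Gaussian-pyramid halving (cv2.pyrDown) is used here as a stand-in.
    """
    scales = [image]
    for _ in range(3):
        # pyrDown applies a Gaussian blur and halves each spatial dimension.
        scales.append(cv2.pyrDown(scales[-1]))
    return scales


if __name__ == "__main__":
    # Dummy grayscale image standing in for a distorted input.
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    for i, s in enumerate(build_four_scale_pyramid(img)):
        print(f"scale {i}: shape {s.shape}")
```

In such a setup, the full-resolution level carries the local detail the abstract emphasizes, while the coarser levels summarize global structure; each level would then be fed to the corresponding branch of the multi-scale network.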