Abstract

Image Quality Assessment (IQA), which aims to provide computational models for automatically predicting perceptual image quality, is an important computer vision task with many applications. In recent years, a variety of IQA methods have been proposed based on different metric designs, measuring the quality of images affected by various types of distortion. However, the rapid development of Generative Adversarial Networks (GANs) has brought a new challenge to the IQA community. In particular, GAN-based Image Reconstruction (IR) methods overfit traditional PSNR-based IQA metrics by generating images with sharper edges and texture-like noise, producing outputs that resemble the reference image in appearance but lack fine details. In this paper, we propose a bilateral-branch multi-scale image quality estimation network, named the IQMA network. The two branches adopt a Feature Pyramid Network (FPN)-like architecture and extract multi-scale features from patches of the reference image and the corresponding patches of the distorted image separately. Features of the same scale from both branches are then fed into several scale-specific feature fusion modules, each of which performs feature fusion followed by a newly designed pooling operation. Several score regression modules then learn a quality score for each scale, and the per-scale scores are finally fused into the quality score of the image. The IQMA network achieved 1st place on the NTIRE 21 IQA public leaderboard and 2nd place on the NTIRE 21 IQA private leaderboard, and consistently outperforms existing state-of-the-art (SOTA) methods on LIVE and TID2013.
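The pipeline described above (two branches of multi-scale features, scale-specific fusion, per-scale regression, and score fusion) can be sketched as follows. This is a minimal NumPy illustration of the data flow only: the feature extractor, fusion rule, and random linear "regression heads" are stand-ins of our own choosing, not the actual IQMA modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_multiscale(patch, num_scales=3):
    """Stand-in for an FPN-like branch: one feature map per scale,
    produced here by repeated 2x2 average pooling (an assumption)."""
    feats, x = [], patch
    for _ in range(num_scales):
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        feats.append(x)
    return feats

def fuse(ref_feat, dist_feat):
    """Hypothetical scale-specific fusion: concatenate both features
    and their difference into a single vector."""
    return np.concatenate([ref_feat.ravel(),
                           dist_feat.ravel(),
                           (ref_feat - dist_feat).ravel()])

def score_image(ref_patch, dist_patch, num_scales=3):
    """Fuse same-scale features, regress a score per scale,
    then fuse the per-scale scores into one image score."""
    ref_feats = extract_multiscale(ref_patch, num_scales)
    dist_feats = extract_multiscale(dist_patch, num_scales)
    scale_scores = []
    for rf, df in zip(ref_feats, dist_feats):
        fused = fuse(rf, df)
        # Stand-in for a learned score regression module: a fixed
        # random linear head per scale.
        w = rng.standard_normal(fused.shape[0]) / fused.shape[0]
        scale_scores.append(float(fused @ w))
    # Final quality score: mean of the per-scale scores (assumed fusion).
    return float(np.mean(scale_scores))

ref = rng.standard_normal((32, 32))
dist = ref + 0.1 * rng.standard_normal((32, 32))
print(score_image(ref, dist))
```

In the real network each stand-in above is a learned module, and scores are averaged over many patch pairs per image; the sketch only shows how reference and distorted features travel through the same scales before being fused.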
