Abstract

Since human observers are the ultimate receivers of an image, most image quality assessment (IQA) methods are based on analysis of the properties and mechanisms of the human visual system. However, due to the lack of undistorted images for reference, the accuracy of no-reference IQA (NR-IQA) cannot compete with that of full-reference IQA (FR-IQA). To bridge the performance gap between FR-IQA and NR-IQA methods, we propose an NR-IQA method based on a multi-task generative adversarial network, which attempts to restore dependable hallucinated images to compensate for the missing reference images. The generator outputs two tasks, hallucinated images and quality maps, which are combined with a specific loss to improve the reliability of the hallucinated images. In addition, two discriminator networks are used to distinguish, respectively, pairs of undistorted and hallucinated images and pairs of quality maps and structural similarity index measurement (SSIM) maps. Finally, the hallucinated and distorted images are input into the IQA network, and quality scores are evaluated based on the differences between them. The superiority of the proposed method is verified by several experiments on the LIVE, TID2008, and TID2013 datasets.
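
The sketch below illustrates, in PyTorch, the wiring the abstract describes: a multi-task generator producing a hallucinated image and a quality map, two discriminators judging the image pair and the map pair, and an IQA network scoring the distorted image against its hallucinated reference. All module names (MultiTaskGenerator, PatchDiscriminator, IQARegressor), layer sizes, and the exact pairing of discriminator inputs are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the architecture described in the abstract; every design
# detail below is an assumption made for illustration.
import torch
import torch.nn as nn


class MultiTaskGenerator(nn.Module):
    """Shared encoder with two decoder heads: a hallucinated (restored) image
    and a per-pixel quality map."""

    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

        def head(out_ch):
            return nn.Sequential(
                nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(ch, out_ch, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        self.hallucination_head = head(3)  # restored RGB image
        self.quality_map_head = head(1)    # predicted quality (SSIM-like) map

    def forward(self, distorted):
        feat = self.encoder(distorted)
        return self.hallucination_head(feat), self.quality_map_head(feat)


class PatchDiscriminator(nn.Module):
    """Outputs patch-level real/fake logits for a concatenated pair of inputs."""

    def __init__(self, in_ch, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch * 2, 1, 4, stride=1, padding=1),
        )

    def forward(self, pair):
        return self.net(pair)


class IQARegressor(nn.Module):
    """Predicts a scalar quality score from the distorted image and its hallucinated reference."""

    def __init__(self, ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(ch, 1)

    def forward(self, distorted, hallucinated):
        x = torch.cat([distorted, hallucinated], dim=1)  # difference information is learned implicitly
        return self.score(self.features(x).flatten(1))


if __name__ == "__main__":
    distorted = torch.rand(2, 3, 64, 64)   # placeholder batch of distorted images
    reference = torch.rand(2, 3, 64, 64)   # placeholder undistorted references
    ssim_map = torch.rand(2, 1, 64, 64)    # placeholder SSIM maps of the distorted images

    generator = MultiTaskGenerator()
    hallucinated, quality_map = generator(distorted)

    d_image = PatchDiscriminator(in_ch=6)  # judges (undistorted, hallucinated) image pairs
    d_map = PatchDiscriminator(in_ch=2)    # judges (SSIM map, quality map) pairs
    image_logits = d_image(torch.cat([reference, hallucinated], dim=1))
    map_logits = d_map(torch.cat([ssim_map, quality_map], dim=1))

    score = IQARegressor()(distorted, hallucinated)
    print(hallucinated.shape, quality_map.shape, score.shape)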

Highlights

  • Image quality assessment (IQA) is a fundamental task in computer vision and plays an important role in process evaluation, image encoding, and monitoring

  • The main advantage of full-reference IQA (FR-IQA) is that it can quantify visual sensitivity based on the difference between the distorted and reference images, which enables it to model the behavior of the human visual system (HVS) effectively (a brief SSIM example follows this list)

  • To bridge the performance gap between the FR-IQA and no-reference IQA (NR-IQA) methods, we propose an NR-IQA method based on a multi-task generative adversarial network (GAN), which attempts to restore dependable hallucinated images to compensate for the missing corresponding undistorted images
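
The short example below (not from the paper) makes the FR-IQA idea in the second highlight concrete: when the undistorted reference is available, a per-pixel similarity map and a scalar score follow directly from the difference between the two images. It uses scikit-image's structural_similarity on synthetic data; the images and noise level are placeholders.

# Illustration of reference-based scoring with SSIM; the synthetic images are placeholders.
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                                              # undistorted grayscale image in [0, 1]
distorted = np.clip(reference + 0.05 * rng.standard_normal((128, 128)), 0, 1)   # noisy version

# full=True returns the mean SSIM score and the per-pixel SSIM map; a map of this
# kind is what the generator's quality-map branch is trained to approximate.
score, ssim_map = structural_similarity(reference, distorted, data_range=1.0, full=True)
print(f"mean SSIM: {score:.3f}, map shape: {ssim_map.shape}")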

Introduction

Image quality assessment (IQA) is a fundamental task in computer vision and plays an important role in process evaluation, image encoding, and monitoring, so it is crucial to develop an effective assessment method. Full-reference IQA (FR-IQA) algorithms use all the information in an undistorted reference image to evaluate image quality scores, whereas no-reference IQA (NR-IQA) algorithms evaluate image quality without using any information from an undistorted image. The main advantage of FR-IQA is that it can quantify visual sensitivity based on the difference between the distorted and reference images, which enables it to model the behavior of the human visual system (HVS) effectively. Due to the lack of information on a reference image, most existing NR-IQA methods mainly try to extract features that can express the HVS process from the statistical char-
