Abstract

Blind image quality assessment (BIQA) has made significant progress, but it remains a challenging problem due to the wide variation in image content and the diverse nature of distortions. To address these challenges and improve the adaptability of BIQA algorithms to different image contents and distortions, we propose a novel model that incorporates multiperspective consistency. Our approach introduces a multiperspective strategy to extract features from various viewpoints, enabling us to capture more beneficial cues from the image content. To map the extracted features to a scalar score, we employ a content-aware hypernetwork architecture. Additionally, we integrate all perspectives by introducing a consistency supervision strategy, which leverages cues from each perspective and enforces a learning consistency constraint between them. To evaluate the effectiveness of our proposed approach, we conducted extensive experiments on five representative datasets. The results demonstrate that our method outperforms state-of-the-art techniques on both authentic and synthetic distortion image databases. Furthermore, our approach exhibits excellent generalization ability. The source code is publicly available at https://github.com/gn-share/multi-perspective.
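The consistency supervision idea described above can be illustrated with a minimal sketch: each perspective predicts a scalar quality score, and the training objective combines a per-perspective regression loss with a penalty on disagreement between perspectives. All function names, the L1 loss choice, and the weighting factor here are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of multiperspective consistency supervision.
# Each "perspective" produces a quality score; the total objective adds
# a per-perspective quality loss and a consistency term penalizing
# disagreement between perspectives. Names and weights are illustrative.

from itertools import combinations


def quality_loss(pred: float, target: float) -> float:
    """L1 regression loss for a single perspective's score."""
    return abs(pred - target)


def consistency_loss(preds: list[float]) -> float:
    """Mean absolute disagreement over every pair of perspectives."""
    pairs = list(combinations(preds, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)


def total_loss(preds: list[float], target: float, lam: float = 0.5) -> float:
    """Per-perspective quality losses plus a weighted consistency penalty."""
    return sum(quality_loss(p, target) for p in preds) + lam * consistency_loss(preds)
```

For example, three perspectives that agree closely incur only a small consistency penalty, while divergent predictions are pushed toward one another even when their individual regression losses are similar.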
