Abstract

Image quality assessment (IQA) is an important problem in computer vision with many applications. We propose a transformer-based multi-task learning framework for IQA. Two subtasks, constructing an auxiliary-information error map and predicting the image quality score, are jointly optimized with a shared feature extractor. We use a Vision Transformer (ViT) as the feature extractor and guide it to focus on quality-related features through the auxiliary error-map subtask. In particular, we propose a fusion network that includes a channel attention module. Unlike the fusion schemes commonly used in previous IQA methods, this network fuses the error-map features with the image features via channel attention, which helps the model mine quality-related features for more accurate assessment. By jointly optimizing the two subtasks, the ViT concentrates on extracting image quality features and builds a more precise mapping from the feature representation to the quality score. With slight adjustments, our approach applies to both no-reference (NR) and full-reference (FR) IQA settings. We evaluate the proposed method on multiple IQA databases, showing better performance than state-of-the-art FR and NR IQA methods.
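To make the channel-attention fusion concrete, the sketch below shows one plausible reading of the fusion network in PyTorch: error-map features and image features are concatenated along the channel dimension, reweighted by a squeeze-and-excitation-style channel attention gate, and projected back. The module names, channel sizes, and the SE-style gating are assumptions for illustration; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed design)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pool per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by their learned importance


class FusionNetwork(nn.Module):
    """Fuses image features with error-map features via channel attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.attn = ChannelAttention(2 * channels)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, img_feat: torch.Tensor, err_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([img_feat, err_feat], dim=1)  # concatenate along channels
        return self.proj(self.attn(fused))  # gate channels, project back


# Example: fuse two 768-channel feature maps (e.g., ViT tokens reshaped to a grid).
img_feat = torch.randn(2, 768, 14, 14)
err_feat = torch.randn(2, 768, 14, 14)
quality_feat = FusionNetwork(768)(img_feat, err_feat)
print(quality_feat.shape)  # torch.Size([2, 768, 14, 14])
```

The fused feature map would then feed the quality-regression head, while the error-map branch is supervised by its own reconstruction loss so the two subtasks are optimized jointly.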
