Abstract

Blind image quality assessment (BIQA) is a fundamental task in computer vision. Humans can evaluate image quality from both local and global aspects without access to reference images. Inspired by this, we propose a BIQA method, DS-IQA, that mimics the human visual system (HVS). A dual-stream hybrid module is established to extract dual-stream quality-aware features: a CNN branch mimics the active inference process of the HVS to extract local quality-aware features, and an enhanced Transformer branch extracts global quality-aware features by modeling nonlocal relations among image patches. Finally, a quality evaluator based on Transformer layers maps the dual-stream features to the final quality score. The proposed approach is evaluated on five databases. On the synthetic databases LIVE, TID2013, and CSIQ, the PLCC of DS-IQA reaches 0.975, 0.938, and 0.963, respectively, and in the individual-distortion experiment on TID2013 it performs best in 8 of the 24 distortion categories. On the authentic databases LIVEC and KonIQ-10k, the PLCC of DS-IQA reaches 0.887 and 0.918, respectively. Experiments show the superiority of the proposed method over other state-of-the-art BIQA metrics.
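The dual-stream idea — local features from a convolutional branch, global features from self-attention over patches, then fusion and regression to a scalar score — can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: all shapes, weight names, and the use of a single linear projection as a stand-in for the CNN stack and a single attention head for the Transformer branch are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_branch(patches, w):
    # Stand-in for the CNN branch: a per-patch linear projection
    # producing local quality-aware features. (Hypothetical weights.)
    return patches @ w  # (n_patches, d)

def global_branch(feats, wq, wk, wv):
    # Stand-in for the Transformer branch: single-head self-attention
    # modeling nonlocal relations among patches.
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return attn @ v  # (n_patches, d)

def predict_quality(patches, params):
    local = local_branch(patches, params["w_local"])
    glob = global_branch(local, params["wq"], params["wk"], params["wv"])
    fused = np.concatenate([local, glob], axis=-1)  # dual-stream features
    # Stand-in for the Transformer-based quality evaluator:
    # pool over patches and regress to a scalar score.
    return float(fused.mean(axis=0) @ params["w_head"])

# Toy example with random weights and 64 flattened image patches.
rng = np.random.default_rng(0)
d_patch, d = 48, 16
params = {
    "w_local": rng.standard_normal((d_patch, d)) * 0.1,
    "wq": rng.standard_normal((d, d)) * 0.1,
    "wk": rng.standard_normal((d, d)) * 0.1,
    "wv": rng.standard_normal((d, d)) * 0.1,
    "w_head": rng.standard_normal(2 * d) * 0.1,
}
patches = rng.standard_normal((64, d_patch))
print(predict_quality(patches, params))
```

In a trained model the weights would be learned from mean-opinion-score labels; the sketch only shows how the two feature streams are computed and fused before regression.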
