Abstract

Light field (LF), as a new visual information representation for immersive experiences in virtual reality and augmented reality, has attracted extensive attention. It is therefore of great significance to establish light field image quality assessment (LF-IQA) methods that conform to the characteristics of human visual perception. Due to the spatial-angular intertwined imaging mechanism, quality evaluation for LF images poses greater challenges than ordinary 2D/3D IQA. This paper proposes a full-reference LF-IQA method based on feature-level contrastive learning with a divide-and-conquer learning strategy. It first employs a cross-attention mechanism to learn local contrast features, relative to the reference features, from three subsets of distorted sub-aperture images (SAIs). A Transformer-based feature fusion is then used to predict the overall perceptual quality. Evaluation experiments are carried out on three commonly used LF-IQA datasets: Win5-LID, NBU-LF1.0, and SHU. The experimental results demonstrate that our model outperforms existing state-of-the-art LF-IQA methods.
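
The sketch below illustrates the pipeline described in the abstract: cross-attention between distorted and reference SAI features for each of the three subsets, followed by Transformer-based fusion and a regression head. It is a minimal illustration assuming PyTorch; the module names, feature dimensions, and pooling choices are assumptions for clarity, not the authors' released implementation.

# Minimal sketch of the described pipeline (assumed PyTorch implementation).
# Dimensions, subset handling, and the regression head are illustrative only.
import torch
import torch.nn as nn


class CrossAttentionContrast(nn.Module):
    """Cross-attention: distorted SAI features attend to reference features
    to produce contrast features for one SAI subset."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dist_feat, ref_feat):
        # dist_feat, ref_feat: (batch, tokens, dim)
        contrast, _ = self.attn(dist_feat, ref_feat, ref_feat)
        return contrast


class LFIQASketch(nn.Module):
    """Three SAI subsets -> cross-attention contrast features ->
    Transformer fusion -> scalar quality score."""

    def __init__(self, dim=256, heads=4, layers=2, num_subsets=3):
        super().__init__()
        self.contrast_branches = nn.ModuleList(
            [CrossAttentionContrast(dim, heads) for _ in range(num_subsets)]
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)  # regress the overall quality score

    def forward(self, dist_subsets, ref_subsets):
        # dist_subsets / ref_subsets: lists of (batch, tokens, dim) features,
        # one per SAI subset (assumed already extracted by a CNN backbone).
        fused = [
            branch(d, r)
            for branch, d, r in zip(self.contrast_branches, dist_subsets, ref_subsets)
        ]
        tokens = torch.cat(fused, dim=1)          # concatenate subset tokens
        pooled = self.fusion(tokens).mean(dim=1)  # Transformer fusion + pooling
        return self.head(pooled).squeeze(-1)      # predicted quality score


if __name__ == "__main__":
    model = LFIQASketch()
    dist = [torch.randn(2, 16, 256) for _ in range(3)]
    ref = [torch.randn(2, 16, 256) for _ in range(3)]
    print(model(dist, ref).shape)  # torch.Size([2])

The divide-and-conquer idea is reflected in the per-subset branches: each subset of SAIs is compared against its reference counterpart independently before the Transformer fuses the resulting tokens into a single perceptual quality prediction.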
