Abstract

Light field (LF), as a new visual information representation for immersive experiences in virtual reality and augmented reality, has attracted extensive attention. It is therefore important to establish light field image quality assessment (LF-IQA) methods that conform to the characteristics of human visual perception. Owing to the intertwined spatial–angular imaging mechanism, quality evaluation for LF images is more challenging than ordinary 2D/3D IQA. This paper proposes a full-reference LF-IQA method based on feature-level contrastive learning with a divide-and-conquer learning strategy. It first employs a cross-attention mechanism to learn local contrast features, relative to the reference features, from three subsets of distorted sub-aperture images (SAIs). A Transformer-based feature fusion module then predicts the overall perceptual quality. Evaluation experiments are carried out on three commonly used LF-IQA datasets: Win5-LID, NBU-LF1.0, and SHU. The results demonstrate that the proposed model outperforms existing state-of-the-art LF-IQA methods.
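The cross-attention step mentioned in the abstract, in which distorted SAI features attend to reference features, can be sketched with a minimal, framework-free example. This is a generic single-head attention formulation in NumPy; the feature shapes and the `cross_attention` helper are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(distorted, reference):
    """Single-head cross-attention (toy sketch, not the paper's network).

    Queries come from distorted SAI features, keys/values from the
    reference features, so the output expresses each distorted token
    as a mixture of reference tokens -- a simple proxy for learning
    contrast features relative to the reference.
    """
    d_k = distorted.shape[-1]
    scores = distorted @ reference.T / np.sqrt(d_k)   # (n_dist, n_ref)
    weights = softmax(scores, axis=-1)                # attend over reference tokens
    return weights @ reference                        # (n_dist, d_k)

# Toy usage: 4 distorted tokens and 6 reference tokens, 8-dim features.
rng = np.random.default_rng(0)
feats_dist = rng.random((4, 8))
feats_ref = rng.random((6, 8))
attended = cross_attention(feats_dist, feats_ref)     # shape (4, 8)
```

In the full method, such cross-attended features would be computed per SAI subset and then fused (per the abstract, by a Transformer) into a single quality score; learned query/key/value projections are omitted here for brevity.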
