Abstract

Light Field (LF) cameras capture angular and spatial information and, consequently, require large amounts of memory and bandwidth. To reduce these requirements, LF contents generally undergo compression and transmission protocols. Since these techniques may introduce distortions, the design of Light-Field Image Quality Assessment (LFI-IQA) methods is important for monitoring the quality of LF image (LFI) content at the user side. The majority of existing LFI-IQA methods work in the spatial domain, where it is more difficult to analyze changes in the spatial and angular domains. In this work, we present a novel No-Reference (NR) LFI-IQA method, which is based on a Deep Neural Network that uses Frequency-domain inputs (DNNF-LFIQA). The proposed method predicts the quality of an LF image by taking as input the Fourier magnitude spectrum of LF contents, represented as horizontal and vertical Epipolar Plane Images (EPIs). Specifically, DNNF-LFIQA is composed of two processing streams (stream1 and stream2) that take as inputs the horizontal and the vertical epipolar plane images in the frequency domain. Both streams are composed of identical blocks of convolutional neural networks (CNNs), with their outputs being combined using two fusion blocks. Finally, the fused feature vector is fed to a regression block to generate the quality prediction. Results show that the proposed method is fast, robust, and accurate.
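The input representation described above, EPI slices of the light field followed by a Fourier magnitude spectrum, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 4D axis layout (u, v, s, t), the slice indices, and the log scaling of the spectrum are all assumptions introduced here for clarity.

```python
import numpy as np

def horizontal_epi(lf, v, t):
    # lf: 4D light field with axes (u, v, s, t) -- angular (u, v) and
    # spatial (s, t). This layout is an assumption; conventions vary.
    # A horizontal EPI fixes the vertical angular index v and a spatial
    # row t, yielding a 2D slice over (u, s).
    return lf[:, v, :, t]

def vertical_epi(lf, u, s):
    # A vertical EPI fixes the horizontal angular index u and a spatial
    # column s, yielding a 2D slice over (v, t).
    return lf[u, :, s, :]

def fft_magnitude(epi):
    # Centered 2D Fourier magnitude spectrum of an EPI. The log1p
    # compression is an assumption (common practice to tame the
    # dynamic range before feeding a CNN).
    spectrum = np.fft.fftshift(np.fft.fft2(epi))
    return np.log1p(np.abs(spectrum))

# Toy light field: 7x7 angular views of 32x48-pixel images.
lf = np.random.rand(7, 7, 32, 48)
h_input = fft_magnitude(horizontal_epi(lf, v=3, t=10))  # fed to stream1
v_input = fft_magnitude(vertical_epi(lf, u=3, s=16))    # fed to stream2
```

In the full method, each such magnitude spectrum would go to one of the two CNN streams before fusion and regression; the slicing and spectrum computation shown here only cover the input stage.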
