Abstract

Light Field (LF) cameras capture angular and spatial information simultaneously, making them suitable for a wide range of applications such as refocusing, disparity estimation, and virtual reality. However, the limited spatial resolution of LF images hinders their applicability. To address this issue, we propose an end-to-end learning-based light field super-resolution (LFSR) model called MFSR, which integrates multiple features: spatial, angular, epipolar plane image (EPI), and global features. These features are extracted separately from the LF image and then fused iteratively by the Feature Extract Block (FE Block) into a comprehensive feature. A gradient loss term is added to the loss function so that MFSR performs well on LF images with rich texture. Experimental results on synthetic and real-world datasets demonstrate that the proposed method outperforms other state-of-the-art methods, with average peak signal-to-noise ratio (PSNR) improvements of 0.208 dB and 0.274 dB for the 2× and 4× super-resolution tasks, respectively, and an average structural similarity (SSIM) improvement of 0.01 for both tasks.
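The abstract mentions a gradient loss for preserving rich texture. The paper's exact formulation is not given here; a common form penalizes the L1 distance between the finite-difference gradients of the super-resolved and ground-truth images. The sketch below is one plausible implementation under that assumption, not the authors' definition:

```python
import numpy as np

def gradient_loss(sr, hr):
    """L1 distance between image gradients of SR output and HR ground truth.

    sr, hr: 2-D numpy arrays of shape (H, W).
    Assumption: gradients are taken as first-order finite differences
    along each axis; the paper may use a different operator (e.g. Sobel).
    """
    # Horizontal and vertical finite-difference gradients.
    sr_dx, sr_dy = np.diff(sr, axis=1), np.diff(sr, axis=0)
    hr_dx, hr_dy = np.diff(hr, axis=1), np.diff(hr, axis=0)
    # Mean absolute difference between corresponding gradient maps.
    return float(np.mean(np.abs(sr_dx - hr_dx)) + np.mean(np.abs(sr_dy - hr_dy)))
```

In training, such a term is typically weighted and added to a pixel-wise reconstruction loss (e.g. total = L1 + λ · gradient_loss), encouraging sharp edges in textured regions.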
